Monday, March 10, 2014

HP HAVEn CTO Mundada on new ways for businesses to gain transformation from big data and new wave analysis

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

Big data capabilities and advanced business analytics have now become essential to nearly any business development activity.

The benefits that enterprises can reap once they get their hands around big-data analytics and apply it to business challenges are quickly being documented -- and they come as big new profits and major market advantages. Industries around the world are rapidly pursuing transformational projects that use big data to gain competitive advantage.

As part of the next edition of the HP Big Data Podcast Series, BriefingsDirect sat down with two HP executives to learn how these advanced analytics seekers can best accomplish their goals. The insights gleaned include how companies worldwide are best capturing myriad forms of knowledge, gaining ever deeper analysis, and rapidly and securely making those insights available to more people on their own terms.

So join this executive-level discussion highlighting how the latest version of HP HAVEn produces new business analytics value and strategic return with Girish Mundada, Chief Technology Officer for HP HAVEn, and Dan Wood, Worldwide Solution Marketing Lead for Big Data at HP Software. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Gardner: We're in a fascinating time because analytics and big data are now top of mind. What was once relegated to a fairly small group of data scientists and analysts using reporting tools -- and I am thinking about business intelligence (BI) -- has really now become a comprehensive capability that's proving essential to nearly any business strategy.

What’s behind this eagerness to gain big-data capabilities and exploit analytics so broadly?

Wood: We're starting to see some very clear quantification of the value and the benefits of big data. It’s fair to say that big data is probably the hottest topic in the industry.

There’s a lot of talk across all forms of media about big data right now, but what’s happened is that credible publications like the "Harvard Business Review," for example, have started to put solid numbers around the benefits that enterprises can get if they can get their hands around big-data analytics and apply it to business challenges.

For example, Harvard Business Review is saying that, on average, data-driven organizations will be five percent more productive and six percent more profitable than their competitors.

Worth chasing after

Think about that. A distinct six-percent increase in profitability would double the stock price for a lot of organizations. So there really is a prize worth chasing after.

What we’re seeing, Dana, is much more widespread interest across the organization and not just within IT. We’re seeing line-of-business leaders understanding and, in many organizations, actually starting to benefit from big-data analytics.

They’re able to analyze the call logs in a call center, better understand the clickstreams on a website, and better understand how customers are using products. All of these are ways of analyzing large amounts of data and directly tying it to specific line-of-business problems.

That’s where we are right now. Industries around the world are going through transformational projects using big data to gain competitive advantage.

Gardner: It’s interesting too, Dan, that they’re not just taking these as individual data sets and handling them individually, but increasingly businesses are combining them, and finding new relationships, and doing things that they really couldn't have done before.

Wood: Absolutely. It's the idea of a 360-degree view of their internal operations, or of their external customer trends and needs -- and it comes from combining data sets.

For example, they’re combining social media analytics on customers with the call logs into the call center, with internal systems of record around the customer relationship management (CRM) and ongoing customer transactions. It’s by combining all those insights that the real big-data opportunity reveals itself.

Gardner: And the sources for those insights and data, of course, span almost any type of information asset. It's not just structured data or data that your standard applications are built around -- it's getting all the data, all of the time.

Wood: That’s right. In some ways, this industry label of big data is perhaps not the most helpful, because it’s not just the volume of data that is the challenge and the opportunity for the business. It’s the variety of sources, as you’ve alluded to, and also the velocity at which that data is moving.

The business needs to get hold of these multiple sources of data and immediately be able to apply the analytics, get the insights, and make the business decisions. This is why the vast majority of the data that's available to an enterprise still remains dark.

Unused and unexploited

It’s unused and unexploited. Organizations, with their traditional analytics systems, are struggling to get the meaning and insights from all these data types that we mentioned. These include unstructured information, such as social media sentiment, voice recordings, potentially even video recordings, and the structured and semi-structured things like log files and data center data. For many organizations, getting the information quickly enough out of their CRM and enterprise resource planning (ERP) systems is a challenge as well.

Gardner: So we see that there’s a great desire to do this, and there are great returns on being able to do this well. We talked about some of the general challenges. What specifically is holding people up?

Is this an issue of cost, complexity, or skills? Why aren’t companies able to move beyond this small fraction of the available information to which they could be applying such important insight and analytics?

Wood: It's a complexity and a skills challenge, as you mentioned. The systems they have today, Dana, typically aren't set up to be able to analyze these vast amounts of unstructured information, or to analyze the structured data at the speed the organization needs.

Think about the need to analyze immediately a clickstream from an online shopping application or a pay-to-use application that an organization has. That is, a rapid-scale analysis of a large amount of structured data. Typically, the analytic systems that organizations have had aren’t able to cope with that or with the unstructured human information.

This is why HP has created the HAVEn Big Data Platform, and Girish will talk in more detail about this, and how it brings together the analytics engine needed to address these issues.

Just as importantly, there's the ecosystem around HAVEn, which includes HP experts and services, and services from partners, to bring together the skills needed to turn this data collection into useful information.

And there are data-scientist skills as well -- skills around understanding the right questions the line of business needs to be asking, and understanding how to visualize and represent the data.

Gardner: What were the guiding principles that you were thinking of when HAVEn was being put together?

Talking to customers

Mundada: HAVEn came together not by creating it in a dark room somewhere in the back office. It came together by talking to customers. On a regular basis, I meet with some of HP's largest customers worldwide, getting input from them. And they're telling us what their current problems are.

Let me see if I can describe the landscape in a typical organization, and we can go from there. You'll see why we created HAVEn.

Let's visualize four different waves of data. Back in the early '60s, '70s, and even part of the '80s, mainframes were the primary way to process data, and we used them for operationalizing certain parts of data processing, where the data was extremely high-value. If you look at the cost of those systems, it was phenomenal.

Then came the next wave in the ‘80s, where we went into what I call client-server computing, and we already know several companies that were created in this space.

I've lived in Silicon Valley for almost 30 years now, and a whole bunch of new companies were born in this space. I worked for a company, Postgres, which became Illustra, then Informix, and then part of IBM. If you look at that entire wave of OLTP technologies, we created data-processing technologies designed to solve basic business problems.

Application software was created: CRM, supplier relationship management (SRM), you name it. Many companies that did consulting around that were created, too. That was the second wave, after the mainframe.

Then came the third wave, where we took the data from all these transactional systems and brought it together to do some basic analysis, which we now call business analytics, to find out "who is my most profitable customer, what are they buying, why are they buying," and things of that nature.

We created companies for that wave, too, and many technologies. Exadata, Teradata, Netezza, and a whole bunch of companies and applications were born in that space. That wave lasted for quite a while.

What we're seeing now is that from 2003 onward, something very fundamental has happened. At least, that’s the way I've been seeing this. If you look at the three Vs that Dan has described -- volume, velocity, and variety -- we’re talking about volumes that are growing exponentially. In the past, they were growing linearly. That creates a very different kind of requirement.

More importantly, if you look at the variety that Dan mentioned, that’s really the key driver in my mind. People are now routinely bringing in machine data, human data, and your traditional structured warehouses -- all of them together.

If you visualize a bar graph, you would see that 10 percent of the data that we now can monetize is coming from traditional sources, whereas 90 percent of the data that we need to monetize is now sitting in machine data and human data.

High velocity analytics

What we're trying to do with HAVEn is create a combined platform, where you can combine these three different data types and do very high-velocity analytics.

As a simple example, look at Apache Web Server logs. Historically, that data was used by the security people to see if anybody was breaking in, and by operations people to see whether machines were overloaded.

More importantly, the digital marketing guys now want to look at that data to see who's coming to their website, what they're buying, what they're not buying, why they're buying, and which geographies they're coming from. Then, they want to combine all these data sets with their existing structured data to make sense out of it.
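
As a hedged sketch of the cross-source analysis Mundada describes -- the tables and columns here are invented for illustration, not an HP schema -- combining clickstream events parsed from web-server logs with structured CRM data might look like this in a SQL analytics engine such as Vertica:

```sql
-- Hypothetical sketch: join clickstream events parsed from Apache access
-- logs with CRM records to see which regions browse but rarely buy.
SELECT c.region,
       COUNT(DISTINCT w.session_id) AS sessions,
       SUM(CASE WHEN w.url LIKE '%/checkout%' THEN 1 ELSE 0 END) AS checkouts
FROM   web_log_events w   -- parsed Apache access-log entries
JOIN   crm_customers  c   -- structured system-of-record data
  ON   c.customer_id = w.customer_id
GROUP  BY c.region
ORDER  BY sessions DESC;
```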

Today, it's a mess in the market. When we talk to our partners and customers, they’re saying that they have point solutions for each of these things, and if you want to combine that data, it’s really hard. That’s why we had to create HAVEn.

HAVEn is that fourth wave; it's specifically about big data. If you look at HP's portfolio, we sell products and services across each of these waves, and the fastest-growing wave right now is the big-data wave. It's growing at about 35 percent a year, according to Gartner, and that's why we're excited about it.

Gardner: Now we know why you created it and what it's supposed to do. Tell us a little bit more about what's included in HAVEn, and why it is that you've been able to combine product and platform to solve this very difficult task.

Mundada: If you look at what's required now to process big data in its entirety, one product no longer can do it all. There is a very famous paper, written by some university professors, titled "One size does not fit all." It shows that different data structures solve different kinds of data problems far more efficiently.

One way to think about big data is to think of it as a pile of dirt. It’s a big pile. In that pile, there’s gold, silver, platinum, iron, and other metals you don’t even know. If the cost of mining that data is high, obviously you’re going to go after only the platinum and some known objects that you care about, because that’s all you can afford.

HAVEn is about bringing that cost of processing down to a very, very low level, so you can go after more metals. That means you have to bring together a set of technologies to be able to solve this. If you look at the last three years, HP has made very significant investments in the big-data space.

Best of breed

We bought companies that were best of breed to try to solve specific problems. We bought Autonomy, Vertica, ArcSight, Fortify, TippingPoint, 3PAR Data, and Knightsbridge.

Now, we have a set of technologies to be able to combine them into a unique experience. Think of it almost like Microsoft Office. Before you had Microsoft Office, you would buy a word processor from one company, a spreadsheet from another company, and presentation software from a third company.

Let's say you wanted to create a simple table. If you had created it in a word processor or even a spreadsheet, you couldn't mix and match them. It was impossible to combine very different types.

Then, Microsoft came to the table and said, "Look, here's a simplified solution." If you want to create a table, go ahead and create it in PowerPoint. Or if you want to create a more complicated thing, put it in Excel. Then, take that Excel table and put it in PowerPoint. Or, you can put the whole thing into a Word document. That was the beauty of what Microsoft did.

We’re trying to do something similar for big data, make it very easy for people to combine all these different engines and the different data types and write simple applications on it.

Gardner: What beyond the products and binding them together makes HAVEn unique?

Mundada: HAVEn is really two different concepts. There’s the HAVEn data platform, which we’ll talk about now, and there’s a HAVEn ecosystem, which I’ll mention in a minute.

HAVEn means Hadoop, Autonomy, Vertica, Enterprise Security, and “n” applications. That’s the acronym. So let’s look at one of these pieces, and why we need an architecture like this.

As I said, today you need to combine different sets of data techniques to solve different problems, and they have to work seamlessly. That’s what we did with HAVEn. I’ve been with HAVEn from day zero, before the project concept started, and I can tell you why and how we added these pieces and how we’re trying to integrate them better.

If you look at the Hadoop part of HAVEn, our story at HP is that Hadoop is an integral part of HAVEn. We see a lot of our customers and partners betting on Hadoop, and we think it's a good thing to keep Hadoop open and non-proprietary.

Leading vendors

We also work today with all leading Hadoop vendors, so we have shipping appliances as well as reference architectures for both Cloudera and Hortonworks, and we're now working with MapR to create similar infrastructure. That's our Hadoop story.

We’ve also found that our customers are saying they want some flexibility in Hadoop. Today, they may want one vendor, and tomorrow, they may decide to go to another vendor for whatever business reasons they choose. They want to know if we can provide a simple management tool that works across multiple Hadoop distributions.

As an example, we had to extend our Business Service Management (BSM) portfolio, so we can manage Hadoop, Vertica, hardware, storage, and networking all from within one environment. This is simply operationalizing it. Having a standardized set of hardware that matches multiple Hadoop distributions was another thing we had to do. There are many such enterprise-class innovations that you’ll see coming from HP.

But more than that, we also found that Hadoop is really good for certain kinds of applications today, and obviously, the community will extend that. You will see more and more innovations coming from that community and ecosystem.

Today, there are several areas where there are holes in Hadoop, or where it's not as strong as commercial products. One such area is SQL. The SQL face of Hadoop is going to be one of the key differentiators across the different Hadoop packages.

In that area, we have a technology called Vertica, which is the V part of HAVEn, and you'll see companies like Facebook using a combination of both Hadoop and Vertica.

The classic use case we see is that people bring in all kinds of raw data, put it into Hadoop, and do some batch processing there. Hadoop is great as a file system and a batch-processing environment. But then they'll take pieces of that data and want to do deep analytics on it, like regression analytics, and they'll put it into Vertica.
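
A minimal sketch of that Hadoop-to-Vertica hand-off, assuming the batch job exports delimited files that are then bulk-loaded; the path and schema are illustrative, not a documented HP interface:

```sql
-- Bulk-load the refined output of a Hadoop batch job into Vertica,
-- then do the deep analytics there. Path and schema are hypothetical.
COPY page_views FROM '/data/hadoop_export/page_views.csv' DELIMITER ',' DIRECT;

-- Follow-on analysis: a seven-day moving average of views per product.
SELECT product_id,
       view_date,
       AVG(views) OVER (PARTITION BY product_id
                        ORDER BY view_date
                        ROWS BETWEEN 6 PRECEDING AND CURRENT ROW) AS views_7d
FROM   page_views;
```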

Vertica is an analytic database platform, and I'll break up those three words. It's a database. It looks and feels like a database. It has SQL, open database connectivity (ODBC), and Java database connectivity (JDBC) support. You can run all kinds of tools on it, the ones you're used to: Tableau, Pentaho, and Informatica. So from that perspective, it's a regular database.

What's different is that it's custom-built for the fourth wave. It's an analytic database, and by that, I mean the underlying algorithms are completely designed from the ground up. Michael Stonebraker, who created the key products in the first wave and the second wave -- Ingres and Postgres -- also created this at MIT from the ground up.

Data today

The intuition was that the processing of data today has gone from having 10 to 20 columns per row to possibly thousands of columns. A social media company, for example, might have 10,000 pieces of information on me, and when they do processing, it's going more linear. It's going regression-oriented, in a sense. You might say, "Girish, age x, lives here, and likes y. What's the likelihood somebody else may like it?"

It's meant for that kind of deep analytical processing, with a column-oriented structure. In those kinds of applications, this database technology tends to be orders of magnitude faster -- tens of times faster. That's one example of Hadoop and Vertica, and we can talk more about the other pieces, Autonomy and Enterprise Security, with you.
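
To make the "likelihood somebody else may like it" idea concrete, here is a simplified, hypothetical co-occurrence query of the kind a column store handles efficiently, because it only has to read the columns involved:

```sql
-- Among users who like item X, what else do they like most often?
-- A columnar engine reads just the user_id and item_id columns, not
-- whole rows, which is why this style of scan can be tens of times faster.
SELECT l2.item_id,
       COUNT(*) AS co_likes
FROM   likes l1
JOIN   likes l2
  ON   l2.user_id = l1.user_id
 AND   l2.item_id <> l1.item_id
WHERE  l1.item_id = 'X'
GROUP  BY l2.item_id
ORDER  BY co_likes DESC
LIMIT  10;
```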

Gardner: So we see that there’s a platform that you put together. There’s an ecosystem that’s supporting that. There are these binding standards that make the ecosystem and the platform more synergistic. But other people are doing the same thing. What’s making HAVEn different? What is it about HAVEn that you think is going to be a winner in the marketplace?

Mundada: There are two different answers to that. Let me talk about how we've taken not just the SQL piece of Hadoop, but how we extend it with other parts of HP that are unique to HAVEn. It's the breadth of it. Let's see how we extend this simple combination of Hadoop and Vertica.

I said it's an analytic database platform. If you look at the platform piece of it, with Vertica we're able to drop in other code that is user-defined and user-written. For example, you can drop R language routines, Java, C++, or C language routines directly into the database. Now, we're able to combine that richness across our portfolio.
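
As a rough sketch of what "dropping in" user-written code looks like, Vertica-style DDL for registering an R routine runs along these lines; the library path, factory name, and columns are hypothetical, and the exact conventions are in the product documentation:

```sql
-- Register a library of user-written R code with the database, then
-- call the routine inline from SQL like any built-in function.
CREATE LIBRARY r_stats AS '/opt/udx/r_stats.R' LANGUAGE 'R';
CREATE FUNCTION churn_score AS LANGUAGE 'R'
  NAME 'churnScoreFactory' LIBRARY r_stats;

SELECT customer_id,
       churn_score(age, tenure_months, support_calls) AS risk
FROM   crm_customers;
```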

Autonomy, which is the A part of HAVEn, is a unique technology. It's one of a kind. Some of the largest governments and some of the largest organizations in the world, such as banks and financial institutions, have it in production for what it's meant for: human information processing, which is audio, video, and text.

As an example, you could take a video stream and ask simple questions. Tell me if an object is moving from point A to point B, or tell me what the object is. Is it a human? Is it a car? Can you read car number plates automatically?

And you could build some really sophisticated applications. We have cases where police cars have video cameras mounted on the side, and as they drive through a parking lot, they can capture photos of the number plates and compare them against stolen cars.

Crime detection

Imagine being able to take that technology and combine it automatically, through simple SQL-like or REST API-like commands, with your existing data, and create very sophisticated applications to understand your customer, or for crime detection and things like that.

Now let's bring in the third part of the puzzle, the E part, which is Enterprise Security. That's also unique. We have an entire portfolio, both for security and for operations management.

If you look at enterprise security and the Gartner Magic Quadrant, HP's product set has been in the leaders quadrant for several years in a row. HP is the number one vendor in that area.

Now, think about our portfolio of ArcSight, Fortify, TippingPoint, and the other Enterprise Security Products. Imagine being able to take the data-collection capabilities of those, bring them into this common platform of HAVEn, and combine them with other structured and unstructured data with just simple commands. That's something we can do uniquely.

Operations management is another area where we have hundreds of these machine logs. We can collect them, break them open into modular pieces, and create new applications. You can go look at our Operations Analytics product, where, with a simple slider, you can go back and forth in time across millions of log entries as if they were structured data.

We can do that uniquely, because we have that entire collection. Our BSM portfolio has been on the market for 30 years. It's one of the leaders -- this is the HP OpenView platform -- and this is one of the things we can do uniquely at HP: bring all these things together.
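
Treating machine logs as if they were structured data might look like the following hedged sketch, once log lines have been parsed into a table; the schema is invented for illustration:

```sql
-- Count log events by severity in one-minute buckets around an incident,
-- the kind of query a time slider over parsed machine logs would issue.
SELECT DATE_TRUNC('minute', event_time) AS minute,
       severity,
       COUNT(*) AS events
FROM   machine_logs
WHERE  event_time BETWEEN '2014-03-10 14:00:00' AND '2014-03-10 14:30:00'
GROUP  BY 1, 2
ORDER  BY 1, 2;
```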

That’s the breadth of our portfolio, but it simply doesn’t stop at this platform level. Remember, I said that there are two concepts. There is a platform, and then there is the ecosystem. Let’s look at the platform level first.

We have the whole of HAVEn. We have the connectors, and we ship 700 connectors out of the box. With simple commands, you can bring in social-media data in every written language. You can bring in machine logs and structured logs. That's the platform.

Let’s extend it further into the ecosystem part. The next thing that people were saying was, “We want to use something very open. We have our own visualization tools. We have our own extract, transform, load (ETL) tools that we’re used to. Can you just make them work?" And we said, "Sure.”

That's one of the things that we're able to do now. With simple SQL, we can essentially write simple queries across structured and unstructured data. Using Tableau Software, or any other tool that you like, you can access this data through our connectors. More importantly, it lets you hook your existing ETL tools into this -- completely transparently.

Breadth and openness

So that's the breadth and the openness of the platform. Breadth is not just about the software platform; it's about HP's strength in bringing together hardware, software, and services.

Even with the platform -- the HAVEn components in the middle and the connectors -- and being able to match them with matching hardware, our customers are asking, "Can you give us matching hardware for Hadoop, so we don't have to spend time setting it up?" That's one of the things that HP can uniquely do. More importantly, we have appliances for Vertica, for example, which are standardized.

If you look at the other side, our customers are also saying, “We understand that HP wants to provide us all this, but we like openness and we like other partners.” So we said, “Fine, we’ll leave this entire ecosystem open.” Our software will work with HP hardware and we can optimize, but we also commit to working on everybody else’s hardware.

Our cloud story is that we’ll work on Amazon, as well as OpenStack. For example, if you want to build a hybrid cloud, where part of your data resides on HP or your private environment using OpenStack, that’s fine. If you want to put it in Amazon or Rackspace, no problem. We’ll help you bridge all these. These are the kinds of enterprise-cloud innovations that HP is able to do, and we’re open to this.

So, to answer your question very succinctly, if there were three things I would pick where HP is different, one is the breadth of our portfolio. We have a very large breadth that we've brought together.

The second is the openness of the platform. HP is known to be a very open company. Our Hadoop story is an example: we didn't create a proprietary Hadoop. We kept it open. If you look at our visualization, we didn't go and force a visualization technology on you. We kept it open.

More importantly, if there is one key thing to take home from what we've done with HAVEn, it's not about speeds and feeds. It's about business value.

The reason we created HAVEn was to create that iPhone-like environment or Android-like environment, where the vision is that you should be able to go to a website, say you have standardized on the HAVEn platform, and then, be able to point and click and download an application.

The "n" part of HAVEn is really the business value of it, and that's how we see HAVEn as unique. There is nobody else, as far as we know, that has that end vision, where you can build the applications yourself using standard tools -- SQL, ODBC, JDBC, REST APIs -- or you can buy ready-made software that HP Software has created.

We have packages across service, operations, and digital marketing. Or you can go with a partner. The partner could be HP Enterprise Services, Accenture, Capgemini, or any of those big partners. That’s something unique about the HP big-data ecosystem that doesn’t exist anywhere else today.

Applications

Gardner: Applications are what take advantage of the platform, its capabilities, and the breadth and depth of the data and information.

I wonder if you could explain a little bit more about the application side of HAVEn, perhaps through examples of what people are already doing with these applications, and how they’re using them in their business setting?

Mundada: That's actually one of the most exciting parts of my job. As I said, I meet literally 100 customers a month. I'm traveling across the continents, and the use cases of big data that I see are truly phenomenal. It really keeps you very motivated to keep doing more.

Let's look, at a very broad level, at why these things matter. Big data is not just about monetary profits. It's really about what I call extended profits. It doesn't have to be monetary. As a simple example, we have medical companies using our technologies to dramatically speed up drug discovery, hundreds of times more than they were able to with Hadoop.

HAVEn isn't about speeds and feeds. It's about really creating business value in a hurry, so you get there before your competitors can.

That translates into saving lives. At our recent Discover show in Barcelona, we saw that a very innovative organization is using our technology to look at biodiversity and save wildlife in the Amazon.

That’s unique, but those are like edge cases. If you look at a regular enterprise, what they want to do at a very high level falls into three categories: Applications that HP itself is building, applications that partners are building, and applications that customers themselves are building.

There are three applications I'll mention. In terms of increasing revenue, we ship a product called Digital Marketing Hub, which combines the power of Autonomy and Vertica for all of your customer analytics.

You're able to take your call-center logs, your social media feeds, your emails, and your phone interactions, find out what the customer is really saying -- what they want and don't want -- and then optimize that interaction with the customer to create more revenue.

More precise answers

For example, when a customer calls, knowing what they want obviously lets you tell them more precise things. That's one example.

Let's look at another example, where you want to improve your bottom line by decreasing your costs. Operations Analytics is another software product we ship. We're able to drive down the cost of debugging network troubles by 80 percent by combining all these machine logs on a very frequent basis.

We can look at this and say, "At this second, every machine was okay. A second later, machines have gone down." I can look at exactly the incremental logs that showed up, using a simple pen-like pointer to go back through the log data as if it were SQL data. That's unique.

Those are the kinds of applications we're able to create, and it's not just these two. The other thing people want is to improve products and services. We have something called Service Anywhere where, as you're calling in or typing what you want to find information about, the system is able to understand the meaning of what you're saying.

Notice that this is not keyword search. This is meaning, where it's able to go through existing case reports from customers, look at existing resolutions, and then say, “Okay, this might solve your problem automatically.”

Imagine what that impacts. Your customers are happy, because the answers come quicker. We call this ticketless IT. But more importantly, look at some other interesting ways this affects a company.

For example, I was recently in Europe, talking to a very large telco there, and they said, "We have something like 20,000 call-center operators who are taking calls from customers. Each call might take six minutes, and some of them are repeat calls. That's really our problem."

We worked out something that could roughly save them two minutes per call. That translates to about $100 million in net savings per year. That's really phenomenal. That's one kind of application that HP built.

Now imagine a customer wanting to build the same application themselves. That's the beauty of the HAVEn platform. On the same platform, you can buy HP-built applications or you can build your own.

Let's look at NASCAR as an example. They did something very similar for customer analytics. While the race is happening, they're able to understand audio, television channels, radio broadcasts, and social media, and bring that all together as if it were one unique piece of data.

Then, they're able to use that data in really innovative ways to further their sport and to create more promotional dollars, not just for themselves, but even for the participants. That's unique -- being able to analyze human data at mass scale.

Looking to the future

Gardner: Well, we've learned a lot about the market, the demand, and why big data makes so much sense. There is a very large undertaking by HP around HAVEn, and what it delivers in terms of openness, platforms, breadth, and these great examples of applications. But we also need to look to the future.

What's coming next in terms of HAVEn 2.0 or HAVEn 1.5? Dan, could you update us on how things are progressing and what you have in mind for the next versions of these products -- and, therefore, how the whole increases as the sum of the parts increases?

Wood: Dana, we've just announced HAVEn 2.0. The way Girish explained HAVEn, in terms of the platform and the ecosystem, the continuous innovation now is around both of those pieces. It's really important to us to be driving the ecosystem, as well as the platform. So I'll speak to HAVEn 2.0 and the features that are the focus in driving HAVEn forward.

In terms of the platform, there are the analytics engines that we have. Girish mentioned they were best in class at the time HP acquired them, and we continue to invest in R&D across Autonomy IDOL, Vertica, and the ArcSight Logger product. We recently announced new versions of all three of those, improving the analytics capability and the usability and, just as importantly, increasing the interoperability.

For example, we now have integration of ArcSight Logger with the Autonomy IDOL engine for analyzing unstructured human information. A really great use case of this: Logger was previously enabling IT to understand data movements and potential threats and risks in the organization.

For example, if I were sending 50 percent of my email to a competitor, you could combine that capability with the unstructured information analysis in Autonomy and understand, at the information layer, exactly what's in that email that's going to a competitor.

Let's start putting that together and getting a powerful view of what an individual is doing, and whether it's a risky individual in the organization, by integrating those HAVEn engines -- and putting more effort into integrating with the Hadoop environment as well.

For example, we have just announced Hadoop integration connectors for Autonomy. A lot of people are saying that they're building a data lake with Hadoop, and they want the capability of applying analytics to the unstructured information that exists in that Hadoop data lake. Clearly, we've also got integration of Vertica with the Hadoop environment as well.

The other key thing on the engine side is IDOL OnDemand. At the moment, through an early-access program, we're making the IDOL engine available to developers as a cloud-based offering. This is to encourage the independent developer community to take components of IDOL, whether it's social media analytics or video and audio recognition, and start building them into their own applications.

We believe the power of HAVEn will come from the combination of HP-provided applications and also third-party applications on top.

Early-access program

We’re facilitating that with this initial early-access program on IDOL OnDemand, and also, we’re investing in developer programs to make the whole HAVEn development platform far easier for partners and independent developers to work with.

We’ve set up a HAVEn developer website, and stay tuned for some really fun events online and physical events, where we’ll be getting the developer community together.

In terms of those applications that make the whole HAVEn ecosystem come to life, Girish has mentioned some of them that we have announced over the last few weeks. So I’ll give you a quick recap on those.

We have the Operations Analytics and Service Anywhere apps, both aimed at the CIO. And we have the Digital Marketing Hub from HP aimed at marketing leaders in the organizations. These are three applications that HP has packaged on the HAVEn platform.

And along with the HAVEn 2.0 announcement, we're really pleased that six of the leading SI partners -- including Accenture, Capgemini, Deloitte, PwC, and Wipro -- have themselves put applications on top of HAVEn. And those guys have built fascinating mixtures of very industry-specific analytics applications and more horizontal apps, based on the priorities that they're chasing after.

So we’re really excited about that and expect to see many more announcements of partner applications over the next few months.

The final piece of HAVEn 2.0 to support this whole ecosystem thing is a marketplace that we’ve launched, where we’re populating our solutions and partner solutions to facilitate the whole commerce side of those applications taking off in the market.

One-stop resource

The first place to go is hp.com/haven. That’s your one-stop resource for information on this platform, all of the engines that Girish alluded to. You can get the inspiration from some amazing customer case studies we have on there -- insights from experts like Girish and other people who are talking in depth about the individual engines.

And as you rightly say, Dana, it's about finding the right on-ramp for yourself. You can look at the case studies we have, the use cases on big data in particular industries, and take a look at the specific pain points you have today. That's the hp.com/haven website, and it gives you all of that information.

You can also drill down from there, if you're a developer, and find the tools and resources that we’ve spoken about to enable you to start building apps on top of HAVEn. That’s one part.

The whole power of HP is behind this HAVEn platform, enabling you, from an infrastructure and services point of view, to start building these big-data analytics. A couple of key things here.

We've started to build fully configured appliances around Hadoop and Vertica. The ConvergedSystem team in HP has launched the ConvergedSystem 300, which enables you to have Vertica and Hadoop on a pre-configured appliance. That's a great starting point for someone early on in the big-data analytics life cycle.

To expand on that, the Technology Services team is able to do full consulting on how to optimize the overall infrastructure from the point of view of processing, sharing, and storing the vast amount of information that all organizations are coping with today. That then starts to bring in things like 3PAR storage systems and other innovations across the HP hardware business.

Another place where I see customers often needing some help to get started is in understanding exactly what the questions are that we need to be asking in terms of analytics and exactly what algorithms and analytics we need to put in place to get going. This is where the Big Data Discovery Experience Services from HP come in.

This is provided by the Enterprise Services Group (ESG). Those guys have data scientists and industry experts who can actually help customers go through the design phase for a big-data platform, and then offer the HAVEn infrastructure supported by the ESG Services team.

Finally, Dana, come and see us on the road. We’ll be at HP Discover in Las Vegas June 10-12. We’re putting together several road shows and events across the main regions in Europe, the Americas, and in Asia Pacific, where we will be taking HAVEn on the road, too. Take a look at that hp.com/haven website, and details of the events will be found on there.

Key messages

Mundada: There are two key messages: big data is really important and it’s disrupting business. Your competitors are going to do it. You have a choice to either lead and do it yourself or you will be forced to follow. It’s one of those things that are disrupting industries worldwide.

Now, when you think of big data, don't think of it in piece parts. It's not that you need a separate solution for human information, another for machine logs, and another for structured data. You almost have to think of it holistically, because there are many kinds of newer applications that I'm seeing regularly, where you have to bring all these data types together and create joint applications.

Whichever technologies you choose and settle on, think of that Microsoft Office-like experience. You want an integrated solution across the entire stack, and there aren't that many available in the market today. So whoever you work with, make sure that you're able to handle that entire piece as one giant puzzle.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Friday, March 7, 2014

Fast-changing demands on data centers drive need for uber data center infrastructure management

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

Once the province of IT facilities planners, the management and automation of data centers has rapidly grown in scope and importance.

As software-driven data centers have matured and advanced to support unpredictable workloads like hybrid cloud, big data, and mobile applications, the ability to manage and operate that infrastructure efficiently has grown increasingly difficult.

At the same time, as enterprises seek to rationalize their applications and data, centralization and consolidation of data centers has made their management even more critical -- at ever larger scale and density.

So how do enterprise IT operators and planners keep their data centers from spinning out of control despite these new requirements? How can they leverage the best of converged systems and gain increased automation, as well as rapid analysis for improving efficiency?

BriefingsDirect recently posed such questions to two experts from HP Technology Services to explore how new integrated management capabilities are providing the means for better and automated data center infrastructure management (DCIM).

To learn more on how disparate data center resources can be integrated into broader enterprise management capabilities and processes, now join Aaron Carman, HP Worldwide Critical Facilities Strategy Leader, and Steve Wibrew, HP Worldwide IT Management Consulting Strategy and Portfolio Lead. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions. [Learn more about DCIM.]

Here are some excerpts:
Gardner: What’s forcing these changes in data center management and planning and operations? What are these big new requirements? Why is it becoming so difficult?

Carman: In the past, folks were dealing with traditional types of services that were on a traditional type of IT infrastructure. Standard, monolithic-type data centers were designed one-off. In the past few years, with the emergence of cloud and hybrid service delivery, as well as some of the different solutions around convergence like converged infrastructures, the environment has become much more dynamic and complex.

Hybrid services

So, many organizations are trying to grapple with, and deal with, not only the traditional silos that are in place between facilities, IT, and the business, but also deal with how they are going to host and manage hybrid service delivery and what impact that’s going to have on their environment.

It's not only about what the impact is going to be of rolling out new infrastructure solutions, like converged infrastructures from multiple vendors, but also about how to provide increasing flexibility and services to end users as digital services.

It's become much more complex and a little bit harder to manage, because there are many separate tools used to manage these environments, and their number has continued to increase.

Gardner: Steve, I suppose too that with ITIL v3 and more focus on a service-delivery model, even the very goal of IT has changed.

Wibrew: That's very true. We're seeing a trend in the changing role of IT to the business. Previously, IT was a cost center, an overhead to the business that delivered the required services. Nowadays, IT is very much the business of an organization, and without IT, most organizations simply cease to function. So the availability and performance of IT are a critical aspect of the success of the business.

Gardner: What about this additional factor of big data and analysis as applied to IT and IT infrastructure? We’re getting reams and reams of data that needs to be used and managed. Is that part of what you’re dealing with as well?

Wibrew: That’s certainly a very important part of the converged-management solution. There’s been a tremendous explosion in the amount of data, the amount of management information, that's available. If you narrow that down to the management information associated with operating management and supporting data centers from the facility to the applications, to the platforms right up to the services to the business, clearly that's a huge amount of information that’s collected or maintained on a 24×7 basis.

Making good and intelligent decisions on that is quite a challenge for many organizations. Quite often, people still remain in isolated, siloed teams without good interaction between the different teams. It's a challenge trying to draw that information together so businesses can make intelligent choices based on analytics of that end-to-end information.

Gardner: Aaron, I’ve heard that word "silo" now a few times, siloed teams, siloed infrastructure, and also siloed management of infrastructure. Are we now talking about perhaps a management of management capabilities? Is that part of your story here now?

Added burden

Carman: It is. For the most part, most organizations, when faced with trying to manage these different areas -- facilities, IT, and service delivery -- have come up with their own sets of run books, processes, tools, and methodologies for operating their data centers.

When you put that onto an organization, it's just an added burden for them to try to get vendors to work with one another and integrate software tools and solutions. What the folks that provide these solutions have started to realize is that there needs to be an interoperability between these tools. There has never really been a single tool that could do that, except for what has just emerged in the past few years, which is DCIM.

HP really believes that DCIM is a foundational, operational tool that will, when properly integrated into an environment, become the backbone for operational data to traverse from many of the different tools that are used to operate the data center, from IT service management (ITSM), to IT infrastructure management, and the critical facilities management tools.

Gardner: I suppose yet another trend that we’re all grappling with these days is the notion of things moving to as-a-service, on-demand, or even as a cloud technology. Is that the case, too, with DCIM, that people are looking to do this as a service? Are we starting to do this across the hybrid model as well?

Carman: Yes. These solution providers are looking toward how they can penetrate the market and provide services to all different sizes of organizations. Many of them are looking to a software-as-a-service (SaaS) model to provide DCIM. There has to be a very careful analysis of what type of a licensing model you're going to actually use within your environment to ensure that the type of functionality you're trying to achieve is interoperable with existing management tools. [Learn more about DCIM.]

Wibrew: Today, clients have a huge amount of choice in terms of how they provision and obtain their IT. Obviously, there are the traditional legacy environments and the converged systems and clients operate in their own cloud solutions.

Or maybe they're even going out to external cloud providers -- some interesting dynamics that really do increase the complexity of where they get services from. This needs to be baked into the converged solution, around the interoperability and interfacing between multiple systems, so IT is truly a business supporting the organization and providing end-to-end services.

Organizations struggling

Carman: Most organizations are really struggling to introduce DCIM into their environment, since at this point, it's really viewed more as a facilities-type tool. The approach from different DCIM providers varies greatly in the functions and features they provide. Many organizations are struggling just to understand which DCIM product is best for them and how to incorporate it into a long-term strategy for operations management.

So the services that we brought to market address that specifically -- not only which DCIM tool will be best for their environment, but how it fits strategically into the direction they want to take for hosting their digital services in the future.

Gardner: Steve, I think we should also be careful not to limit the purview of DCIM. This is not just IT; it includes facilities, hybrid service-delivery models, and management capabilities. Maybe you could help us put the proper box around DCIM. How far does it go, and why? Or should we narrow it so that it doesn't become diluted or confused?

Wibrew: Yeah, that’s a very good question, an important one to address. What we’ve seen is what the analysts have predicted. Now is the time, and we’re going to see huge growth in DCIM solutions over the next few years.

DCIM has really been the domain of the facilities team, and there's traditionally been quite a lack of understanding of what DCIM is all about within the IT infrastructure management team. If you talk to a lot of IT specialists, the awareness of DCIM is still quite limited at the moment. So they certainly need to find out more about it and understand the value that DCIM can bring to IT infrastructure management.

I understand that features and functions vary, and the extent of what DCIM delivers will vary from one product to another. It's certainly very good around the facilities space, in terms of power, cooling, and knowing what's out on the data center floor. It's very good at knowing what's in the rack and how much power and space has been used within the rack.

It's very good at cable management for the networks, the storage, and the power cabling. The trend is that DCIM will evolve and grow more into the IT management space as well, becoming very aware of things like server infrastructure, even down to the virtual infrastructure, and getting into those domains.

DCIM will typically have workflow capabilities for change and activity management. But DCIM alone is not the end-to-end solution, and we realized the importance of integrating it with the full ITSM and platform management solutions. A major focus over the past few months has been to make sure that DCIM solutions integrate very well with the wider IT service-management solutions, to provide that integrated, end-to-end, holistic management solution across the entire data-center ecosystem.

Great variation

Carman: With DCIM being a newer solution within the industry, I want to be very careful about calling folks DCIM specialists. We feel that we have very good knowledge of the solutions out there, and they vary greatly.

It takes a collaborative team of folks within HP, as well as the client, to truly understand what they're trying to achieve. You can even boil it down to which types of use cases they're trying to achieve for the organization, and which tool works best in interoperability and coordination with the other tools and processes they have.

We have a methodology framework called the Converged Management Framework that focuses on four distinct areas for an optimized solution and strategy: starting with business goals, understanding what the true key performance indicators are, and what dashboards are required.

It looks at what the metrics are going to be for measuring success and couples that with understanding organizationally who is responsible for what types of services we provide as an ultimate service to our end user. Most of the time, we’re focusing on the facilities in IT organization. [Learn more about DCIM.]

Also, those need to be aligned to the process and workflows for provisioning services to the end users, supported directly by a system’s reference architecture, which is primarily made up of operational management tools and software. All those need to be supported by one another and purposefully designed, so that you can meet and achieve the goals of the business.

When you don't do that, the time it takes for you to deliver services to your end user lengthens, and that costs money. When you have separate tools that are not referencing single points of data, you spend a lot of time rationalizing and checking whether you have accurate data in front of you. All this boils down not only to cost but to having resilient operations -- knowing that when you're looking at a particular device, or set of devices, you truly understand what it's providing, end to end, to your users.

Wibrew: If you think about the possibilities, the management of facilities and IT infrastructure, right up to the services of a business, end to end, is very large and very complex. We have to break it down into smaller, more manageable chunks and focus on the key priorities.

Most-important priorities

So we look at the client organization and work with them to identify what their most important priorities are in terms of their converged-management solution and their journey.

It's heavily structured around ITSM and ITIL processes, and we've identified some great candidates within ITIL for integration between facilities and IT. It's really a case of working out the prioritized journey for that particular client. Probably one of the most important integrations would be to have a single view of the truth of operational data. So it would be unified asset information.

CMDBs within a configuration management system might be the first and most important integration between the two, because that's the foundation for other follow-on services. Until you know what you've got, it's very difficult to plan what you need in the future in terms of infrastructure.
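
As an illustration of that single view of the truth, a reconciliation query between a facilities inventory and an IT CMDB might look like this sketch; both tables are invented for the example:

```sql
-- Find devices the facilities system knows about that have no matching
-- record in the IT CMDB, and vice versa, so the two views can be unified.
SELECT COALESCE(f.serial_no, c.serial_no) AS serial_no,
       f.rack_location,
       c.service_name
FROM   facilities_inventory f
FULL OUTER JOIN cmdb_assets c
  ON   c.serial_no = f.serial_no
WHERE  f.serial_no IS NULL
   OR  c.serial_no IS NULL;
```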

Another important integration that is now possible with these converged solutions is the integration of power management in terms of energy consumption between the facilities and the IT infrastructure.

If you think about managing power consumption and things like the efficiency of the data center with PUE (power usage effectiveness), generally speaking, in the past that would be the domain of the facilities team. The IT infrastructure would simply be hosted in the facility.

The IT teams didn't really care about how much power was used. But these integrated solutions can be more granular and far more dynamic around energy consumption, with much more information being collected -- not just at a facility level, but within the racks, in the power-distribution units (PDUs), and in the blade chassis, right down to individual servers.

We can now know what the energy consumption is. We can now incentivize the IT teams to take responsibility for energy management and energy consumption. This is a great way of reducing a client's carbon footprint and energy consumption within the data center through these integrated solutions.
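
Since this turns on metered energy at each level, here is a hedged sketch of the underlying arithmetic -- PUE is total facility energy divided by IT equipment energy -- over hypothetical metering tables:

```sql
-- Compute monthly PUE: total facility energy / IT equipment energy.
-- A PUE of 1.0 would be ideal; typical data centers run well above it.
SELECT f.month,
       f.total_kwh / i.it_kwh AS pue
FROM   facility_meter f
JOIN  (SELECT month, SUM(kwh) AS it_kwh
       FROM   pdu_readings     -- per-rack / per-server metering
       GROUP  BY month) i
  ON   i.month = f.month;
```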

Gardner: Aaron, I suppose another important point to be clear on is that, like many services within HP Technology Services, this is not just designed for HP products. This is an ecumenical approach to whatever is installed in terms of products and facility-management capabilities. I wonder if you could explain a bit more about HP's philosophy when it comes to supporting the entire portfolio. [Learn more about DCIM.]

Carman: The professional services HP is offering in this space are really agnostic to the final solution. We understand that a customer has been running their environment for years and has made investments in a lot of different operational tools over the years.

That’s part of our analysis and methodology: to come in and understand the environment and what the client is trying to achieve. Then we put together a strategy, a roadmap of different interoperable products, that will help them achieve their goals.

Next level

We continue to transform them to the next level of capabilities they’re looking to achieve, especially around how they provision services. Ultimately, we help them become, in most cases, a cloud-service provider to their end users, with heavy levels of automation built in, so that they can deliver digital services in a much shorter period of time.

Gardner: I realize this is fairly new. It was just on Jan. 23 that HP announced new services that include converged-management consulting, and the management framework was updated with new technical requirements. You have four new services organized around a management workshop, roadmap, design, implementation, and so forth. [Learn more about DCIM.]
So this is fairly new, but Steve Wibrew, is there an instance where you’ve worked with an organization and some of the really powerful benefits of doing this properly have shown through? Do you have any anecdotes about an organization that’s done this, and maybe some interesting ways it has benefited, perhaps with unintended consequences?

Data-center transformation

Wibrew: The starting point is to understand what’s there in the first place. I’ve been engaged with many clients where if you ask them about inventory, what’s in the data center, you get totally different answers from different groups of people within the organization. The IT team wants to put more stuff into the data center. The facilities team says, “No more space. We’re full. We can’t do that.”

I’ve found that when you pull this data together from multiple sources and get a consistent view of the truth, you can start to plan far more accurately and efficiently. Perhaps the lack of space in the data center is because there is infrastructure sitting there, powered on, and not being utilized by anybody.

In effect, that equipment is redundant. I’ve had many situations where, by pulling together a consistent inventory, we could get rid of a lot of redundant equipment, freeing space for major initiatives and expansion projects. So there are some examples of the benefits of consolidated inventory and information.
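As a minimal sketch of that consolidation idea, with entirely hypothetical tool names and fields, imagine merging a facilities view (what draws power) with an IT view (what hosts a service) and flagging anything powered on that no service references:

```python
# Hypothetical consolidated-inventory check: flag assets that draw power
# but appear in no IT service map -- candidates for decommissioning.

facilities_inventory = {
    "rack12-u07": {"power_draw_watts": 180},
    "rack12-u08": {"power_draw_watts": 0},
    "rack14-u01": {"power_draw_watts": 220},
}

it_inventory = {
    "rack12-u07": {"service": "payroll-db"},
    # rack14-u01 is powered on but maps to no known service
}

def find_redundant_assets(facilities, it):
    """Return assets consuming power with no service mapped to them."""
    return [asset for asset, rec in facilities.items()
            if rec["power_draw_watts"] > 0 and asset not in it]

print(find_redundant_assets(facilities_inventory, it_inventory))
# -> ['rack14-u01']
```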

Gardner: Let’s look a few years out at big-data requirements, hybrid-cloud requirements, infrastructure KPIs for service delivery, and energy and carbon pressures. What’s the outlook for doing this? Should we expect ongoing demand, and also an improving return on the investments made in these consulting services and DCIM?

Carman: Based on a lot of the challenges that we outlined earlier in the program, we feel that, in order to operate efficiently, this type of future-state operational-tools architecture is going to have to be in place, and DCIM is the only tool poised to become that backbone between the facilities and IT infrastructures.

More and more, alongside the challenge of a compute footprint that is shrinking and has different requirements than in the past, we’re dealing with a storage and data explosion, where the data center fills up with storage.

As these new demands from the business come down and force organizations onto types of technology infrastructure platforms they haven’t dealt with in the past, it requires them to be much more flexible when they have, in most cases, very inflexible facilities. That’s the strength of DCIM and what it can provide in just that one instance.

But more and more, the business expects digital services to be almost instant. They want to capitalize on the market at that moment. They don't want to wait weeks or months for enterprise IT to provide them with a service to take advantage of a new offering. So it's forcing folks to operate differently, and that's where converged management is poised to help these customers.

Looking to the future

Gardner: Steve, when you look into your crystal ball and think about how things will be in three to five years, what is it about DCIM and some of these services that you think will be most impactful?

Wibrew: I think the trend we're going to see is far greater adoption of DCIM. It's only deployed in a small number of data centers at the moment. That's going to increase quite dramatically, and there will be a much tighter alignment between how the facilities are run and how the IT infrastructure is operated and supported. It will be far more integrated than it is today.

The roles of IT are going to change. A lot of the work now is still around design, planning, scripting, and orchestrating. In the future, we're going to see people, almost like a conductor in an orchestra, overseeing operations within the data center, leading highly automated and optimized processes that are actually delivered by automated solutions.

Gardner: I benefited greatly in learning more about DCIM on the HP website. There were videos, white papers, and blog posts. So there’s quite a bit of information for those interested in learning more about DCIM. The HP Technology Services website was a great resource for me. [Learn more about DCIM.]
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Wednesday, March 5, 2014

CIOs think proliferation of IT applications is overwhelming businesses and threatening competitive edge

The tangled web of applications within international organizations is getting more and more complex, putting strain on the IT department and stunting digital transformation. This comes from a study of over 1,000 CIOs and senior IT decision makers by Capgemini, a provider of consulting, technology, and outsourcing services.

According to the report released today, over the last three years, the number of IT decision makers who believe their business has more applications than it needs has increased from just over a third (34 percent) to nearly half (48 percent). Just 37 percent believe the majority of their applications are mission critical. Nearly three quarters (70 percent) believe at least a fifth of their company’s applications share similar functionality and could be consolidated, and a further 53 percent believe a fifth should be retired or replaced.

Apps bloat is a huge problem, especially when the apps are non-mission-critical, or context rather than core, applications. We may very well be at a tipping point, because new mobile apps and greater use of SaaS and cloud apps force a rethinking of an organization's entire applications strategy and approach. In order to modernize successfully, enterprises may need to first identify and cull extraneous applications.

This isn’t just an IT problem, but a business problem. The study revealed that 60 percent of senior IT decision makers believe their department’s most valuable contribution to the company is introducing new technologies. A significant number have already implemented cloud computing (56 percent), mobility (54 percent), social (41 percent), and big data (34 percent) solutions.

However, without a modernized applications landscape, IT lacks the bandwidth to deliver competitive advantage through these technologies. Little wonder 76 percent believe rationalization is important to realizing their company’s objectives.

“On the surface, a badly organized, overloaded and out-dated applications landscape sounds like a minor irritation for the IT team, absorbing bandwidth and wasting money, but ultimately not a problem that should keep the wider business up at night,” said Ron Tolido, CTO Application Services Continental Europe at Capgemini. “But in a world where all facets of an organization are starting to embrace digital transformation -- and are dependent on the quick deployment of mobile, social, Big Data and Cloud solutions for competitive advantage -- a well-rationalized applications landscape suddenly becomes a much bigger, strategic imperative for the whole company.”

New versus old

The study also contains evidence that, while Western organizations are creaking under the strain of outdated, unused legacy applications, developing markets are benefiting from their relatively fresh, young IT landscapes. While countries like Finland and Norway report below-average levels of understanding between business and IT (just 64 percent and 69 percent respectively believe the relationship is ‘satisfactory’), an encouraging 92 percent of respondents in Brazil, India, and China report a satisfactory understanding between the two. So supporting old legacy platforms only to keep older non-critical apps running has a downward multiplier effect on productivity and keeps costs artificially high.

The findings of Capgemini’s 2014 Application Landscape Report are based on a survey conducted in 12 languages with 1,116 CIOs and top-level IT decision makers in companies of various sizes from a wide range of industries. With a global emphasis, the report covers 16 countries, with 73 percent of respondents from developed economies (Australia, Europe, USA) and a further 27 percent from fast developing countries (Brazil, China, India).

The findings are also derived from work done by Capgemini’s Wide-angle Application Rationalization Program (WARP) Center of Excellence (CoE). WARP is Capgemini’s framework for application rationalization and IT transformation. Over the past four years, the CoE has catered to more than 150 clients and analyzed more than 30,000 applications, providing key industry benchmarks for critical IT metrics.

For more information, see the full Application Landscape Report 2014 along with assets including executive summary, infographic, and videos.


Tuesday, March 4, 2014

Case study: How Dell converts social media analytics into strategic business advantage

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Dell Software.

The next BriefingsDirect business innovation case study examines how Dell has recognized the value of social media for more than improved interactions and brand awareness. Dell has successfully learned from social media how to meaningfully increase business sales and revenue.

The data, digital relationships, and resulting analysis inherent in social media and social networks interactions provide a lasting resource for businesses and their customers, says Dell. And this resource has a growing and lasting impact on many aspects of business -- from research, to product management, to CRM, to helpdesk, and, yes, to sales.

To learn more about how Dell has been making the most of social media for the long haul, BriefingsDirect sat down with Shree Dandekar, Senior Director of Business Intelligence and Analytics at Dell Software. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Gardner: Businesses seem to recognize that social media and social-media marketing are important, but they haven’t very easily connected the dots in how to use social media for actual business results. Why?

Dandekar: It’s not that businesses don’t realize the value of social media. In fact, many businesses are looking at simple social-media listening and monitoring tools to start their journey into social media.

Dandekar
The challenge is that when you make these investments into any kind of a listening or monitoring capability, people tend to stop there. It takes them a while to start collecting all the data on LinkedIn, Facebook, or Twitter. It takes some time for them to make some meaningful sense out of that. That’s where the dynamic comes in when you talk to an enterprise business. They’ve really moved on.

So, there are several stages within a social media journey, the very first one being listening and monitoring, where you start capturing and aggregating data.

From there, you start doing some kind of sentiment analysis. You go into some kind of social-media engagement, which leads to customer care. Then you get into questions like social return on investment (ROI), and then you try to bring in business data and mash it all up together. This is what’s known as social CRM.

So, if you say that these are the six stages of the social-media maturity model, or a social-media lifecycle, some enterprise businesses have really matured through the first three or four phases, where they’ve taken social media all the way to customer care. Where they’re struggling now is in implementing technologies that derive an actual ROI or business value from this data.

Listening and monitoring

Whereas, if you look at some of the small businesses or even mid-sized companies, they have just started getting into listening and monitoring, and the reason is that there are not many tools out there that appeal to them.

I won’t name any specifically, but you know all the big players in the social media listening space. They tend to be expensive and require a lot of reconfiguration and hands-on training. The adoption of social media in the small-sized business or even mid-sized businesses has been slow because these guys don't want to invest in these types of tools.

By the way, here is another big differentiator. If you look at enterprises, they don't shy away from investing in multiple tools, and Dell is a great example. We have a Radian6 deployment, social-media engagement tools, and our own analytic tools that we build on top of that. We tried each and every tool that's out there because we truly believe that we have to gain meaningful insights from social media, and we won't shy away from experimenting with different tools.

Mid-sized companies don't have the budget or resources to try out different tools. They want a single platform that can do multiple things for them -- essentially a self-service-enabled social-media intelligence platform.

If I start with listening, I just want to understand who is talking about me, who my influencers are, who my detractors are, what my competitors are talking about, and whether I can do a quick sentiment analysis. That's where I want to start.

Gardner: Dell has been doing social media since 2006, so going quite a ways back. How important is this to Dell as a company and how important do you think other companies should view this? Is this sort of a one-trick pony, or is there a lasting and expanding value to doing social media interactions analysis?

Dandekar: In addition to leadership from the top, it took a perfect storm to propel us fully into social. In July 2006, pictures and a report surfaced online out of Osaka, Japan, of a Dell laptop spontaneously combusting due to a battery defect (one that happened to impact not just Dell, but nearly every laptop manufacturer). It was a viral event of the sort you don’t want. But we posted a blog titled “Flaming Notebook” and included a link to a photo showing our product in flames -- which caused some to raise an eyebrow.

I will pause there for a second. How many of you would do that if something similar happened to your business? But Michael Dell made it crystal clear: Dell was built on the value of going direct to consumers and the blog had to communicate and live by those same values.

This was 2006, when the internet and the true value of blogging were just becoming relevant. That was a turning point in the way we did customer care and the way we engaged with our customers. We realized that people were not only going to call an 800 support number, but were going to be much more vocal through channels like blogs, Twitter, and Facebook.

That's how our journey in social media began and it’s been a multi-year, multi-investment journey. We started looking at simple listening and monitoring. We built a Social Media Command Center. And even before that, we built communities for both our employees and our customers to start interacting with Dell.

Idea Storm

One of the most popular communities that we built was called Idea Storm. This was a community in which we invited our customers to come in and share ideas about product improvements they wanted. It was formed around 2007, and to date close to 550 different ideas from this community have been implemented in Dell products.

Similarly, we launched Employee Storm, which was for all the employees at Dell, and the idea was similar. If there are some things in terms of processes or products that can be changed, that was a community for people to come in and share those ideas.

Beyond that, as I said, we built a Social Media Command Center back in 2010. And we also stood up the Social Media and Communities University program. We started training our internal users, our employees, to take on social media.

Dell firmly believes that you need to train employees to make them advocates for your brand instead of shying away and saying, “You know what, I'm scared, because I don't know what this guy is going to be saying about me in the social media sphere.”

Instead, we’re trying to educate them on what is the right channel and how to engage with customers. That's something that Dell has developed over the last six years.

Gardner: You’ve taken a one-way interaction, made it two-way, and then expanded well beyond that. How far and wide do the benefits of social media go? Are you applying this to help desk, research, new products, service and support, or all the above? Is there any part of Dell that doesn't take advantage from social media?

Dandekar: No, social media has become a core part of our DNA, and it fits well because of the fact that our DNA has always been built on directly interacting with our customers. If a customer is going to use social media as one of their primary communication channels, we really need to embrace that channel and make sure we can communicate and talk to our customers that way.

We have a big channel through Salesforce.com where we interact with all the leads that come in through Salesforce.
Taking that relationship to the next level, is there a way I can smartly link the Salesforce leads or opportunities to someone's social profile? Is there a way I can make those connections, and how smartly can I develop some sales analytics around that? That way, I can target the right people for the right opportunities.
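As a minimal sketch of that linkage -- with illustrative field names, not Dell's actual pipeline -- you might match CRM lead records to social handles on a normalized email key and carry the link into sales analytics:

```python
# Hypothetical CRM-to-social matching: join lead records to social profiles
# by normalized email so social activity can enrich sales analytics.

crm_leads = [
    {"lead_id": 1, "name": "Ana Ruiz", "email": "Ana.Ruiz@example.com"},
]
social_profiles = [
    {"handle": "@ana_r", "display_name": "Ana Ruiz", "email": "ana.ruiz@example.com"},
]

def link_leads_to_profiles(leads, profiles):
    by_email = {p["email"].lower(): p for p in profiles if p.get("email")}
    return [(lead["lead_id"], by_email[lead["email"].lower()]["handle"])
            for lead in leads if lead["email"].lower() in by_email]

print(link_leads_to_profiles(crm_leads, social_profiles))  # [(1, '@ana_r')]
```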

Creating linkage

That's one step that Dell has taken compared to some of our industry competitors, to be very proactive in making that linkage. It’s not easy. It requires some investment on your part to take that next step. That's also very close to the sixth stage that I talked about, which is social CRM.

You’ve done a good job at making sure you’re taking all the social media data, massaging it, and deriving insight just from that. Now, how can you bring in business data, mash it up with social data, and then create even more powerful insights, where you can track leads properly or generate opportunities through Twitter, Facebook, or any other social media source?

Gardner: Shree, it seems to me that what you’re doing is not only providing value to Dell, but there is a value to the buyer as well. I think that as a personal consumer and a business consumer I’d like for the people that I am working with in the supply chain or in a procurement activity to know enough about me that they can tailor the services, gain insight into what my needs are, and therefore better serve me. Is there an added-value to the consumer in doing all this well, too?

Dandekar: The power of social media is real-time. Every time you get a product from Dell and tweet about it or say you like it on Facebook, there is a way that I can, in real time, get back to that customer and say, “I heard you, and thanks for giving us positive or negative feedback on this.” For me, taking that and quickly changing a product decision or a process within Dell is the key.

There are several examples. One that comes to mind is the XPS 13 platform that we launched. The project was called “Project Sputnik.” This was an open-source notebook that we deployed on one of our consumer platforms, the XPS 13.

We heard a lot of developers saying they liked Dell but really wanted a cool, sexy notebook PC with all the right developer tools deployed on the platform. So we started this project, identified all the tools that would resonate with developers, packaged them together, and deployed them on the XPS 13 platform.

From the day when we announced the platform launch, we were tracking the social media channels to see if there was any excitement around this product.

The day we launched the product, within the first three or four hours, we started receiving negative feedback about the product. We were shocked and we didn’t know what was going on.

But then, through the analytics that we had developed on top of our social media infrastructure, we were able to pinpoint that one of the product managers had mistakenly priced the notebook higher than the equivalent Windows notebook. The price should not have been higher, and that’s why a lot of developers were angry. They thought we were trying to price it higher than traditional notebooks.

We were able to pinpoint what the issue was and within 24 hours, we were able to go back to our product and branding managers and talk to them about the pricing issue. They changed the pricing on dell.com and we were able to post a blog on Engadget.

Brand metrics

Then, in real time, we were able to monitor the brand metrics around the product. After that, we saw an immediate uptick in product sentiment. So, the ability to monitor product launches in real time and fix issues in real time, related with product launches, is pretty powerful.

One traditional way you would have caught that is through something called the Net Promoter Score (NPS). We use NPS a lot within Dell. The issue is that it’s survey-based. You have to send out the survey, collect all the data, mine through it, and then generate a score.

That entire process takes 90 to 120 days and, by the time you get it, you might have missed out on a lot of sales. If there was a simple tweak, like pricing, that I could have done overnight, I would have missed out on it by two months.

That’s just an example, where if I had waited for NPS to tell me that pricing was wrong, I would have never reacted in real-time and I would have lost my reputation on that particular product.
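For contrast with that real-time loop, here is a worked example of the survey math NPS relies on: respondents scoring 9-10 count as promoters, 0-6 as detractors, and the score is the percentage of promoters minus the percentage of detractors.

```python
# Net Promoter Score from survey ratings: %promoters (9-10) - %detractors (0-6).

def net_promoter_score(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

print(round(net_promoter_score([10, 9, 8, 7, 6, 3, 10]), 1))  # 14.3
```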

Gardner: How extensive is your listening and analysis from social media?

Dandekar: Just to cite some quick stats, Dell has more than 21 million social connections through fans on Facebook, followers on Twitter, Dell community members, and more across the social web.

We talked about customer care and the engagement centers, and I talked about those six stages of the social media journey. Based on the Social Media Command Center that we have deployed within Dell, we also have a social outreach services team that responds to an average of 3,500 posts a week in 14 languages and we have an over 97 percent resolution rate.

We talked about Idea Storm and the number of ideas that have been generated out of it. Again, that’s more than 550 ideas to date.

Then, we talked about the Social Media and Communities University. That’s an education program that we have put in place, and to date more than 17,000 team members have completed the social media training certification through that program.

Social-media education

By the way, that’s the same module that we have started deploying through our social media professional services offering, where we’ve gone in and instituted the Social Media and Communities University program for our customers as well.

We’ve had a high success rate: some of our customers have benefited through our social media professional services team and by deploying a Social Media Command Center of their own.

The Red Cross is a great example, where we deployed the Social Media Command Center for them to be much more proactive in responding to people during times of calamity.

Clemson University is another example, where we've gone and deployed a Social Media Command Center for them that’s used for alternate academic research methods and innovative learning environments.

Gardner: Tell me a little bit about Dell's SNAP.

Dandekar: SNAP stands for Social Net Advocacy Pulse. This was a product that we developed in-house. As I said, we have been early users of listening and monitoring platforms and we have deployed Social Media Command Centers within Dell.

The challenge, as we kept using some of these tools, was that we realized the sentiment accuracy was really bad. Most of the time, when you take a quote and run it through one of the sentiment analyzers, it pretty much comes back saying it’s neutral, when there’s actually a lot of rich context hidden in the quote that was never even looked at.

The other thing was that we were tracking a lot of metrics around graphs and charts and reports, which was important, but we kind of lost the ability to derive actual meaningful insights from that data. We were just getting bogged down by generating these dashboards for senior execs without making a linkage on why something happened and what were some of the key insights that could have been derived from this particular event.

None of these tools is easy to use. Every time I have to generate a report or do something from one of these listening platforms, it requires some amount of training. There’s an expectation that the person doing it has been using the tool for some time. It takes a long time to reach the level of ease of use where anybody can go in, look at all these social conversations, and quickly pinpoint an issue.

Those are some of the pain points we identified. We asked, “Is there a way we can change this so we can start deriving meaningful insights? We don’t have to look at each and every quote and call it neutral sentiment. We can actually start deriving some meaningful context out of these quotes.”

Here is an example. A customer purchased a drive to upgrade a dead drive in a Dell Mini 9 system, which originally came with an 8 GB PCI solid-state drive. He took the 16 GB drive and replaced the 8 GB drive that was dead. The BIOS on the system instantly recognized it and booted it just fine. That’s a quote we got from one customer’s feedback.

Distinct clauses

If I had run that quote through one of the regular sentiment-analysis solutions, it would have pretty much said it was neutral, because there was really nothing much that it could get from it. But if you stop for a second and read through that quote, you realize that there are a couple of important, distinct clauses that can be separated out.

One thing is that he’s talking about a hard drive in the first line. Then, he’s talking about the Dell Mini 9 platform, and then he’s talking about a good experience he had with swapping the hard drive and that the BIOS was able to quickly recognize the drive. That’s a positive sentiment.

Instead of looking at the entire statement and assigning a neutral rating to it, if I can chop it down into meaningful clauses, then I can go back to customer care or my product manager and say, “Out of this, I was able to assign an intensity to the sentiment analysis score.” That makes it even more meaningful to understand what the quote was.

It’s not going to be just neutral, or just positive or negative, every time you run it through a sentiment-analysis engine. That’s just one flavor.
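As a minimal sketch of that clause-level idea, with a toy splitter and lexicon standing in for a real NLP pipeline, the hard-drive quote above nets out near neutral as a whole, while the individual clauses carry clear signal:

```python
# Toy clause-level sentiment: split a quote into clauses and score each one,
# instead of averaging the whole quote into "neutral".

POSITIVE = {"instantly", "recognized", "fine"}
NEGATIVE = {"dead"}

def clause_sentiments(quote):
    scores = []
    for clause in quote.split("."):
        words = set(clause.lower().split())
        if clause.strip():
            scores.append((clause.strip(), len(words & POSITIVE) - len(words & NEGATIVE)))
    return scores

quote = ("He took the 16 GB drive and replaced the 8 GB drive that was dead. "
         "The BIOS instantly recognized it and booted it just fine.")
for clause, score in clause_sentiments(quote):
    print(score, "|", clause)
# -1 | He took the 16 GB drive and replaced the 8 GB drive that was dead
#  3 | The BIOS instantly recognized it and booted it just fine
```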

You asked about sentiment gravity. That’s just one step in the right direction, where you take sentiment and assign a degree to it. Is it -2, -5, +5, or +10? The ability to add that extra color is something that we wanted to do on top of our sentiment analysis.

Beyond that, what if I could add where the conversation took place? Did it take place in the Wall Street Journal or Forbes, versus someone’s personal blog? Then I can assign it an intensity based on where the conversation happened.

The fourth attribute we wanted to add was author credibility. Who talked about it? Was it a person with a name and reputation in that area, or an angry customer who just had a bad experience? Based on that, I can rate and rank it by author credibility.

The fifth one we added was relevance. When did this event actually happen? If it happened a year or two back, or even six months back, and someone just wants to cite it as an example, then I really don’t want to give it that high a rating. I might adjust the sentiment to reflect that it’s not that relevant to today’s conversations.

If I take these attributes -- sentiment, degree of sentiment, where the conversation happened, who talked about it, and when and why it happened -- and convert them into a sentiment score, that’s a very powerful mechanism for calculating sentiment on all these conversations.

That gives me meaningful insights in terms of context. I can really mine that data to understand how I can take that and derive meaningful insights out of that. That’s what SNAP does, not just score a particular quote by pure sentiment, but add these other flavors on top of that to make it much more meaningful.
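Pulling those five attributes together, here is a hedged sketch of what a SNAP-like composite might look like; the weights, ranges, and factor names are assumptions for illustration, not Dell's actual formula:

```python
# Hypothetical SNAP-style composite: each clause's sentiment intensity is
# adjusted by venue, author credibility, and recency, then averaged.

def composite_sentiment(clauses):
    """clauses: dicts with intensity (-10..10) and venue_weight,
    author_credibility, recency, each assumed to lie in 0..1."""
    total = 0.0
    for c in clauses:
        total += (c["intensity"]
                  * c["venue_weight"]        # major publication vs. personal blog
                  * c["author_credibility"]  # known expert vs. one-off poster
                  * c["recency"])            # decays for old events
    return total / len(clauses)

clauses = [
    {"intensity": 5,  "venue_weight": 0.9, "author_credibility": 0.8, "recency": 1.0},
    {"intensity": -2, "venue_weight": 0.3, "author_credibility": 0.5, "recency": 0.4},
]
print(round(composite_sentiment(clauses), 2))  # 1.74
```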

Make it usable

Gardner: Have you considered productizing this and perhaps creating a service for the smaller companies that want to do this sort of social analysis and help them along the way?

Dandekar: We’re still working through those details and figuring out, as we always do, the best ways to bring solutions to market. For us, mid-market is our forte. That’s an area where Dell has really excelled. Being at the forefront of enterprise social media is great, but we also want to make sure we’re bringing tools to market to serve those mid-market companies as well.

By the way, we have stood up several solutions for our customers. One of them is the Social Media Command Center. We’ve also stood up social media professional services and we offer consulting services even to small- and mid-sized companies on how to mature in a social media maturity cycle. We are also looking at bringing SNAP to market. But if you’re talking about specific software solutions, that’s an area that we’re certainly looking into, and I would just say, “Stay tuned.”

Gardner: We’ll certainly look for more information along those lines. It's something that makes a lot of sense to me. Looking to the future, how will social become even more impactful?

People are increasing the types of activities they do on their mobile devices and that includes work and home or personal use and a combination of them, simultaneous perhaps. They look to more cloud models for how they access services, even hybrid clouds. It’s stretching across your company’s on-premises activities and more public cloud or managed service provider hosted services.

We expect more machine-to-machine data and activities to become relevant. Social becomes really more of a fire hose of data from devices, location, cloud, and an ever-broadening variety of devices. Maybe the word social is outdated. Maybe we’re just talking about data in general?

How do you see the future shaping up, and how do we consider managing the scale of what we should expect as this fire hose grows in size and in importance?

Embarking on the journey

Dandekar: This is a great question and I like the way you went on to say that we shouldn’t worry about the word social. We should worry about the plethora of sources that are generating data. It can be Facebook, LinkedIn, or a machine sensor, and this fits into the bigger picture of what's going to be your business analytics strategy going forward.

Since we’re talking about this in the context of social, with a lot of the companies we talk to -- whether enterprise-size or mid-market-size -- what we end up seeing most of the time is that people want to do social media analytics, or they want to invest in the social media space because some of their competitors are doing it, and they really don’t know what to expect when they embark on this journey.

A lot of companies have already gone through that transformation, but many companies are still stuck in asking, “Why do I need to adopt social media data as part of my enterprise data management architecture?”

Once you cross that chasm, that’s where you actually start getting into some meaningful data analytics. It's going to take a couple of years for most of the businesses to realize that and start making their investments in the right direction.

But coming back to your question on the bigger picture, I think it’s business analytics. The moment you bring in social media data, device data, logs, and sources like Salesforce and NetSuite, all that data together presents a unified picture across all the datasets out there.

And these can also be datasets like something from Dun & Bradstreet, which has a lot of data on leads and sales. Mix that with Salesforce data, and then bring in social media data. If I can take those three datasets and convert them into a powerful sales-analytics dashboard, that’s the nirvana of business analytics. We’re not there yet, but I do feel a lot of industry momentum going in that direction.
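As a hedged sketch of that three-dataset mashup, with entirely illustrative field names and values, the join might look something like this:

```python
# Hypothetical join of firmographic, CRM, and social data on a company key
# to feed a sales-analytics dashboard.

firmographics = {"acme": {"employees": 5000, "industry": "retail"}}
opportunities = [{"company": "acme", "stage": "proposal", "value": 250000}]
social_mentions = [{"company": "acme", "sentiment": 0.7},
                   {"company": "acme", "sentiment": 0.4}]

def build_dashboard_rows(firmo, opps, social):
    sentiment = {}
    for m in social:
        sentiment.setdefault(m["company"], []).append(m["sentiment"])
    rows = []
    for opp in opps:
        key = opp["company"]
        scores = sentiment.get(key, [])
        rows.append({
            "company": key,
            "stage": opp["stage"],
            "value": opp["value"],
            "industry": firmo.get(key, {}).get("industry"),
            "avg_sentiment": sum(scores) / len(scores) if scores else None,
        })
    return rows

print(build_dashboard_rows(firmographics, opportunities, social_mentions))
```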
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Dell Software.


Thursday, February 20, 2014

Istanbul-based Finansbank manages risk and security using HP ArcSight, Server Automation

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

Governance, risk management, and compliance (GRC) form a top tier of requirements for banks anywhere in the world as they create and deploy applications. A close second nowadays is speed to market, and rapid responsiveness to changing customer expectations and demands.

So when Finansbank, an Istanbul-based bank, knew they had to better manage risk -- but not lose time-to-market advantages -- they did a thorough analysis of available IT products and services. The result was an impressive record of managed risk and deployments, with an eye to greater automation over time.

BriefingsDirect had an opportunity to learn first-hand at the recent HP Discover 2013 Conference in Barcelona how Finansbank extended its GRC prowess -- while smoothing operational integrity and automating speed to deployment -- using several HP solutions.

Learn how from a chat with Ugur Yayvak, Senior Designer of Infrastructure at Finansbank in Istanbul. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Gardner: Tell us a bit about your organization and how you're keeping compliance and risk issues in check?

Yayvak
Yayvak: Finansbank is one of the largest banks in Turkey, with more than 12,000 employees and 600 branches in the country. Banking is a competitive world in Turkey, and to compete we have to be rapid. We have to do things faster. And security is a big deal for us.

Because we’re a bank, we need to obey the Payment Card Industry (PCI) and Sarbanes-Oxley (SOX) rules. To accomplish this, we had to create scripts to check the data on our servers, and it takes a lot of time to do compliance reporting. Security is a must for the servers because of attacks. We need to be compliant and secure, and we need to move fast.
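To give a flavor of what such checks involve -- this is a generic sketch, not Finansbank's actual scripts -- a PCI/SOX-style audit script might verify that sensitive files on a server aren't group- or world-writable and report the exceptions:

```python
# Generic compliance-check sketch: flag sensitive files with permissive modes.

import os
import stat

SENSITIVE_FILES = ["/etc/shadow", "/etc/passwd"]  # illustrative targets

def check_permissions(paths):
    findings = []
    for path in paths:
        try:
            mode = os.stat(path).st_mode
        except FileNotFoundError:
            continue
        if mode & (stat.S_IWGRP | stat.S_IWOTH):  # group- or world-writable
            findings.append((path, oct(mode & 0o777)))
    return findings

for path, mode in check_permissions(SENSITIVE_FILES):
    print(f"NON-COMPLIANT: {path} has mode {mode}")
```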
Gardner: And so as you began to look at these problems, how did you arrive at a solution?

Compliance and integrity

Yayvak: First of all, we needed a compliance and integrity-check solution. We did a proof of concept (POC) with three different vendors and we checked for performance, compliance, tool support, ease of use, reporting tools, and the support that the vendor would give us. After all that, we chose HP Server Automation.

We’ve been using it for six months. Three months went to the implementation process, during which we created our first rules and did some basic agent rollouts on the servers. Now we have 90 percent coverage of all our UNIX servers on the Server Automation side.
We’re also using Service Management and the ArcSight tool. We integrated Server Automation with Service Management, ArcSight, and Operations Orchestration to do our jobs in less time.
Gardner: What have been some of the results? What have you been gaining in terms of better control?

Yayvak: We’re creating monthly reports for our audit teams, and it takes less time. With the help of Server Automation, we’ve scheduled our jobs and the audit rules and reports that we want to share with our audit teams.

It takes much less time than it did before. Also, with the help of the scripts, the daily system-administration tasks are very easy. Previously, we were doing everything by hand. With the help of Server Automation, it’s very simple and we can get the results in much less time.

Looking to the future

Gardner: What about the future? Do you have plans to move further, perhaps using ArcSight? Are there other security benefits that you have in mind?

Yayvak: One goal is to improve auditing in Server Automation, because there are some scripts that we’ve changed, and the changes that we’ve made on the servers must be audited. We also want to integrate Server Automation with ArcSight to track the changes that we’ve made. And if we’ve made an error, we will be alerted by the ArcSight server.
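A sketch of that integration idea: ArcSight ingests events in the Common Event Format (CEF), so a change record could be forwarded as a CEF message over syslog for correlation and alerting. The host, port, and field values below are illustrative, not Finansbank's configuration:

```python
# Hypothetical change-event forwarder: emit a CEF message to a SIEM via syslog.

import logging
import logging.handlers

def send_change_event(siem_host, server, change_id, description, severity=5):
    # CEF:version|vendor|product|device_version|signature_id|name|severity|extension
    cef = (f"CEF:0|ExampleBank|ServerAutomation|1.0|{change_id}|"
           f"ConfigChange|{severity}|dvchost={server} msg={description}")
    handler = logging.handlers.SysLogHandler(address=(siem_host, 514))
    logger = logging.getLogger("cef-forwarder")
    logger.addHandler(handler)
    logger.warning(cef)
    logger.removeHandler(handler)

send_change_event("arcsight.example.com", "unix-prod-01", "1001",
                  "sshd_config modified outside change window")
```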

Right now, we’re using these solutions across our central data center, and also the disaster recovery site. But maybe later on, we can implement this for the branches to take care of the data servers there.
Gardner: What announcements or advances in recent HP products capture your interest?

Yayvak: The new version of Server Automation came out this year, and we wanted to know what has changed. Finansbank will also use many of HP’s products, like Service Manager, Orchestration Manager, and Operations Manager. This event was a good place to learn what has changed across these services.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.
