Friday, November 14, 2014

HP Analytics blazes new trails in examining business trends from myriad data

The next BriefingsDirect deep-dive big data thought leadership interview examines how HP analyzes its own vast data warehouses to derive new insights for its global operations, extensive supply chain, sales organization, global marketing groups, and customers.

We'll explore how the Analytics Group at HP, based in India, sifts through myriad internal data sources, as well as joins with other public data sets, to deliver entirely new intelligence value that helps make business more responsive and efficient.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn how, BriefingsDirect sat down with Pramod Singh, Director of Digital and Big Data Analytics at HP Analytics in Bangalore, India, at the recent HP Big Data 2014 Conference in Boston. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us a little bit about the Analytics Group at HP, what you do, and what’s the charter of your organization.

Singh: We have a big analytics organization in HP called Global Analytics, and it serves most of HP. About 80 to 90 percent of the analytics happening inside HP comes out of this ecosystem. We do analytics across the entire food chain at HP, which includes supply chain, marketing, and sales.

What I personally lead is an organization called Digital Analytics, and we are responsible for doing analytics across all digital properties for HP. That includes eCommerce, social media, search, and campaign analytics. We also have a Center of Excellence for Big Data Analytics, where we're using HP's big-data technologies -- the framework called HAVEn -- to help develop big-data solutions for HP customers, as well as for internal HP.
Gardner: Obviously, HP is a very large global company. What sort of datasets are we talking about here? What’s the volume that you're working with?

Data explosion

Singh: As you know, a data explosion is happening. On one hand, HP has done a very good job over the last six to seven years of getting most of its enterprise data into an enterprise data warehouse. We're talking about close to two petabytes of structured data.

The great part of this journey is that we have taken data from 700-800 different data marts into one enterprise data warehouse over the last three to four years. A lot of data that is not part of the enterprise is also becoming an important part of making business decisions.

A lot of the data I personally deal with in the digital space is what we call human-generated data -- social media data that no enterprise owns. It's open for anybody to use. On one hand, we've done a really good job of getting data into the enterprise and getting value out of it.

We've also started to analyze and harvest the data that is out in the open space. It could be blogs, Twitter feeds, or Facebook data. Combining that is what’s bringing real business value.

The Global Analytics organization is more than 1,000 people spread through different parts of the world. A big chunk of that is in Bangalore, India, but we have folks in the US and the UK. We have a center in Guadalajara, Mexico, and a couple of other locations in India. My particular organization is close to 100 people.

I have a PhD in pure mathematics, and before that an MBA in marketing. It's a little bit of an awkward mix, and I got into the analytics space in the mid-'90s working for Walmart.
I built out Walmart's Assortment Planning System in the late '90s and then came to HP in 2000, leading an advanced data-mining center in Austin, Texas. From there, I moved into e-business analytics for a few years and then to customer knowledge management. I spent five years in IT developing an analytics platform.

About a year and a half ago, I got the opportunity to lead the big-data practice for this organization called Global Analytics. In five years, it had gone from five people to more than 1,000, and that intrigued me a lot. I took the opportunity and moved to India to lead that team.

More insights

Gardner: Pramod, when we look back into this data, do you gain more insights knowing what you're looking for, or not knowing what you're looking for? What kind of insights were the unexpected consequences of your putting together this type of data infrastructure and then applying big-data analytics to it?

Singh: We deal with that day-in and day-out. I'll give you a couple of examples. This is something that happened about three or four years ago at HP. We were looking at a classic problem in marketing to US small and medium-sized businesses (SMBs). We had a fixed budget for marketing, and across the US, there are more than 20 million SMBs. The classic definition of an SMB is any business with 100-500 employees.

HP had an install base of a small part of that. We realized that particular segment of SMBs is squeezed between the classic consumer, where you can do mass marketing such as TV advertising, and the enterprise, where you can actually put bodies -- people who have relationships. SMBs are squeezed in between those two extremes.

On one hand, you can't reach out to every single one of them; it's just way too expensive. On the other hand, if you market to all of them broadly, you don't get the best out of it.

We were starting to work on something like that. I was approached by a vice president in marketing who said revenues were declining and they had a limited marketing budget. They didn't know what to do.

This is where one of those unexpected things came in. I said, "Let's see whether, in that install base, there are different segments of customers behaving differently." That led us on a journey where we said, "How do we start to do that? Let's figure out the different attributes of data we can capture."

On one hand, if you look at SMBs, you can capture who they are, what industry segment they're in, how many employees they have, where they're based, and who the CEO is. That's what we call firmographics.

On the other hand, you have classes of data involving their interaction with HP. It could be things like how many PCs or servers they bought, how long ago they bought them, and how much money they spent -- the whole transactional aspect of it.

Then there are derived attributes. You may be able to derive that, in the last year, they came to us four times. What interaction did we have on the website? For example, did they come to us through a web channel? If they did, how many email offers were sent to them? How many of those were clicked? How many converted? Those are the classes of data we could capture.

The question then became what do we do with that? Again, when you do data mining and analytics, you may not know where this will lead you.

Mathematical modeling

We thought that maybe there were different classes of customers. We pulled our data together and started to do mathematical modeling, using clustering techniques such as K-Means. We started to get some results and analyze them. In this type of situation, you have to be careful, because some things may look mathematically correct but may not have real business value behind them.

Once we started to look at those things, we went through multiple iterations. We realized that we weren't getting segments or clusters that were very distinct. One day, I was driving home in Austin, and I said, "You know what? Who they are I don't control, but what they're doing with HP we understand reasonably well."

So we started to do clustering based only on those attributes, and that's where the "aha" moment came. We started to find these clusters, which we call segments, and eventually found a cluster of 7 to 8 percent of the population that brought in 45 percent of revenue.
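To make that step concrete, here is a minimal sketch of behavioral K-Means clustering in Python with scikit-learn. The file name, column names, and the choice of five clusters are illustrative assumptions, not HP's actual schema or model.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

accounts = pd.read_csv("smb_install_base.csv")  # hypothetical extract

# Cluster only on behavioral/transactional attributes -- not firmographics.
behavior = accounts[["units_bought", "months_since_last_purchase",
                     "total_spend", "web_visits", "email_clicks"]]

X = StandardScaler().fit_transform(behavior)  # put attributes on a common scale
kmeans = KMeans(n_clusters=5, n_init=10, random_state=42).fit(X)
accounts["segment"] = kmeans.labels_

# Revenue share per segment surfaces the small, high-value cluster.
print(accounts.groupby("segment")["total_spend"].agg(["count", "sum"]))
```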

The marketers started to say that this was a gold mine. That's something we never expected. We put together a structure. Once we figured out these four or five clusters, we tried to figure out why they were clustered together. What was common?
We built out a primary research effort, where we took a random sample out of each one of those clusters, interviewed those people, and were able to build a very good profile of what these segments were.

There are 20 million SMBs in the US, and we were able to build a model to predict which of these prospects were similar to the clusters we had. That's how we were able to find prospects that looked like our most profitable customers, which we ended up calling Vanguards. That resulted in a tremendous dollar increment for HP. It's a good example of what you talked about: finding unexpected things.
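The look-alike step can be sketched the same way: train a classifier on the install base, labeling members of the high-value cluster, using only attributes also known for outside prospects, and then score the prospect universe. This continues the DataFrame from the sketch above; the field names, files, and cluster ID are hypothetical, and real categorical firmographics would need encoding first.

```python
from sklearn.linear_model import LogisticRegression

VANGUARD_SEGMENT = 2  # whichever cluster profiled as the high-value one

# Use only attributes we can also observe for prospects (numeric here for
# brevity; categorical fields would need one-hot encoding in practice).
features = ["employee_count", "region_code"]

model = LogisticRegression(max_iter=1000)
model.fit(accounts[features], accounts["segment"] == VANGUARD_SEGMENT)

prospects = pd.read_csv("us_smb_prospects.csv")  # hypothetical prospect universe
prospects["vanguard_score"] = model.predict_proba(prospects[features])[:, 1]
print(prospects.nlargest(10, "vanguard_score"))  # best look-alike targets first
```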

We just wanted to analyze data. It led us on a journey, and we ended up finding a customer group we weren't even aware of. Then we could build a marketing strategy to target them and get value out of it.

Gardner: At the Big Data Conference, I've spoken to other organizations that are creating an analytics capability and then exposing it to as many of their employees as possible, hoping for this very sort of unexpected positive benefit. Is there a way that you're taking your analytics, either through visualization or tools, and allowing a larger population within HP to experiment with it?

Singh: We're trying to democratize analytics as much as we can. One thing we're realizing is that to get the full value, you don't want data to stay in silos. So there are a couple of things you have to do. To build an ecosystem where you have a good set of motivated people and can give them a career path, we created this organization called Global Analytics. You get a critical mass of people who challenge each other, learn from each other, and do a lot of analytics.

But it's also very important that, on the consumption side, you have people who are analysts and understand analytics, so they get the best value out of it. We try to create that ecosystem, and we have seen both ends of it.

Good career path

If you give just one data miner or analytics person to one team, sometimes that person doesn't find an ecosystem in which to challenge himself or herself. We're trying to do it on both sides of the fence, so that we can provide people with a good career path.

Hiring these folks is not easy. Once you've hired them, retaining them is not easy. You want to make sure to create an ecosystem where it’s challenging enough for these people to work. It also has to be an ecosystem where you continually challenge them and keep training them.

The analytical techniques are evolving. When I started, things were stable for years. Now newer classes of data are coming in, newer techniques are coming in, and newer classes of business problems are coming in. It's very important that we keep the ecosystem going. So we try to do it on both sides.

Gardner: Very interesting. HP, of course, has its own line of products for big-data analysis. You're such a large global enterprise that you're doing lots of analysis, as any good business should, but you're also being asked to show how this works. Are there some specific use cases that demonstrate for other enterprises what you've learned yourselves?

Singh: There are several that we can talk about. One is in the social-media space. I briefly talked about that. My career evolved doing analytics on what I call "data inside the enterprise." But over the last couple of years, we started to look at data outside the enterprise.

Recently, we looked at a bank. We were able to harvest publicly available data from the Internet -- Glassdoor, for example. Glassdoor is a website where employees of a company can post feedback, talk about the company, and rate things.

We were presenting to the executives of this particular bank, and from that data alone we were able to tell them the overall employee morale. We figured out that the employees' work-life balance wasn't very good.

The main thing the employees weren't happy about was the leave and vacation policy. We drilled down and figured out that the bankers seemed to be fairly happy, but the IT people and analysts weren't very happy. Again, this is one example where we didn't ask for a single line of data from the customer. This data is publicly available. You and I, or anybody else, can go get it. I could do that same analysis for HP or any other company.
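The drill-down described here needs nothing more than grouped aggregates over the harvested reviews. A minimal sketch, assuming the public reviews have already been scraped into a CSV; the field names are invented for the example.

```python
import pandas as pd

reviews = pd.read_csv("bank_reviews.csv")  # hypothetical harvest of public reviews

# Overall morale: average rating across the review dimensions.
print(reviews[["overall", "work_life_balance", "compensation"]].mean())

# Drill down: which job families drive the low work-life-balance score?
print(reviews.groupby("job_family")["work_life_balance"].mean().sort_values())
```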

That's where I believe the classes of analytics we're doing are changing. A lot of the time, your competitive differentiator is the ability to do things with that data. Data is a corporate asset and will remain one, but this class of what we call user-generated data is changing analytics as a whole. The ability to harvest it and, more importantly, get value out of it will be the competitive differentiator.

Gardner: Any other use cases that demonstrate the power of a particular type of platform -- let's say Vertica in HAVEn, where you've got the power of a columnar architecture and the ability to bring in unstructured data from Autonomy? Maybe there are a couple of use cases that demonstrate the unique attributes of HAVEn when it comes to the inclusivity and comprehensiveness of information today?

Game changer

Singh: Let me talk about a couple of things that have happened in the HAVEn ecosystem. One of the main workhorses in HAVEn is our massively parallel database, Vertica. In addition to being a database that can ingest large volumes of data very quickly and deliver strong query performance, the game-changer for me as an analytics practitioner has been the ability to do analytics in the database.

If I look at my career over the last 20-22 years, most of the time what happens in the analytics space is that you have data residing in a database or an enterprise data warehouse. When you want to build a model, you take the data out and use an analytics platform like SAS, R, or SPSS. You do something there, and you either bring the data back into the environment or you run the models and publish them out.

What Vertica has done that's unique is give us a framework -- its user-defined extensions -- through which we can build a data-mining model, run it directly on the database engine, and take the output out.

An example we took to HP Discover a couple of months ago was predicting the failure of a machine before the failure actually happens. HP has these big machines and big printers, which are very expensive.

Like a lot of high-end devices these days, they send out a lot of data. They send out data about when you're using the machine. The sensors send out a lot of information -- the pressure of the valves, the temperature they're in, the throughput they're giving you, or the number of pages you've printed.

They also give you data on the events when the machine was not performing optimally or actually failed. We were able to ingest all of that, put the data into the Vertica platform, and build predictive models using the open-source R language. We built a model that can predict the failure of a machine.

Looking at each component of failure, we could predict with a certain probability when the machine would fail, so our service reps can be proactive and not wait for the machine to fail. That's one example of doing in-database data mining using Vertica.
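As a sketch of that in-database pattern: instead of exporting sensor data to a modeling tool, the scoring function runs inside Vertica and a thin client just issues SQL. The connection details, table, and scoring function name (score_failure_risk) are hypothetical; the driver shown is the open-source vertica-python client.

```python
import vertica_python

conn_info = {"host": "vertica.example.com", "port": 5433,
             "user": "analyst", "password": "...", "database": "edw"}

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    # The model runs inside the database engine, next to the data.
    cur.execute("""
        SELECT machine_id,
               score_failure_risk(valve_pressure, temperature, pages_printed)
        FROM sensor_readings
        WHERE reading_date > CURRENT_DATE - 30
    """)
    for machine_id, risk in cur.fetchall():
        print(machine_id, risk)  # proactively dispatch service above a threshold
```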

Another example used more components, this one in the social-media space. One classic problem in the social-media space, as you're probably familiar with, is finding influencers.

I gave a talk yesterday about how you do that. There are classical, uni-dimensional approaches based on the number of followers or retweets you have. By those measures, Barack Obama or Lady Gaga would be big influencers, but Barack Obama may not be a big influencer for cloud computing for HP.

So you build those classes of algorithms. My team has built out three patented algorithms to identify influencers in this space. We've built a framework where we can source data from social media and drop it into a Hadoop environment.
We use Autonomy to enrich it and attach sentiment, and then drop the data into the Vertica environment. In Vertica, you run the algorithms and get an output. Then you can score and predict who the influencers are for the topic you're looking for.

Influencers

I gave the example of Barack Obama: in general a big influencer, but not for all topics. In politics or the US government, he's a big influencer, but not for cloud computing. Influence is also a function of time. Somebody like Diego Maradona was probably a big influencer in soccer in the '90s, but in 2014, not so much.

You have to make sure you can incorporate those factors into the logic of your algorithm. We've been able to use multiple components of HAVEn to build out a complete framework where we can tell numerically who the main influencers are and how influential they are. For example, if you get a score of 93 and I get a score of 22, you're roughly four times as influential as I am.
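For illustration only -- this is not HP's patented algorithms, just a common baseline for topic-scoped, time-decayed influence: build a directed graph of topic-relevant retweets, weight edges by recency, and rank authors with PageRank. The interaction triples here are stand-in sample data.

```python
import time
import networkx as nx

HALF_LIFE_DAYS = 90  # assumed decay horizon; recent interactions count more

def edge_weight(ts: float) -> float:
    age_days = (time.time() - ts) / 86400
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

# Stand-in sample: (author, retweeter, unix timestamp), pre-filtered to a topic.
topic_interactions = [
    ("cloud_guru", "dev_anna", time.time() - 5 * 86400),
    ("cloud_guru", "ops_bob", time.time() - 40 * 86400),
    ("ml_carol", "dev_anna", time.time() - 200 * 86400),
]

G = nx.DiGraph()
for author, retweeter, ts in topic_interactions:
    prev = G.get_edge_data(retweeter, author, default={"weight": 0.0})["weight"]
    G.add_edge(retweeter, author, weight=prev + edge_weight(ts))

scores = nx.pagerank(G, weight="weight")  # per-user influence for this topic
print(sorted(scores, key=scores.get, reverse=True))  # most influential first
```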

Gardner: For other organizations that are interested in learning more about how HP Analytics is operating and maybe learning from your example, are there any resources or websites we can go to, where you are providing more information about HP Analytics?

Singh: Definitely. There are multiple ways to approach us. We have our own website, and you can talk to the Vertica sales team, who can connect you to us. As I said, we do analytics for all of HP and for select customers. We don't have a direct sales arm; we work through our partners in Enterprise Services, as well as with the software team.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Tuesday, November 11, 2014

Vichara Technologies grows the market for advanced analytics after cutting its big data teeth on Wall Street

The next BriefingsDirect deep-dive big data benefits case study interview explores how Vichara Technologies in Hoboken, New Jersey is expanding its big-data capabilities from their origins on Wall Street into other areas, thereby demonstrating the growing marketplace for advanced big-data analytics services.

The use of HP Vertica as a core big-data component has allowed Vichara to extend its easier-to-use financial modeling tools and apply them to other industries, such as insurance and healthcare.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about how advanced big data, cloud, and converged infrastructure implementations are expanding the impact and value of rapid and increasingly predictive analytics, BriefingsDirect sat down with Tim Meyer, Managing Director at Vichara Technologies at the recent HP Big Data 2014 Conference in Boston. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us how your organization evolved, and how big data has become such a large part of the marketplace for gaining insights into businesses.

Meyer: The company has its roots in analytics and risk modeling for all sorts of instruments that are used on Wall Street for predicting prices and valuing instruments. As the IT infrastructure grew from Excel to databases, and eventually to very fast databases such as Vertica, we realized that there were many problems that couldn't be solved before, or that took way too long to answer.

Wall Street people measure time in seconds, not in hours. We've found that there's great value in answering a lot of business intelligence (BI) questions -- especially around valuations and risk models, as well as portfolio management. These are very large portfolios and datasets that have to be analyzed. We think this is a great use of big-data analytics.

Gardner: How long have you been using Vertica? How did it become a part of your portfolio of services?
Meyer: We've been using Vertica for at least two years now. It was one of the early ones, and we recognized it as one of the very fastest databases. We try to use as many of these components as possible. We really like Vertica for its capabilities.

Risk assessment

Gardner: Tim, this whole notion of risk assessment is of interest to me. I think it's coming to bear on more industries. People are also interested in extending from knowing what has happened to being able to predict, and then better prescribe new efforts and new insights.

Tell me about predictive risk assessment. How do you go about that, and what should other companies understand about that?

Meyer: Risk assessment comes about from looking at how prices fluctuate and how interest rates move, and thus create changes in derivatives. What has happened most recently is that a lot of the banks and hedge funds have recognized this. Not only is [predictive risk assessment] a business imperative for them -- to have that half-percent hedge -- but there are also compliance reasons for which they need to predict what their business is going to look like.

There are now more and more demands for stress testing, as well as demands from international banking regulations such as Basel III, which require that businesses such as hedge funds and banks not just look behind, but look ahead at how their business is going to look in a year. So this becomes very important for a host of reasons, beyond just how your business is doing.
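To ground the stress-testing idea, here is a minimal one-day historical value-at-risk (VaR) sketch in Python -- one common measure behind such forward-looking views. The returns matrix is random stand-in data; a real desk would use actual historical returns and live positions.

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, size=(500, 3))  # 500 days x 3 instruments (stand-in)
positions = np.array([4e6, 2.5e6, 1.5e6])     # current dollar exposure per instrument

pnl = returns @ positions          # historical daily P&L of today's book
var_99 = -np.percentile(pnl, 1)    # loss exceeded on only 1% of days
print(f"1-day 99% VaR: ${var_99:,.0f}")
```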

Gardner: If I were a business and wanted to start taking advantage of what's now available through big-data analytics -- and at a more compelling price and higher performance than in the past -- what are some of the first steps?
Do I need to think about the type of data or the type of risk? How do you go about recognizing that you can now get the technology to do this at the analytics level, while there's still the needed understanding of how to do it at the process and methodological level?

Meyer: We work very closely with our customers and try to separate the algorithmic work from the development work. A lot of our customers have more than a few Caltech and MIT PhDs who do the algorithmic definitions. But all of them still need the engine -- the machine with its scripting and the fast capability to build those queries right into the system as quickly as possible.

We usually work with these kinds of people, and it's a bit of a team effort. We find that's the way to figure out what our value is and what our customer's value is. Together, it has turned out to be very good teamwork.

Gardner: And you are a consultancy, as well as a services provider? Do you extend into any hosting or do you have a cloud approach? How do you manage the technology for the consulting and services you offer?

Broader questions

Meyer: We expand from the core products and tools into broader questions for people who want a proof of concept (POC) of this new technology. We build those on an ongoing basis. People also want to look at options such as the performance of different clouds, which does vary.

So we take on that kind of consulting work as well, and sometimes it expands into back-office compliance or billing issues. They all relate to the core business of managing portfolios, yet they're linked.

Very often, we've done those kinds of projects, and we see even more of these possibilities as compliance becomes a bigger issue -- Dodd-Frank as well as Basel III in the financial world. But these are really no different from the many regulations coming on the healthcare side for paperwork management, for example.

Gardner: So that raises the question of which verticals you expect first. Where are predictive risk assessment and its analytics requirements likely to appear first?

Meyer: One thing we've learned from our experience in financial modeling and tools is that there's always a need for people who are totally unskilled in SQL or other query languages to get answers quickly. Although many people have different takes on this, we think we've found some tools that are unique. And we think these tools will apply to other industries, most particularly healthcare.

These are big problems, but the way we think of it is to start small -- with a POC, or by defining a very small problem and solving it -- and not to try to take a bite of the entire elephant, so to speak. We find that to be a much better approach to going into new segments, and we'll be looking at both insurance and healthcare as two examples.
Gardner: Back to the technology front. Are there any developments in the technology arena that give you more confidence that you can take on any number of data types, information types, and scale and velocity types?

I'm thinking of cloud or converged infrastructure support of in-memory or columnar architectures. Is there a sense of confidence that no matter what you bite off in the market, you have the technology, and the technology partner, to back you up?

Meyer: We're finding that there is much more maturity in a lot of database technologies that are now coming out.

There is always something new on the horizon, but there are, as you said, columnar architectures and so on. These are already here, and we're constantly experimenting with them.

To your point about cloud infrastructure and where that's going, it's the same thing. We see ParAccel and Amazon data warehouses such as Redshift showing us the way, where a lot of the technology is becoming very prepackaged. The value-add is to talk to the customer and speed up that process of integration.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Wednesday, October 29, 2014

Five ways to make identity management work best across hybrid computing environments

Any modern business has been dealing with identity and access management (IAM) from day one. But now, with more critical elements of business extending beyond the enterprise, access control complexity has been ramping up due to cloud, mobile, bring your own device (BYOD), and hybrid computing.

And such greater complexity forms a major deterrent to secure, governed, and managed control over who and what can access your data and services -- and under what circumstances. The next BriefingsDirect thought leader discussion then centers on learning new best practices for managing the rapidly changing needs around IAM.

While cloud computing gets a lot of attention, those of us working with enterprises daily know that the vast majority of businesses are, and will remain, IT hybrids, a changing mixture of software as a service (SaaS), cloud, mobile, managed hosting models, and of course, on-premises IT systems.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

We're here with a Chief Technology Officer for a top IAM technology provider to gain a deeper understanding of the various ways to best deploy and control access management in this ongoing age of hybrid business.

Here to explore five critical tenets of best managing the rapidly changing needs around identity and access management is Darran Rolls, Chief Technology Officer at SailPoint Technologies in Austin, Texas. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: There must be some basic, bedrock principles that we can look to that will guide us as we're trying to better manage access and identity.

Rolls: Absolutely, there are, and I think that will be a consistent topic of our conversation today. It's something that we like to think of as the core tenets of IAM. As you very eloquently pointed out in your introduction, this isn't anything new. We've been struggling with managing identity and security for some time. The changing IT environment is introducing new challenges, but the underlying principles of what we're trying to achieve have remained the same. 

The idea of holistic management for identity is key. There's no question about that, and something that we'll come back to is this idea of the weakest link -- a very commonly understood security principle. As our environment expands with cloud, mobile, on-prem, and managed hosting, the idea of a weak point in any part of that environment is obviously a strategic flaw.

As we like to say at SailPoint, it's an anywhere identity principle. That means all people -- employees, contractors, partners, customers -- from any device, whether you're on a desktop, in the cloud, or on mobile, to anywhere. That includes on-prem enterprise apps, SaaS apps, and mobile. It's certainly our belief that for any IAM technology to be truly effective, it has to span all for all -- all access, all accounts, and all users, wherever they live in that hybrid runtime.

Gardner: So we're in an environment now where we have to maintain those bedrock principles for true enterprise-caliber governance, security, and control, but we have a lot more moving parts. And we have a cavalcade of additional things to support, which, to me, almost invites those weak links to crop up.

So how do you combine the two? How do you justify and reconcile these two realities -- secure and complex?

Addressing the challenge

Rolls: One way comes from how you address the problem and the challenge. Quite often, I'm asked if there's a compromise here: If I move my IAM to the cloud, will I still be able to sustain my controls and management and do risk mitigation, which is what we were trying to get to?

My advice is if you're looking at an identity-as-a-service (IDaaS) solution that doesn’t operate in terms of sustainable controls and risk mitigation, then stop, because controls and risk mitigation really are the core tenets of identity management. It’s really important to start a conversation around IDaaS by quite clearly understanding what identity governance really is.

This isn’t an occasional, office-use application. This is critical security infrastructure. We very much have to remember that identity sits at the center of that security-management lifecycle, and at the center of the users’ experience. So it’s super important that we get it right.

So in this respect, I like to think that IDaaS is more of a deployment option than any form of a compromise. There are a minimum set of table stakes that have to be in place. And, whether you're choosing to deploy an IDaaS solution or an on-prem offering, there should be no compromise in it.

We have to respect the principles of global visibility and control, of consistency, and of user experience. Those things remain true for cloud and on-prem, so the song remains the same, so to speak. The IT environment has changed, and the IAM solutions are changing, but the principles remain the same.

Gardner: I was speaking with some folks leading up to the recent Cloud Identity Summit, and more and more, people seem to be thinking that IAM is the true extended-enterprise management. It's more than just identity and access; it spans services and is essential for extended enterprise processes.

Also, to your point, being more inclusive means that you need to have the best of all worlds. You need to be doing IAM well on-premises, as well as in the cloud -- and not either/or.

Rolls: Most of the organizations that I speak to these days are trying to manage a balance between being enterprise-ready -- so supporting controls and automation and access management for all applications, while being very forward looking, so also deploying that solution from the cloud for cost and agility reasons. 

For these organizations, choosing an IDaaS solution is not a compromise in risk mitigation, it’s a conscious direction toward a more off-the-shelf approach to managing identity. Look, everyone has to address security and user access controls, and making a choice to do that as a service can’t compromise your position on controls and risk mitigation.

Gardner: I suppose the risk of going hybrid is that if you have somewhat of a distributed approach to your IAM capabilities, you'll lose that all-important single view of management. I'd like to hear more, as we get into these tenets, of how you can maintain that common control.

You have put in some serious thought into making a logical set of five tenets that help people understand and deal with these changeable markets. So let’s start going through those. Tell me about the first tenet, and then we can dive in and maybe even hear an example of where someone has done this right.

Focusing on identity

Rolls: Obviously it would be easy to draw up 10 or 20, but we like to compress it, so there's probably always potential for more. I wouldn't necessarily say these are in any specific order, but the first one is the idea of focusing on the identity, not the account.

This one is pretty simple. Identities are people, not accounts in an online system. And something we learned early in the evolution of IAM was that in order to gain control, you have to understand the relationships between people -- identities -- and their accounts, and between those accounts and the entitlements and data they give access to.

So this tenet really sits at the heart of the IAM value proposition -- it's all about understanding who has access to what, and what it really means to have that access. By focusing on the identity -- and capturing all of the relationships it has to accounts, to systems, and to data -- that helps map out the user security landscape and get a complete picture of how things are configured.


Gardner: If I understand this correctly, all of us now have multiple accounts. Some of them overlap. Some of them are private. Some of them are more business-centric. As we get into the Internet of Things, we're going to have another end-point tier associated with a user, or an identity, and that might be sensors or machines. So it’s important to maintain the identity focus, rather than the account focus. Did I get that right?

Rolls: We see this today in classic on-prem infrastructure with system-shared and -privileged accounts. They are accounts that are operated by the system and not necessarily by an individual. What we advocate here, and what leads into the second tenet as well, is this idea of visibility. You have to have ownership and responsibility. You assign and align the system and functional accounts with people that can have responsibility.

In the Internet of Things, I would by no means say that it's something fundamentally new -- if nothing else, it's potentially a new order of scale. But it's functionally the same thing: understanding the relationships.

For example, I want to tie my Nest account back to myself or to some other individual, and I want to understand what it means to have that ownership. It really is just more of the same, and those principles that we have learned in enterprise IAM are going to play out big time when everything has an identity in the Internet of Things.

Gardner: Any quick examples of tenet one, where we can identify that we're having that focus on the user, rather than the account, and it has benefited them?

Rolls: For sure. The consequences of not understanding and accurately managing those identity and account relationships can be pretty significant. Unused and untracked accounts -- what we commonly refer to in the industry as "orphan accounts" -- often lead to security breaches. That's why, if you look at the average identity audit practice, it's very focused on controls for those orphan accounts.

We also know for a fact, based on the network forensic analysis that happens post-breach, that in many of the high-profile, large-scale security breaches we've seen over the last two to five years, the back door was left open by an account that nobody owns or manages. It's just there. And if you go over to the dark side and look at how the bad guys construct vulnerabilities, the first thing they look for is these unmanaged accounts.

So it’s low-hanging fruit for IAM to better manage these accounts because the consequences can be fairly significant.
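The orphan-account control itself can be stated very simply: aggregate the accounts from each application and diff them against the authoritative identity list. A toy sketch with invented names:

```python
# From the HR system of record (the authoritative identity list).
hr_identities = {"adavis", "bkhan", "clopez"}

# Aggregated read-only from one application's account store.
app_accounts = {"adavis", "bkhan", "clopez", "svc_backup", "jsmith_old"}

orphans = app_accounts - hr_identities  # accounts with no living identity
print(sorted(orphans))  # ['jsmith_old', 'svc_backup'] -> assign an owner or disable
```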

Tenet two

Gardner: Okay, tenet two. What’s next on your priority list?

Rolls: The next is two-fold. Visibility is king, and silos are bad. This is really two thoughts that are closely related.

The first part is the idea that visibility is king, and this comes from the realization that you have to be able to capture, model, and visualize identity data before you have any chance of managing it. It’s like the old saying that you can’t manage what you can’t measure.

It's the same thing for identity. You can't manage the access and security you don't see, and what you don't see is often what bites you. So this tenet is the idea that your IAM system absolutely must support rapid, read-only aggregation of account and entitlement information as a first step, so you can understand the landscape.

The second part is around the idea that silos of identity management can be really, really bad. A silo here is a standalone IAM application, or what one might think of as a domain-specific IAM solution. These are things like an IDaaS offering that only does cloud apps, or an Active Directory-only management solution -- basically any IAM tool that creates a silo of process and data. This isolation goes against the idea of visibility and control that we just covered in the first tenet.

You can't see the data if it's hidden in a siloed system. It's isolated and doesn't give you the global view you need to manage all identity for all users. As a vendor, we see some real-world examples of this. SailPoint just replaced a legacy provisioning solution at a large US-based bank, for example, because the old system was only touching 12 of their core systems.

The legacy IAM system the bank had was a silo managing just the Unix farm. It wasn't integrated, and its data and use case weren't shared. The customer needed a single place for their users to go to get access, and a single point of password control for their on-prem Unix farm and their cloud-based, front-end applications. So today SailPoint's IdentityNow provides that single view for them, and things are working much better.

Gardner: It also reminds me that we need to be conscious of supporting the older legacy systems, recognizing that they weren't necessarily designed for the reality we're in now. We also need to be flexible in the sense of being future-proof. So it's having visibility across your models that are shifting toward hybrid and cloud, but also visibility across the other application sets and platforms that were never created with this mixture of models we're now supporting.

Rolls: Exactly right. In education, we say "no child left behind." In identity, we say "no account left behind, and no system left behind." We also shouldn't forget there's a cost associated with maintaining those siloed IAM tools. If a system only supports cloud, or only on-prem, or manages identity just for mobile, SaaS, or one area of the enterprise, there's cost: a real dollar cost for buying and maintaining the software and, probably more importantly, a soft cost in the end-user experience for the people who have to manage across those silos. So these IAM silos not only prevent visibility and control; there's a big, real dollar cost to the business as well.

Gardner: This gets closer to the idea of a common comprehensive view of all the data and all the different elements of what we are trying to manage. I think that's also important.

Okay, number three. What are we looking at for your next tenet, and what are the ways that we can prevent any of that downside from it?

Complete lifecycle

Rolls: This tenet comes from the school of identity hard knocks, and is something I’ve learned from being in the IAM space for the past 20 or so years -- you have to manage the complete lifecycle for both the identity, and every account that the identity has access to.

Our job in identity management, our “place” if you will in the security ecosystem, is to provide cradle-to-grave management for corporate account assets. It's our job to manage and govern the full lifecycle of the identity -- a lifecycle that you’ll often hear referred to as JML, meaning Joiners, Movers and Leavers.

As you might expect, when gaps appear in that JML lifecycle, really bad things start to happen. Users don't get the system access they need to get their jobs done, the wrong people get access to the wrong data in the Mover phase, and critical things get left behind when people leave. You have to track the account through that whole JML lifecycle -- cradle to grave, as I said.
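A toy sketch of the JML idea -- each lifecycle event drives a provisioning action so no account outlives its identity. The event names and actions are illustrative, not any specific product's API.

```python
def provision(identity: str, entitlement: str) -> None:
    print(f"grant {entitlement} to {identity}")    # stand-in for a real connector call

def recertify(identity: str, account: str) -> None:
    print(f"recertify {account} for {identity}")   # stand-in for an access review

def deprovision(identity: str, account: str) -> None:
    print(f"disable {account} for {identity}")     # stand-in for a real connector call

def handle_lifecycle_event(event: str, identity: str, accounts: list[str]) -> None:
    if event == "joiner":
        provision(identity, "birthright-role")     # Join: grant starting access
    elif event == "mover":
        for account in accounts:
            recertify(identity, account)           # Move: re-check everything held
    elif event == "leaver":
        for account in accounts:
            deprovision(identity, account)         # Leave: nothing left behind

handle_lifecycle_event("leaver", "jsmith", ["AD", "salesforce.com", "unix-farm"])
```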

That's a very big issue for most of the companies we talk to, and it's all captured in that lifecycle.

Gardner: So it’s not just orphan accounts, but it’s inaccurate or outdated accounts that don’t have the right and up-to-date information. Those can become back doors. Those can become weak links.

It appears to me, Darran, that there's another element here in how our workplace is changing. We're seeing more and more of what they call "contingent workforces," where people will come in as contractors or third-party suppliers for a brief period of time, do a job, and get out.

It’s this lean, agile approach to business. This also requires a greater degree of granularity and fine control. Do you have any thoughts about how this new dynamic workforce is impacting this particular tenet?

Rolls: It’s certainly increasing the pressure on IT to understand and manage all of its population of users, whether they're short-term contractors or long-term employees. If they have access to an asset that the business owns, it’s the business's fiduciary duty to manage the lifecycle for that worker.

In general, worker populations are becoming more transient and work groups more dynamic. Even if it’s not a new person joining the organization, we’re creating and using more dynamic groups of people that need more dynamic systems access.

It’s becoming increasingly important for businesses today to be able to put together the access that people need quickly when a new project starts and then accurately take it away when the project finishes. And if we manage that dynamic access without a high degree of assured governance, the wrong people get to the wrong stuff, and valued things get left behind.

Old account

Quite often, people ask me if it really matters when the odd account gets left behind, and my answer usually is: it certainly can. A textbook example of this is when a sales guy leaves his old company and joins a competitor, and no one takes away his salesforce.com account. He then spends the next six months dipping into his old company's contacts and leads, because he still has access to the application in the cloud.

This kind of stuff happens all the time. In fact, we recently replaced another IDaaS provider at a client on the West Coast, specifically because “the other vendor” -- who shall remain nameless -- only did just-in-time SAML provisioning, with no leaver-based de-provisioning. So customers really do understand this stuff and recognize the value. You have to support the full lifecycle for identity or bad things happen for the customer and the vendor.

Gardner: All right. We were working our way through our tenets. We're now on number four. Is there a logical segue between three and four? How does four fit in?

Rolls: Number four, for me, is all about consistency. It talks to the fact that we have to think of identity management in terms of consistency for all users, as we just said, from all devices and accessing all of our applications.

Practically speaking, this means that whether you sit with your Windows desktop in the office, or you are working from an Android tablet back at the house, or maybe on your smartphone in a Starbucks drive-through, you can always access the applications that you need. And you can consistently and securely do something like a password reset, or maybe complete a quarterly user access certification task, before hitting the road back to the office.

Consistency here means that you get the same basic user experience, and I use the term user experience here very deliberately, and the same level of identity service, wherever you are. It has become very, very important, particularly as we have introduced a variety of incoming devices, that we keep our IAM services consistent.

Gardner: It strikes me that this consistency has to be implemented and enforced from the back-end infrastructure, rather than the device, because the devices are so changeable. We're even thinking about a whole new generation of devices soon, and perhaps even more biometrics, where the device becomes an entry point to services.

Tell me a bit about the means by which consistency can take place. This isn't something you build into the device necessarily.

Rolls: Yes, that consistency has to be implemented in the underlying service, as you've highlighted. It's very easy to think of consistency as just being in the IAM UI or the device display, but it really extends to the identity API as well. A good way to explore this concept of API-level consistency is to think like a corporate application developer and consider how they look at consistency for IAM, too.

Assume our corporate application developer is developing an app that needs to carry out a password reset, or maybe it needs to do something with an identity profile. Does that developer write a provisioning connector themselves? Or should they implement a password reset in their own custom code?

The answer is no, they don't roll their own. Instead, they should make use of the consistent API-level services that the IAM platform provides -- they make calls to the IDaaS service. The IDaaS service is then responsible for doing the actual password reset using consistent policies, consistent controls, and a consistent level of business service. So, as I say, it's about consistency for all use cases, from all devices, accessing all applications.
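In code, the pattern looks like a single call that delegates the reset to the identity service. A hedged sketch -- the endpoint, base URL, and token handling are hypothetical, not any vendor's documented API.

```python
import requests

IDAAS_BASE = "https://idaas.example.com/api/v1"  # hypothetical service URL

def reset_password(user_id: str, token: str) -> bool:
    """Ask the IDaaS service to run a policy-checked password reset."""
    resp = requests.post(
        f"{IDAAS_BASE}/users/{user_id}/password-reset",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    # The service applies consistent policies and controls; the app never
    # touches the credential store directly.
    return resp.status_code == 202
```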

Thinking about consistency

Gardner: And even as we think about the back-end services support, that itself also needs to extend to on-prem legacy, and also to cloud and SaaS. So we're really thinking about consistency deep and wide.

Rolls: Precisely. If we don't think about consistency for identity as a service, we're never going to have control. And importantly, we're never going to reduce the cost of managing all this stuff, and we're never going to lower the true risk profile of the business.

Gardner: We're coming up on our last tenet, number five. We haven't talked too much about behavior and buy-in. You can lead a horse to water, but you can't make him drink. This, of course, has an impact on how we enforce consistency across all these devices, as well as the service model. So what do we need to do to get user buy-in? How does number five affect that?

Rolls: Number five, for me, is the idea that the end-user experience for identity is everything. Once upon a time, the only user for identity management was IT itself, and identity was an IT tool for IT practitioners. It was mainly used by the help desk and by IT pros to automate identity and access controls. Fortunately, things have changed a lot since then, both in the identity infrastructure and, very importantly, in end users' expectations.

Today, IAM really sits front and center in the business user's IT experience. When we think of something like single sign-on (SSO), it literally is the front door to the applications and services the business is running. When a line-of-business person sits down at an application, they're expecting seamless access via secure single sign-on. The expectation is that they can quickly and easily get access to the things they need to get their job done.

They also expect identity-management services, like password management, access request, and provisioning to be integrated, intuitive, and easy to use. So the way these identity services are delivered in the user experience is very important.

Pretty much everything is self-service these days. The expectation is to move the business user to self-service for pretty much everything, and that very much includes Identity Management as a Service (IDaaS). So the UI just has to be done right, and the overall user experience has to be consistent, seamless, intuitive, and easy to deal with. That's how we get buy-in for identity today: by making the identity-management services themselves easy to use, intuitive, and accessible to all.

Gardner: And isn't this the same as saying we should make the governance infrastructure invisible to the end user? To do that, you need to extend across all the devices, all the deployment models, and the APIs, as well as the legacy systems. Do you agree that we're talking about making it invisible, but that we can't do that unless we follow the previous four tenets?

Rolls: Exactly. There's been a lot of industry conversation around this idea of identity being part of the application and the users’ flow, and that’s very true. Some large enterprises do have their own user-access portals, specific places that you go to carry out identity-related activities, so we need integration there. On the other hand, if I'm sitting here talking to you and I want to reset my Active Directory password, I just want to pick up my iPhone and do it right there, and that means secure identity API’s.

We've talked a good amount about the business-user experience. It's very important to realize that it's not just about the end user and the UI. It also affects how the IDaaS service itself is configured, deployed, and managed over time. This means the user experience for the system owner -- be that someone in IT or in the line of business; it doesn't really matter who -- has to be consistent and easy to use, and has to lead to easier configuration, faster deployment, and faster time-to-value. We do that by making sure the administration interface and the APIs that support it are consistent and generally well thought out, too.

Intersect between tenets

Gardner: I can tell, Darran, that you've put an awful lot of thought into these tenets. You've created them with some order, even though they're equally important. This must also be part of how you set the requirements for your own products at SailPoint.

Tell me about the intersection between these tenets, the marketplace, and what SailPoint is bringing to address the problems these tenets identify -- and, on the solution side, how to do things well.

Rolls: You would expect every business to say these words, but they have great meaning for us. We're very, very customer focused at SailPoint. We're very engaged with our customers and our prospects. We're continually listening to the market and to what the buying customer wants. That's the outside-in part of the product requirements story -- basically building solutions to real customer problems.

Internally, we have a long history in identity management at SailPoint. That shows itself in how we construct the products and how we think about the architecture and the integration between pieces of the product. That's the inside-out part of the product requirements process: building innovative products and solutions that work well over time.

So I guess that all really comes down to good internal product management practices. Our product team has worked together for a considerable time across several companies. So that’s to be expected. It's fair to say that SailPoint is considered by many in the industry as the thought leader on identity governance and administration. We now work with some of the largest and most trusted brand names in the world, helping them provide the right IAM infrastructure. So I think we’re getting it right.

As SailPoint has strategically moved into the IDaaS space, we’ve brought with us a level of trust, a breadth of experience, and a depth of IAM knowledge that shows itself in how we use and apply these tenets of identity in the products and the solutions that we put together for our customers.

Gardner: Now, we talked about the importance of being legacy-sensitive, focusing on what the enterprise is and has been and not just what it might be, but I'd like to think a little bit about the future-proofing aspects of what we have been discussing.

Things are still changing. As we said, there are new generations of mobile devices, and perhaps more biometrics doing away with passwords, with identity established through the device -- all of which then needs to filter back through the entire IAM lifecycle and its endpoints.

So when you do this well, if you follow the five tenets, if you think about them and employ the right infrastructure to support governance in IAM for both the old and the new, how does that set you up to take advantage of some of the newer things? Maybe it’s big data, maybe it’s hybrid cloud, or maybe it's agile business.

It seems to me that there's a virtuous adoption benefit when you do IAM well.

Changes in technologies

Rolls: As you've highlighted, there are lots of new technologies out there that are effecting change in corporate infrastructure. In itself, that change isn’t new. I came into IT with the advent of distributed systems. We were going to replace every mainframe. Mainframes were supposed to be dead, and it's kind of interesting that they're still here.

So infrastructure change is most definitely accelerating, and the options available for the average IT business these days -- cloud, SaaS and on-prem -- are all blending together. That said, when you look below the applications, and look at the identity infrastructure, many things remain the same. Consider a SaaS app like Salesforce.com. Yes, it’s a 100 percent SaaS cloud application, but it still has an account for every user.

I can provide you with SSO to your account using SAML, but your account still has fine-grained entitlements that need to be provisioned and governed. That hasn’t changed. All of the new generation of cloud and SaaS applications require IAM. Identity is at the center of the application, and it has to be managed. If you adopt a mature and holistic approach to that management, it stands you in good stead.
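[To make that concrete, here is a minimal sketch of what "provisioned and governed" looks like underneath SSO, using the SCIM 2.0 standard that many SaaS applications expose for account management. The endpoint URL, token, and entitlement names below are hypothetical placeholders, not any specific vendor's API.]

    import requests

    SCIM_BASE = "https://saas.example.com/scim/v2"  # hypothetical endpoint
    TOKEN = "REDACTED"                              # hypothetical OAuth bearer token

    def provision_user(user_name, entitlements):
        """Create an account and attach fine-grained entitlements via SCIM 2.0."""
        payload = {
            "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
            "userName": user_name,
            "active": True,
            # RFC 7643 defines "entitlements" as a multi-valued user attribute.
            "entitlements": [{"value": e} for e in entitlements],
        }
        resp = requests.post(
            SCIM_BASE + "/Users",
            json=payload,
            headers={"Authorization": "Bearer " + TOKEN},
            timeout=30,
        )
        resp.raise_for_status()
        # The SCIM id is what governance later certifies or deprovisions.
        return resp.json()["id"]

    # Even with SAML SSO in front, this account and its entitlements remain to govern.
    user_id = provision_user("jdoe@example.com", ["Sales_ReadWrite", "Reports_Read"])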

Another great example is the mobile device management (MDM) platforms out there -- a new piece of management infrastructure that has come about to manage mobile endpoints. The MDM platforms themselves have identity control interfaces. It's our job in IAM to connect with these platforms and provide control over what’s happening to identity on the endpoint device, too.

Our job in identity is to manage identity lifecycles wherever they sit in the infrastructure. If you're not on board, you'd better get on board, because the challenges for identity are certainly not going away.

Interestingly, I'm sometimes challenged when I make a statement like that. I’ll often get the reply that, "With SAML single sign-on, the passwords go away, so the account management problem goes away, right?" The answer is no, they don’t. There are still accounts in the application infrastructure. So good, best-practice identity and access management will remain key as we keep moving forward.

Gardner: And of course as you pointed out earlier, we can expect the scale of what's going to be involved here to only get much greater.

Rolls: Yes, 100 percent. Scale is key to architectural thinking when you build a solution today, and we're really only just starting to touch where scale is going to go.

It’s very important to us at SailPoint, when we build our solutions, that the product we deliver understands the scale of business today and the scale that is to come. That affects how we design and integrate the solutions; it affects how they are configured and how they are deployed. It’s imperative to think scale -- that’s certainly something we do.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SailPoint Technologies.

You may also be interested in:

Friday, October 24, 2014

Large Russian bank, Otkritie Bank, turns to big data analysis to provide real-time financial insights

The next BriefingsDirect deep-dive big data benefits case study interview explores how Moscow-based Otkritie Bank, one of the largest private financial services groups in Russia, has built out a business intelligence (BI) capability for wholly new business activity monitoring (BAM) benefits.

The use of HP Vertica as a big data core to the BAM infrastructure provides Otkritie Bank improved nationwide analytics and a competitive advantage through better decision-making based on commonly accepted best information that's updated in near real-time.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about Otkritie Bank's drive for improved instant analytics, BriefingsDirect sat down with Alexei Blagirev, Chief Data Officer at Otkritie Bank, at the recent HP Big Data 2014 Conference in Boston. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about your choice for BI platforms. 

Blagirev: Otkritie Bank is a member of the Open Financial Corporation (now Otkritie Financial Corporation Bank), which is one of the largest private financial services groups in Russia. The reason we selected HP Vertica was that we were trying to establish a data warehouse that could provide operational data storage and could also serve as an analytical OLAP solution.

Blagirev
It was a very hard decision. We drew on past experience from our team, from my side, and so on. Everyone had some negative experience with different solutions like Oracle, because there was a big constraint.

We could not integrate operational data storage and OLAP solutions. Why? Because high-transactional data has to be put into the data warehouse (DWH), and in every case that was usually the biggest constraint in building high-transactional data storage.

Vertica was a very good solution that removed this constraint. While selecting Vertica, we also evaluated different solutions, like IBM's. We identified advantages of Vertica over IBM from two different perspectives.

One was performance. The second was that Vertica is cost-efficient. Since we were comparing against Netezza (now part of IBM), we were comparing not only software, but software plus hardware. You can’t build a custom-sized Netezza cluster. You can only build it in fixed sizes -- 32 terabytes, and so on.

Very efficient

We were also limited by the logistics of these building blocks, the so-called big green box of Netezza. Vertica, in contrast, is really efficient, because we can use any hardware.

So we calculated our total cost of ownership (TCO) over a five-year horizon, and it was lower than if we had built the data warehouse with different solutions. This was the reason we selected Vertica.
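[As a toy illustration of that kind of five-year TCO comparison -- all figures below are hypothetical placeholders, not Otkritie Bank's actual numbers -- the software-plus-commodity-hardware model can be set against a fixed-size appliance like this:]

    def five_year_tco(license_cost, hw_cost_per_node, nodes, annual_ops, years=5):
        """Up-front licenses and hardware, plus recurring operations cost."""
        return license_cost + hw_cost_per_node * nodes + annual_ops * years

    # Software on commodity servers vs. a fixed-size appliance (made-up figures).
    commodity = five_year_tco(license_cost=400_000, hw_cost_per_node=15_000,
                              nodes=6, annual_ops=60_000)
    appliance = five_year_tco(license_cost=0, hw_cost_per_node=1_200_000,
                              nodes=1, annual_ops=90_000)
    print("commodity cluster:", commodity, "appliance:", appliance)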
From the technical perspective and from the cost-efficiency perspective, there was a big difference in the business case. Our bank is not a classical bank in the Russian market, because in our bank the technology team leads innovation, and the technology team is actually the influence-maker inside the business.

So, the business was with us when we proposed the new data warehouse. We proposed to build the new solution to collect all data from the whole of Russia and to organize it via a so-called continuous load. This means that within the day, we can show all the data -- what’s going on with business operations -- from all lines of business across all of Russia. It sounds great.

When we were selecting HP Vertica, we selected not only Vertica, but a technical bundle. We also needed a replicator; we chose Oracle GoldenGate.

We selected the appropriate ETL tool and the BI front end. So all together, it was a technical bundle, where Vertica was the middleware technical solution. So far, we have built a near-real-time DWH, but we don’t call it near-real-time; we call it "just-in-time," because we want to be congruent with the decision-making process. We want to influence the business to let them think more about their decisions and about their business processes.
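[A minimal sketch of that continuous-load idea, assuming the open-source vertica-python client and a hypothetical staging table. In the real bundle, the replicator (Oracle GoldenGate) delivers the change records; a stub queue reader stands in for it here.]

    import time
    import vertica_python

    CONN = {"host": "dwh.example.com", "port": 5433,
            "user": "etl", "password": "REDACTED", "database": "bank_dwh"}

    def drain_queue():
        """Stand-in for reading change rows published by the replicator."""
        return [("2014-10-24 12:01:07", "MSK-001", "cash_loan", "CREATED", 250000)]

    def micro_batch_loop(interval_sec=300):
        with vertica_python.connect(**CONN) as conn:
            cur = conn.cursor()
            while True:
                rows = drain_queue()
                if rows:
                    csv_data = "\n".join(",".join(map(str, r)) for r in rows)
                    # cursor.copy streams through COPY ... FROM STDIN,
                    # Vertica's bulk-load path.
                    cur.copy(
                        "COPY ods.loan_events "
                        "(event_ts, region, product, status, amount) "
                        "FROM STDIN DELIMITER ','",
                        csv_data,
                    )
                    conn.commit()
                time.sleep(interval_sec)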

As of now, I can show all the data collected and put inside the DWH within 15 minutes, and I can show the first general process in the bank, the loan application process. I can show the number of created applications, plus online scoring, and show how many customers we have at that moment in each region, the amounts, the average check, the approval rate, and the booking rate. I can show it to management the same day, which is absolutely amazing.
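[A hedged example of the kind of intraday dashboard query being described -- counts, average check, approval rate, and booking rate by region since midnight. The schema and column names are hypothetical, matching the sketch above.]

    import vertica_python

    CONN = {"host": "dwh.example.com", "port": 5433,
            "user": "bi", "password": "REDACTED", "database": "bank_dwh"}

    INTRADAY_QUERY = """
        SELECT region,
               COUNT(*)                                              AS applications,
               AVG(amount)                                           AS avg_check,
               SUM(CASE WHEN status = 'APPROVED' THEN 1 ELSE 0 END)
                   / COUNT(*)::FLOAT                                 AS approval_rate,
               SUM(CASE WHEN status = 'BOOKED' THEN 1 ELSE 0 END)
                   / COUNT(*)::FLOAT                                 AS booking_rate
        FROM ods.loan_events
        WHERE event_ts >= CURRENT_DATE
        GROUP BY region
        ORDER BY applications DESC
    """

    with vertica_python.connect(**CONN) as conn:
        cur = conn.cursor()
        cur.execute(INTRADAY_QUERY)
        for region, apps, avg_check, approval, booking in cur.fetchall():
            print(region, apps, avg_check, approval, booking)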

The tricky part is what the business will do with this data. It's tricky, because the business was not ready for this. The business was actually expecting that they could run a script, go to the kitchen, make a coffee, and then come back.

But, boom, everything appears really quickly, and it's actually influencing the business to make decisions, to think more, and to think fast. This, I believe, is the biggest challenge, to grow business analytics inside the business for those who will be able to use this data.

As of now, we are in the pilot stage, the pilot phase, of what we call business activity monitoring (BAM). This is actually a funny story, because BAM is the same acronym used in Russia for the Baikal-Amur Mainline, a huge railroad across the whole country that connects all the cities. It's kind of our story, too; we connect all departments and show the data in near real-time.

Next phase

In this case, we're actually working on the next phase of BAM, and we're trying to synchronize the methodology across all products and all departments, which is very hard. For example, approval rates could be calculated differently for credit cards than for cash loans because of the process.

Since we're trying to establish a BI function almost from ground zero, HP Vertica is only the technical side. We need to think more about the educational side, and we need to think about the framework side. The general framework that we're trying to follow, since we're trying to build a BI function, starts with a unified business glossary (or accepted services directory).

It seems obvious to use a business glossary and to use a single term to refer to the same entity everywhere. But it is not happening as of now, because business units are still using different definitions. I think it's a common problem everywhere in business.

The second is to explain that there are two different types of BI tools. One is BI for the data mart, the so-called regular report. The other is a data discovery tool; it's the tool for the data lab (that is, a mining tool).
So we differentiate data lab from data mart. Why? Because we're trying to build a service-oriented model, which in the end produces analytical services, based on the functional map.

When you're trying to answer a question using some analytics, it is actually a regular question, and this is the tricky part. All the questions that are raised by the business, by any business analyst, are regular questions; they are fundamental.

The correct way to develop an analytical service is to collect all these questions into a kind of question library. You can call it a functional map or some such, but these questions define the analytical service for those functions.

For example, if you're trying to implement cost control, what kind of business questions do you want to answer? What kind of business analytics or metrics do you want to bring to the end users? Are these really mapped to the questions raised, or are you trying to present different analytics? As of now, we find it difficult to introduce this approach. And this is the first part.
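[One possible way to organize such a question library, sketched as a plain data structure. The functions, questions, metrics, and service names are illustrative only.]

    # Map each recurring business question to the analytical service that answers it.
    QUESTION_LIBRARY = {
        "cost_control": [
            {"question": "How much did each department spend this month?",
             "metrics": ["spend_by_dept"], "service": "finance_mart.cost_report"},
            {"question": "Which cost lines are growing fastest?",
             "metrics": ["spend_growth_mom"], "service": "finance_mart.cost_trend"},
        ],
        "loan_origination": [
            {"question": "What is today's approval rate by region?",
             "metrics": ["approval_rate", "booking_rate"],
             "service": "ods.loan_events_dashboard"},
        ],
    }

    def services_for(function):
        """The analytical services a business function actually needs."""
        return {q["service"] for q in QUESTION_LIBRARY.get(function, [])}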

The second part is a data lab for ad hoc data discovery. When, for example, you're trying to produce a marketing campaign for customers, trying to produce customer segments, trying to analyze a scoring methodology, or trying to validate scientific expectations, you need to do some research.

It's not a regular activity. It's more ad hoc analysis, and it will use different tools for BI. You can’t combine all the tools and call it a universal BI tool, because it doesn't work this way. You need to have a different tool for this.

Creating a constraint

This will create a constraint for the business users, because they need some education. In the end, they need to know many different BI tools.

This is a key constraint that we have now, because end users are more comfortable working with Excel, which is great. I think it's the most popular BI data discovery tool in the world, but it has its own constraints.

I love Microsoft. Everyone loves Microsoft, but there are different beautiful tools, like TIBCO Spotfire, for example, which combines MATLAB, R, and so on. You can import SAS models and so on. You can also write scripts inside it. This is a brilliant data discovery tool.

But try to teach this tool to your business analysts. In the beginning, it's hard, because it's like a J curve. They will work through the valley of despair, criticizing it: "Oh my God, what are you trying to create? This is a mess from my perspective." And I agree with them in the beginning, but they need to go through this valley of despair, because in the end, there will be really good stuff. This is a matter of cultural influence.

Gardner: Tell me, Alexei, what sort of benefits have you been able to demonstrate to your banking officials, since you've been able to get this near real-time, or just-in-time analytics -- other than the fact that you're giving them reports? Are there other paybacks in terms of business metrics of success?

Blagirev: First of all, we differentiate our stakeholders. We have the top-management stakeholders, which is the board. Then there are the middle-level stakeholders, who are our regional directors.

I'll start from the bottom, with the regional directors. They just open the dashboard. They don’t click anything or refresh. They just see that they have data and analytics on what’s going on in their region.

They don’t care about the methodology, because there is BAM, and they just use the figures for decision-making. You don’t think about how the data got there; you think about what to do with the figures. You focus more on your decision, which is good.

They start to think more about their decisions, and they start to think more about the process side. We may show, for example, that at 12 o’clock our stream of cash-loan applications went down. Why? I have no idea. Maybe they all went out for dinner. I don’t know.

But nobody says that. They say, "Alexei, something is happening." They see true figures and they know they are true figures. They have instruments to exercise operational excellence. This is the first benefit.

Top management

The second is top management. We had a management board where everyone came and showed different figures. We'd spend 30 minutes, or maybe an hour, just debating which figures were true. I think this is a common situation in Russian banks, and maybe not only banks.

Now, we can just open the report, and I can say, "This is the single report. It shows intra-day figures, and these metrics were calculated according to the methodology." We actually link in the time of calculation, which shows that this KPI, for example, was calculated at 12 o’clock. You can take the figures as of 12 o’clock, and if you don’t believe them, you can ask the auditors to repeat the calculation, and it will come out the same.
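[A sketch of that auditability idea: compute each KPI against an explicit cutoff time and store the cutoff with the value, so an auditor can rerun the same calculation and get the same figure. Table and column names are hypothetical, and this again assumes the vertica-python client.]

    import datetime as dt
    import vertica_python

    CONN = {"host": "dwh.example.com", "port": 5433,
            "user": "bi", "password": "REDACTED", "database": "bank_dwh"}

    def approval_rate_as_of(cur, cutoff):
        """Approval rate for today's applications up to an explicit cutoff time."""
        day_start = cutoff.replace(hour=0, minute=0, second=0, microsecond=0)
        cur.execute(
            "SELECT SUM(CASE WHEN status = 'APPROVED' THEN 1 ELSE 0 END) "
            "       / COUNT(*)::FLOAT "
            "FROM ods.loan_events WHERE event_ts BETWEEN %s AND %s",
            [day_start, cutoff],
        )
        return cur.fetchone()[0]

    with vertica_python.connect(**CONN) as conn:
        cur = conn.cursor()
        cutoff = dt.datetime(2014, 10, 24, 12, 0, 0)  # "calculated at 12 o'clock"
        kpi = approval_rate_as_of(cur, cutoff)
        # Store the KPI together with its cutoff so the figure is reproducible.
        cur.execute(
            "INSERT INTO kpi.daily (name, value, calculated_at) VALUES (%s, %s, %s)",
            ["approval_rate", kpi, cutoff],
        )
        conn.commit()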

Nobody cares anymore about how to calculate the figures. So they started to think about what methodology to apply to the business process. Actually, this reverses the focus from the outside to what’s going on with our business process. This is the second benefit.

Gardner: Any other advice that you would give to organizations who are beginning a process toward BI?

Blagirev: First of all, don’t be afraid to make mistakes. It's a big thing, and we all forget that, but don’t be afraid. Second, try to create your own vision of strategy for at least one year.

Third, try to take in your whole company and software vision, because HP Vertica or other BI tools are only a part of it. Try to see all the company's lines, all the information, because this is important. You need to understand where the value is, where shareholder value is lost, and whether you are creating value for the shareholder. If the answer is yes, don’t be afraid to defend your decision and your strategy, because otherwise, in the end, there will be problems. Believe me.

As Gandhi said, in the beginning everyone laughs at you, then they begin hating you, and in the end, you win.

Gardner: With your business activity monitoring, you've been able to change business processes, influence the operations, and maybe even the culture of the organization, focusing on the now and then the next set of processes. Doesn’t this give you a competitive advantage over organizations that don’t do this?

Blagirev: For sure. This actually gives a competitive advantage, but the advantage depends on the decisions you're making. It actually depends on everyone in the organization.

Understanding this brings new value to the business, but it depends on the final decisions from the people who sit in those positions. Now, those people understand. They're actually handling the business, and they see how they're handling the business.
I can compare this solution to other banks'. I have worked for Société Générale and for Alfa-Bank, one of the largest banks in Russia. I've been an auditor of financial services at PwC. I saw different reporting and different processes, and I can say that this solution is actually unique in the market.

Why? It shows congruent information in near real-time, within the day, for all the data, for the whole of Russia. Of course, it brings benefit, but you need to understand how to use it. If you don’t understand how to use this benefit, it's going to be just a technical thing.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in: