Tuesday, October 22, 2013

Complex carrier network performance data on HP Vertica yields performance and customer metrics boon for Empirix

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

The next edition of the HP Discover Performance Podcast Series explores how network testing, monitoring, and analytics provider Empirix developed unique and powerful data processing capabilities.

Empirix uses an advanced analytics engine to continuously and proactively evaluate carrier network performance and customer experience metrics -- amid massive data flows -- to automatically identify issues as they emerge.

To learn more about how a combination of large-scale, real-time performance and pervasive data access made the HP Vertica analytics platform stand out to support such demands for Empirix, join Navdeep Alam, Director of Engineering, Analytics and Prediction at Empirix, based in Billerica, Mass.

The discussion, which took place at the recent HP Vertica Big Data Conference in Boston, is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Why do you have such demanding requirements for data processing and analysis?

Alam: What we do is actively and passively monitor networks. When you're in a network as a service provider, you have the opportunity to see the packets within that network, both on the control plane and on the user plane. That just means you're looking at signaling data and also user-plane data -- what's going on with subscriber behavior, what's going on at the data layer. That’s a vast amount of data, especially with mobile, where most people are doing things on their devices with data.

When you're in that network and you're tapping that data, there is a tremendous amount of data -- and there's a tremendous amount of insights about not only what's going on in the network, but what's going on with the subscribers and users of that network.

Empirix is able to collect this data from our probes in the network, as well as being able to look at other data points that might help augment the analysis. Through our analytics platform we're able to analyze that data, correlate it, mediate it, and drive metrics out of that data.

That’s a service for our customers, increasing the value of that data, so that they can achieve a return on investment (ROI) and understand how to leverage their networks better to improve operations and so forth. They can understand their customers better and begin to analyze, slice and dice, and visualize the data from this complex network.

They can use our platform as well to do proactive and predictive analysis, so that we can create even better ROI for our customers by telling them what might go wrong and what the solution might be to get around it and avoid a catastrophe.

New opportunities

Gardner: It’s interesting that not only is this data being used for understanding the performance on the network itself, but it's giving people business development and marketing information about how people are using it and where the new opportunities might be.

Is that something fairly new? Were you able to do that with data before, or is it the scale and ability to get in there and create analysis in near-real-time that’s allowed for such a broad-based multilevel approach to data and analysis?

Alam: This is something we've gotten into. We definitely tried to do it before with success, but we knew that in order to really tackle mobile and the increasing demands of data, we really had to up the ante.

Our investment in HP Vertica, and how we've introduced it in our new analytics platform, Empirix IntelliSight 1.0, which recently came out, is about leveraging that platform -- not only for scalability and our ability to ingest and process data, but to look at data in its more natural format, both as discrete data and as aggregate data. We allow our customers to view and analyze that data ad hoc.

It has positioned us very well. Now that we have a central point from which all this data is being processed and analyzed, we run analytics directly against that data, increasing data locality and decreasing data latency. This definitely lets us do things much faster, in near real time.

Gardner: Obviously, the sensors, probes, agents, and the ability to pull in the information from the network needs to reside or be at close proximity to the network, but how are you actually deployed? Where does the infrastructure for doing the data analysis reside? Is it in the networks themselves, or is there a remote site? Maybe you could just lay out the architecture of how this is set up.

Alam: We get installed on site. Obviously, the future could change, but right now we're an on-premise solution. We're right where the data is being generated, where it’s flowing, and because of that we're able to gain access to the data in real-time.

One of the things we learned is that this is a tremendous amount of data. It doesn't make sense for us to just hold it and assume that we will do something interesting with it afterward.

The way we've approached our customers is to say, "What kind of value do you see in this data? What kinds of metrics or key performance indicators (KPIs) do you think are valuable in this data?" We then build a framework that defines the value they can gain from the data -- what the metrics are and what kind of structure they want to apply to it. We're not just calculating metrics; we're also applying a model that gives this data some structure.

As they go through what we call the Empirix Intelligent Data Mediation and Correlation (IDMC) system, it's really an analytics calculator. It's putting our data into the Vertica system, so that at that point we have meaningful, actionable data that can be used to trigger alarms, to showcase thresholds, to give customers great insight to what's going on in their network.
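As a rough illustration of what such an "analytics calculator" does, here is a minimal Python sketch that derives a KPI from raw probe records and raises an alarm when a threshold is crossed. The record layout, the KPI, and the threshold value are invented for illustration; this is not Empirix's actual IDMC logic.

```python
from collections import defaultdict

# Hypothetical probe records: (cell_id, attach_attempts, attach_failures).
records = [
    ("cell-042", 1200, 18),
    ("cell-042", 1100, 115),
    ("cell-107", 900, 4),
]

# Illustrative threshold: more than 5 percent failed attaches raises an alarm.
FAILURE_RATE_THRESHOLD = 0.05

totals = defaultdict(lambda: [0, 0])
for cell_id, attempts, failures in records:
    totals[cell_id][0] += attempts
    totals[cell_id][1] += failures

for cell_id, (attempts, failures) in sorted(totals.items()):
    kpi = failures / attempts  # the derived metric stored alongside the raw data
    status = "ALARM" if kpi > FAILURE_RATE_THRESHOLD else "OK"
    print(f"{status:5s} {cell_id}: attach failure rate {kpi:.1%}")
```

The same derived metric can then feed dashboards or alerting, which is the "actionable data" role the quote describes.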

Growing the business

From that, they can do various things, such as solve problems proactively, reach out to customers to deal with those issues, or make better investments in their technology in order to grow their business.

Gardner: How long have you been using Vertica and how did that come to be the choice that you made? Perhaps you could also tell us a little bit about where you see things going in terms of other capabilities that you might need or a roadmap for you?

Alam: We've been using Vertica for a few years, at least three or four, even before I came on-board. And we're using Vertica primarily for its ability to input and read data very quickly. We knew that, given our solutions, we needed to load a lot of data into the system and then read a lot of data out of it fast and to do it at the same time.

At that time, the database systems we used just couldn't meet the demands for the ever-growing data. So we leveraged Vertica there, and it was used more as an operational data store. When I came on board about a year-and-a-half ago, we wanted to evolve our use of Vertica to be not just for data warehousing, but a hybrid, because we knew that in supporting a lot of different types of data, it was very hard for us to structure all of those types of data.

We wanted to create a framework from which we can define measures and metrics and KPIs and store it in a more flat system from which we can apply various models to make sense of that data.

That really presented us a lot of challenges, not only in scalability, but our ability to work and play with data in various ways. Ultimately, we wanted to allow customers to play with this data at will and to get response in seconds, not hours or minutes.

It required us to look at how we could leverage Vertica as an intelligent data-storage system from which we could process data, store it, and then get answers out of that data very, very quickly. Again, we were looking for responses in a second or so.

Now that we've put all of our data in the data basket, so to speak, with Vertica, we wanted to take it to the next level. We have all this data, both looking at the whole data value chain from discrete data to aggregate data all in one place, with conforming dimensions, where the one truth of that data exists in one system.

We want to take it to the next step. Can we increase our analytical capabilities with the data? Can we find the signal in the noise now that we have all this data? Can we proactively find the patterns in the data, see what's contributing to a problem, surface that to our customers, and reduce the noise that they're presented with?

Solving problems

Instead of just showing them that 50 things are wrong, can I show them that 50 things are wrong, but that these one or two issues are actually impacting their network or their subscribers the most? Can we proactively tell them what the likely cause is and how to solve it?

The faster we can load this data, the faster we can retrieve the value out of this data and find that needle in the haystack. That’s where the future resides for us.

Gardner: Clearly, you're creating value and selling insight to the network to your customers, but I know other organizations have also looked at data as a source of revenue in itself. The analysis could be something that you could market. Is there an opportunity with the insight you have in various networks -- maybe in some aggregate fashion -- to create analysis of behavior, network use, or patterns that would then become a revenue source for you, something that people would subscribe to perhaps?

Alam: That's a possibility. Right now, our business has been all about empowering our customers and giving them the ability to leverage that data for their end use. You can imagine, as a service provider, having great insight into their customers and the over-the-top applications that are being leveraged on their network.

Could they then use our analytics and the metadata that we're generating about their network to empower their business systems and their operations to make smarter decisions? Can they change their marketing strategy or even their APIs about how they service customers on their network to take advantage of the data that we are providing them?

The opportunity to grow other business opportunities from this data is tremendous, and it's going to be exciting to see what our customers end up doing with their data.

Gardner: Are there any metrics of success that are particularly important for you? You've mentioned, of course, scale and volume, but things like concurrency -- the ability to run queries from different places by different people at the same time -- are important. Help me understand what some of the other important elements of a good, strong data-analysis platform would be for you.

Alam: Concurrency is definitely important. For us, it's about predictability, or linear scalability. We know that when we reach the point of supporting, let's say, 10 concurrent users or 100 concurrent users, or a greater segmentation of data because we have gone from 10 terabytes to 30 terabytes, we don't have to change a line of code. We don't have to change how or what we're doing with our data. Linear scalability, especially on commodity hardware, gives us the ability to take our solution and expand it at will, in order to deal with any type of bottleneck.

Obviously, over time, we'll tune it so that we get better performance out of the hardware or virtual hardware that we use. But we know that when we do hit these bottlenecks, and we will, there is a way around that and it doesn't require us to recompile or rebuild something. We just have to add more nodes, whether it’s virtual or hardware.
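To make the linear-scalability point concrete, here is a back-of-the-envelope Python sketch for estimating cluster size as data volume grows. The per-node capacity and headroom figures are made-up planning numbers, not Vertica sizing guidance.

```python
import math

def nodes_needed(raw_tb, per_node_tb=10.0, headroom=1.3):
    """Estimate cluster size under a roughly linear scale-out assumption.

    per_node_tb and headroom are illustrative planning inputs only."""
    return math.ceil(raw_tb * headroom / per_node_tb)

# Growing from 10 TB to 30 TB, as above, means adding nodes, not redesigning.
for volume_tb in (10, 30, 100):
    print(f"{volume_tb:>4} TB -> {nodes_needed(volume_tb)} nodes")
```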
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Open FAIR certification launched

This guest post comes courtesy of Jim Hietala, The Open Group Chief of Security.

By Jim Hietala

The Open Group today announced the new Open FAIR Certification Program aimed at risk analysts, bringing to market a much-needed professional certification focused on the practice of risk analysis. The Risk Taxonomy and Risk Analysis standards, both Open Group standards, constitute the body of knowledge for the certification program. They advance the risk analysis profession by defining a standard taxonomy for risk and by describing the process aspects of a rigorous risk analysis.

We believe that this new risk analyst certification program will bring significant value to risk analysts, and to organizations seeking to hire qualified risk analysts. Adoption of these two risk standards from The Open Group will help produce more effective and useful risk analysis. This program responds to the growing need in our industry for professionals who understand risk analysis fundamentals. Furthermore, the mature processes and due diligence The Open Group applies to our standards and certification programs will help make organizations comfortable with the groundbreaking concepts and methods underlying FAIR. This will also help professionals looking to differentiate themselves by demonstrating the ability to take a “business perspective” on risk.

In order to become certified, risk analysts must pass an Open FAIR certification exam. All certification exams are administered through Prometric, Inc. Exam candidates can start the registration process by visiting Prometric’s Open Group Test Sponsor Site www.prometric.com/opengroup.  With 4,000 testing centers in its IT channel, Prometric brings Open FAIR Certification to security professionals worldwide. For more details on the exam requirements visit http://www.opengroup.org/certifications/exams.

Available November 1

Training courses will be delivered through an Open Group accredited channel. The accreditation of Open FAIR training courses will be available from November 1, 2013.

Our thanks to all of the members of the risk certification working group who worked tirelessly over the past 15 months to bring this certification program, along with a new risk analysis standard and a revised risk taxonomy standard, to the market. Our thanks also to the sponsors of the program, whose support is important to building this program. The Open FAIR program sponsors are Architecting the Enterprise, CXOWARE, SNA, and The Unit.

Lastly, if you are involved in risk analysis, we encourage you to consider becoming Open FAIR certified, and to get involved in the risk analysis program at The Open Group. We have plans to develop an advanced level of Open FAIR certification, and we also see a great deal of best practices guidance that is needed by the industry.

For more information on the Open FAIR certification program visit http://www.opengroup.org/certifications/openfair

You may also wish to attend a webcast scheduled for 7th November, 4pm BST that will provide an overview of the Open FAIR certification program, as well as an overview of the two risk standards. You can register here.

This guest post comes courtesy of Jim Hietala, The Open Group Chief of Security. 



Thursday, October 17, 2013

Democratic National Committee leverages big data to turn politics into political science

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

The next edition of the HP Discover Performance Podcast Series focuses on the big-data problem in the realm of politics. We'll learn how the Democratic National Committee (DNC) leveraged big data analytics to better understand and predict voter behavior and alliances in the 2012 U.S. national elections.

To learn more about how the DNC pulled vast amounts of data together to predict and understand voter preferences and positions on the issues, join Chris Wegrzyn, Director of Data Architecture at the DNC, based in Washington, DC.

The discussion, which took place at the recent HP Vertica Big Data Conference in Boston, is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.] 

 Here are some excerpts:
Gardner: Like a lot of organizations, you had different silos of data and information, and you weren't able to do the analysis properly because of the distributed nature of the data and information. What did you do that allowed you to bring all that data together, and then also get the data assembled to bring out better analysis?

Wegrzyn: In 2008, we received a lot of recognition at that time for being a data-driven campaign and making some great leaps in how we improved efficiency by understanding our organization.

Coming out of that, those of us on the inside were saying this was great, but we have only really skimmed the surface of what we can do. We focused on some sets of data, but they're not connected to what people were doing on our website, what people were doing on social media, or what our donors were doing. There were all of these different things, and we weren’t looking at them.

Really, we couldn’t look at them. We didn't have the staff structure, but we also didn't have the technology platform. It’s hard to integrate data and do it in a way that is going to give people reasonable performance. That wasn't available to us in 2008.

So, fast forward to where we were preparing for 2012. We knew that we wanted to be able to look across the organization, rather than at individual isolated things, because we knew that we could be smarter. It's pretty obvious to anybody. It isn’t a competitive secret that, if somebody donates to the campaign, they're probably a good supporter. But unless you have those things brought together, you're not necessarily pushing that information out to people, so that they can understand.

We were looking for a way that we could bring data together quickly and put it directly into the hands of our analysts, and HP Vertica was exactly that kind of solution for us. The speed and the scalability meant that we didn't have to worry about making sure that everything was properly transformed and didn't have to spend all of this time structuring data for performance. We could bring it together and then let our analysts figure it out using SQL, which is very powerful, but pretty simple to learn.
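To show why plain SQL over consolidated data was enough to empower analysts, here is a self-contained Python sketch using an in-memory SQLite database as a stand-in for the warehouse. The tables, columns, and the question asked are invented for illustration, not the DNC's actual schema.

```python
import sqlite3

# In-memory SQLite stands in for the consolidated warehouse.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE donations (voter_id INTEGER, amount REAL);
    CREATE TABLE volunteer_contacts (voter_id INTEGER, contacted_on TEXT);
    INSERT INTO donations VALUES (1, 50.0), (2, 25.0), (2, 100.0);
    INSERT INTO volunteer_contacts VALUES (2, '2012-09-14'), (3, '2012-09-20');
""")

# Which donors have never been contacted by a volunteer?
rows = conn.execute("""
    SELECT d.voter_id, SUM(d.amount) AS total_given
    FROM donations AS d
    LEFT JOIN volunteer_contacts AS c ON c.voter_id = d.voter_id
    WHERE c.voter_id IS NULL
    GROUP BY d.voter_id
""").fetchall()

print(rows)  # [(1, 50.0)]
```

The point of the quote is that once the silos sit in one fast system, a question like this is a few lines of SQL for an analyst rather than an engineering project.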

Better analytic platform

Gardner: Until the fairly recent past, it wasn't practical, both from a cost and technology perspective, to try to get at all the data. But it has gotten to that point now. So when you are looking at all of the different data that you can bring to bear on a national election, in a big country of hundreds of millions of people, what were some of the issues you faced?

Wegrzyn: We hadn’t done it before. We had to figure it out as we were going along. The most important realization that we made was that it wasn't going to be a huge technology effort that was going to make this happen. It was going to be about analysts. That’s a really generic term. Maybe it's data scientists or something, but it's about people who were going to understand the political challenges, understand something about the data, and go in and find answers.

We structured our organization around being analyst-centric. We needed to build those tools and platforms, so that they could start working immediately and not wait on us on the technology side to build the best system. It wasn’t about building the best system, but it was about getting something where we could prototype rapidly.

Nothing that we did was worth doing if we couldn't get something into somebody's hands in a week and then start refining it. But we had to be able to move very, very quickly, because we were just under a constant time-crunch.

Gardner: I would imagine that in the final two months and weeks of an election, things are happening very rapidly. To have a better sense of what the true situation on the ground is gives you an opportunity to best react to it.

It seems that in the past, it was a gut instinct. People were very talented and were paid very good money to be able to try to distill this insight from a perspective of knowledge and experience. What changed when you were able to bring the HP Vertica platform, big data, and real-time analysis to the function of an election?

Wegrzyn: Just about everything. There isn't a part of the campaign that was untouched by us, and in a lot of those places where gut ruled, we were able to bring in some numbers. This came down from the top campaign manager, Jim Messina. Out of the gate, he was saying that we have to put analytics in every part of the organization and we want to measure everything. That gave us the mission and the freedom to go in and start thinking how we could change how this operates.

But the whole campaign was data-driven. We tested emails relentlessly. A lot of our program was driven by trying to figure out what works, quantifying that, and going out and doing more. One of our big successes was in the most traditional area of campaigns nowadays: media buying.

More valuable

There have been a bunch of articles recently talking about what the campaign did, so I'm not giving anything away. We were able to take what we understood about the electorate and who we wanted to communicate with, rather than taking the traditional TV-buying approach -- buy a broad demographic band, buy a lot of TV news, and buy a lot of the stuff that's expensive and has high ratings amongst the big demographics. That’s a lot of wasted money.

We were able to know more precisely who the people are that we want to target, which was the biggest insight. Then, we were able to take that and figure out -- not the super creepy "we know exactly what you are watching" level -- but at an aggregate level, what the people we want to target are watching. So we could buy that, rather than buying the traditional stuff. That's like an arbitrage opportunity. It’s cheaper for us, but it's way more valuable.

So we were able to buy the right stuff, because we had this insight into what our electorate was like, and I think it made a big difference in how we bought TV.

Gardner: The results of your big data activities are apparent. As I recall, Governor Romney's campaign, at one point, had a larger budget for media, and spent a lot of that. You had a more effective budget with media, and it showed.

Another indication was that on election night, right up until the exit polls were announced, the Republican side didn't seem to know very clearly or accurately what the outcome was going to be. You seemed to have a better sense. So the stakes here are extremely high. What’s going to be the next chapter for the coming elections, in two, and then four years along the cycle?

Wegrzyn: That’s a really interesting question, and obviously it's one that I have had to spend a lot of time thinking about. The way that I think about the campaign in 2012 was one giant fancy office tower. We call it the Obama Campaign. When you have problems or decisions that have to be made, that goes up to the top and then back down. It’s all a very controlled process.

We are tipping that tower on its side now for 2014. Instead of having one big organization, we have to try to do this to 50, 100, maybe hundreds of smaller organizations that are going to have conflicting priorities. But the one thing that they have in common now is they saw what we did on the last campaign and they know that that's the future.

So what we have to do is take that and figure out how we can take this thing that worked very well for this one big organization, one centralized organization, and spread it out to all of these other organizations so that we can empower them.

They're going to have smaller staffs. They're going to have different programs. How do we empower them to use the tools that we used and the innovations that we created to improve their activity? It’s going to be a challenge.

Gardner: It’s interesting, there are parallels between what you're facing as a political organization, with federation, local districts for Congress, races in the state level, and then of course to the national offices as well. This is a parallel to businesses. Many businesses have a large centralized organization and they also have distributed and federated business units, perhaps in other countries for global companies.

Feedback loop

Is there a feedback loop here, whereby one level of success, like you well demonstrated in 2012, leads to more of the federated, on-the-ground, distributed gathering and utilization of data that also then feeds back to the larger organization, so that there's a virtual adoption pattern that will benefit across the ecosystem? Is that something you are expecting?

Wegrzyn: Absolutely. Even within the campaign, once people knew that this tool was available, that they could go into HP Vertica and just answer any question about the campaign's operation, it transformed the way that people were thinking about it. It increased people's interest in applying that to new areas. They were constantly coming at us with questions like, "Hey, can we do this?" We didn't know. We didn’t have enough staff to do that yet.

One of our big advantages is that we've already had a lot of adoption throughout campaigns of some of the data gathering. They understand that we have to gather this data. We don't know what we are going to do with it, but we have them understanding that we have to gather it. It's really great, because now we can start doing smart things with it.

And then they're going to have that immediate reaction like, "Wow, I can go in there now and I can figure out something smart about all of the stuff that I put in and all of the stuff that I have been collecting. Now I want more." So I think we're expecting that it will grow. Sometimes I lose sleep about how that’s going to just grow and grow and grow.

Gardner: We think about that virtuous adoption cycle, more-and-more types of data, all the data, if possible, being brought to bear. We saw at the Big Data Conference some examples and use cases for the HAVEn approach for HP, which includes Vertica, Hadoop, Autonomy IDOL, Security, and ArcSight types of products and services. Does that strike a chord with you that you need to get at the data, but now that definition of the data is exploding and you need to somehow come to grips with that?

Wegrzyn: That's something that we only started to dabble in -- things like text analysis, what Autonomy can do with unstructured data -- stuff that we only started to touch on during the campaign, because it’s hard. We make some use of Hadoop in various parts of our setup.

We're looking to a future, where we bring in more of that unstructured intelligence, that information from social media, from how people are interacting with our staff, with the campaign in trying to do something intelligent with that. Our future is bringing all of those systems, all of those ideas together, and exposing them to that fleet of analysts and everybody who wants it.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Wednesday, October 9, 2013

Need for quality and speed powers Sentara's applications modernization journey

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

The next edition of the HP Discover Performance Podcast Series highlights how Virginia healthcare provider Sentara Healthcare has improved its IT operations and services delivery with higher quality and higher speed.

As part of its modernization journey, Sentara improved its IT service management (ITSM) maturity, making IT an internal business-service provider and thereby deploying better monitoring of app services.

To learn more about how Sentara Healthcare excelled at application and data delivery and has progressed toward an automated lifecycle approach for high-performance applications management, join Jason Siegrist, Manager of Enterprise Management Technologies at Sentara. The discussion, which took place at the recent HP Discover 2013 Conference in Las Vegas, is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Apps, of course, are always important, but in your business, healthcare, getting those apps to the people seems to be more important than in the past. How has the very notion of an application been changing for your users?

Siegrist: At Sentara Healthcare, and actually most healthcare organizations, the interest has been trying to get to electronic medical records (EMR) to make it easier and to reduce risks associated with caring for patients.

Patients are looking to get access to that data quicker, be able to see lab results in a timely manner, and be able to schedule appointments with doctors. We're trying to make those systems available to them in a secure way so that they're confident that their personal information is safe and protected.

Gardner: Tell us why maturity and progressing toward better application culture and behavior has been important for you.

Better healthcare decisions

Siegrist: In healthcare, the face of healthcare is still our doctors, nurses, and technical staff. However, we're trying to make sure we can enable those doctors and nurses to make better healthcare decisions and allow them to work interactively among each other, even when they're not in the same building.

Our environment has grown so significantly, even with things like X-rays being all digital these days. Now, a doctor can go back and review case studies, without having to wait to request those images and have them shipped. If someone is sitting in their office and they have an X-ray, they can go to priors very quickly.

So all these systems -- in Sentara there are about 17 of them -- have to be integrated in such a way that we guarantee that their work is being collected and going to the right patient, and at the same time, when they're requesting information, they're getting the right patient data back.

Previously, every organization always looked at IT as being a very expensive cost center. We've been working very hard internally to change that discussion to be that we're enabling the business.
We've done that by doing some creative and unique processes. We bring in the pharmacist, for example. We make him the owner of the pharmacy app. Now, we have direct buy-in from a pharmacist who is a part of the IT process that selects the application and figures out how to integrate it.

Through that process, he's able to act as our champion in the pharmacy space and talk to his fellow pharmacists, saying "We have selected this, and I've been a part of that process." So we're involving them in the process, and at the same time, it's not an IT-focused or IT-forced initiative. We really are enabling business.

Gardner: Tell us about Sentara, how big it is, how many apps you have.

Siegrist: In the healthcare space, you measure it by hospitals. I think we're at 11 hospitals these days. We're always looking to expand and grow. We're out on the western edge of Virginia in the Blue Ridge Parkway area, as well as Hampton Roads and up to DC. So, we're in Virginia and a little bit in North Carolina.

Having these maturities in these processes has enabled us to include the business in the IT decisions. As we start building the monitoring, we start building the proactive analysis, in the troubleshooting. Our mean time to repair has gone down. We support larger populations with fewer staff, whether that's with internal systems or internal hardware. We built these automation processes and we built these systems with the idea that we want to be as lean as possible, and at the same time, deliver quality healthcare services.

Maturity roadmap

Gardner: It’s impressive to me too that you have charted out a maturity roadmap for yourselves and you've been in it for several years. Tell me where you evaluate yourself now and where you came from.

Siegrist: Like anybody, this really is an organizational learning process as well as a cultural shift and change. Several years ago, my boss, Betsy Meadows, had started the process about how we want to deploy ITIL. It all started around measuring network performance.

Ultimately, that grew into the idea that, in order to do that, we have to do network monitoring. We have to capture incidents and we have to capture downtime, and, by the way, there is downtime that’s legitimate because we're doing maintenance.

Then, we had to think about how to capture maintenance events as downtime. So this process grew and grew. Over the last 8 to 10 years, we went from being very new in the process to where we are today. This is the kind of maturation process every company goes through.

There is a maturity scale out there that runs from 1 to 5. I’d say we're solidly 4-point-something, if you do the math, and we have adopted a lot of processes at level 4 and level 5. It’s allowed us to make smart decisions, including smart financial decisions.

Gardner: What have been some of the important tools that you've used to get there and what do you look to in terms of getting to that higher level of maturity? What are some of the ways that technology can come to bear on that?

Siegrist: Well, the reality is the workforce. As more and more young people enter the workforce, they come with a predefined set of skills. I'm still young at 40, but my son can operate an iPad and he's three. He has no problem at all navigating that space.

The reality is that a younger workforce has an expectation of services and delivery. To that end, we're trying to enable our customers to go out and do some of these things themselves. It's like an a la carte process, where they can say, "I want this level of monitoring. I want my application monitored this way. I’d like to see this dashboard here."

The application performance management suite that’s available as a software-as-a-service (SaaS) solution has given us one more tool in our arsenal, one that allows us to pass that out to the customer and say, "If you want to build your own monitor, or you have a synthetic transaction, or you want diagnostics-level knowledge about your application, here is a delivery channel to do that."
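As an illustration of the kind of synthetic transaction a team might set up a la carte, here is a minimal Python sketch that times a single HTTP check and classifies the result. The URL and thresholds are placeholders, not Sentara's or HP's actual monitoring configuration.

```python
import time
import urllib.request

# Placeholder endpoint and thresholds; not an actual production configuration.
URL = "https://example.org/"
TIMEOUT_S = 5.0
SLOW_S = 2.0

start = time.monotonic()
try:
    with urllib.request.urlopen(URL, timeout=TIMEOUT_S) as resp:
        elapsed = time.monotonic() - start
        status_code = resp.status
except Exception as exc:
    print(f"DOWN {URL} ({exc})")
else:
    state = "SLOW" if elapsed > SLOW_S else "OK"
    print(f"{state} {URL} returned {status_code} in {elapsed:.2f}s")
```

A real APM suite wraps this idea in scheduling, dashboards, and diagnostics, but the basic transaction check is that simple.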

Gardner: You're a big user of HP. Tell us a little bit about the HP Business Services Management (BSM) suite, your involvement, and also the performance.

Several iterations

Siegrist: Ten years ago, we started out with HP Network Node Manager (NNM), which is the network monitoring solution, and then moved into HP OpenView Operations (OVO), which is now called Operations Manager. So it’s been through several iterations, but over the last 10 years, we've made lots of decisions about what tools to use.

We've always tried to go with best-of-breed where appropriate, and it happens to be that for us, the best-of-breed for us has been the HP solution set. It’s enabled us to get deeper into the applications and given us multiple ways to solve different problems.

Nothing is free in life. So we always want to try and give our customers options for which path they want to take and what level of the knowledge they want in the application space.

To this end, the APM SaaS solution is an operational expense. They don’t have to buy it whole. They don’t have to deploy everything. They can just start. So, as I said, it's an a-la-carte model. It lets them choose just a little or a lot, and then they can bite off bigger pieces of the pie as they're willing to take them on.
The value is that the face of customer care in healthcare is still doctors and nurses.

Our customer base is interested in trying to have a way to interact with the doctors, and as more-and-more tablets and PCs and smartphones hit the market, we're looking for delivery solutions that provide that.

Our partner for our EMR is Epic. We use their solution for contacting and working with the doctors. It's called MyChart, and that tool gives them the ability to do that. As more-and-more of these devices get out there, the population gets younger. They have an expectation of service delivery through that channel, and Sentara is working to meet that expectation. This gives us the ability to monitor that application to make sure it's working properly.

Gardner: You mentioned earlier that it’s about SaaS and the ability to pick and choose the type of deployment model for your apps, services, and even infrastructure. Do you have any thoughts about where you're heading in terms of more choice in hybrid or cloud models?

Siegrist: For most health organizations, and I'm probably in line here with my peers as well, there's always a concern about HIPAA. We're trying to make sure that, as we move forward with monitoring these things and with data landing in the cloud, we're protecting patient data. We're moving tentatively into that space and doing a little at a time to prevent and avoid any risk of patient data loss.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Wednesday, October 2, 2013

Big data changes the customer analysis game for Yammer, Spil Games and Jobrapido

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

The next edition of the HP Discover Performance Podcast Series provides deep insights into how big data is changing the game around customer analytics.

This case study panel discussion highlights how various organizations are developing the means to produce far better analytics about their customers. Learn how high-performing and cost-effective big data processing enables rapid learning about customers' wants and preferences.

The expert panel consists of Rob Winters, Director of Reporting and Analytics at Spil Games, based in Amsterdam; Davide Conforti, Business Intelligence Director at Jobrapido, based in Milan; and Pete Fishman, Director of Analytics at Yammer, in San Francisco.

The discussion, which took place at the recent HP Vertica Big Data Conference in Boston, is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Businesses have been analyzing their customers for a long time. What’s different now?

Fishman: We're a cloud software service, and the data is big. Our data on customers now all lives in a central place. By aggregating across the companies that use your software, you get really significant sample sizes and real inference -- both in an economic sense, in terms of measuring lift, and, because the sample sizes are so big, in a statistical sense.

That’s the starting point for making analytics valuable and learning about your customers.

Different problems

Winters: For me, the problem space is extremely different from what I was dealing with a couple of years back.

I was in telecom before this. There, you're dealing with 25 million people, and if you rescore them once a month, that’s fast enough. On a web scale problem, I'm dealing with 200 million customers and I have to rescore them within 10 or 15 minutes. So you're capturing significantly more data. We're looking at billions of records per day coming into our systems. We have to use it as fast as possible, because with the customer experience online, minutes matter.

Conforti: It’s absolutely the same story with us. We have about 40 million unique visitors per month now. We've grown by double digits since our start as a startup in 2006. Now, everything is about user interaction -- how our users behave on-site, how we can engage them more, and how we can provide them with a tremendous, tailored user experience.

Winters: We're primarily a platform. We do some game development and publishing, but our core business is just being the platform where people can come and find content that’s interesting to them. We've been around for about nine years.

We started out as just a Dutch [gaming] company and then acquired other local domain names in a variety of languages. At this point, we have about 50 different platforms, running in about 20 different languages. So we support customers from all over the world. In a given month, we have traffic to our sites from over 200 countries.

The entire business is changing, and you're competing based on the customer experience you can deliver. We have a couple of target audiences: young girls, 8-14; boys; and women.

Fishman: Yammer is a startup in San Francisco. We were acquired about a year ago by Microsoft and we're part of the larger Office organization. We view ourselves as enterprise social, taking this many-to-many communication model and making communication at your company much more efficient.

It's about surfacing relevant knowledge and experts and making work lives better. I run an analytics team there, and we essentially look at the aggregate customer behaviors and what parts of our tool people are using.

Social networks

This was a really revolutionary idea that our founders David Sacks and Adam Pisoni had, way back when Facebook wasn't nearly as relevant as it is today. But we've leveraged a lot of the way that people have learned to interact in their social life and bring some of that efficiency of communication. They saw that these social networks would grow and be relevant in a private, secured context of your business.
Conforti: Jobrapido started in 2006 as an entrepreneurial challenge that Vito Lomele, an Italian guy, started in Milan. It's quite a challenge to live in the online market in Italy, because the talent pool isn't as wide as in the U.S. or in other countries in Europe. What we do is provide job-seekers the opportunity to find their new job.

We're an online job-search engine and we currently operate in 58 different countries with more than 20 languages. We're all in this big headquarters in Milan with a lot of different nationalities, because of course, we provide the service in local languages for most of our customers.

Recently, we were purchased by the Daily Mail group, a big media group based in London. For us, it's everything from job-seeker acquisition, retention, and engagement to consistent quality and user experience on-site. We use our big data warehouse to understand how to better attract and retain customers on the basis of their preferences. And we also use it to tweak our matching algorithm, which works more or less like a Google algorithm.

We crawl a lot of content from different sources -- job boards, other job sites, and the career pages of individual companies. We put it all together in a big database and, using statistical tools, we infer which rankings our job-seekers want to see.

So it's a pretty heavy data crunching exercise that we do everyday on millions and millions of different sponsored or organic postings.
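As a toy illustration of ranking inference of this general kind, the Python sketch below blends a naive text match with observed click-through to order postings. The scoring formula and numbers are invented and are not Jobrapido's algorithm.

```python
# Toy ranking: blend a naive text match with observed click-through rate.
postings = [
    {"title": "Senior Software Engineer", "clicks": 420, "impressions": 9000},
    {"title": "Backend Software Engineer", "clicks": 310, "impressions": 5000},
    {"title": "Office Manager", "clicks": 80, "impressions": 7000},
]

def score(posting, query_terms, w_text=0.7, w_ctr=0.3):
    title_terms = posting["title"].lower().split()
    text_match = sum(term in title_terms for term in query_terms) / len(query_terms)
    ctr = posting["clicks"] / posting["impressions"]
    return w_text * text_match + w_ctr * ctr

query = ["software", "engineer"]
for p in sorted(postings, key=lambda p: score(p, query), reverse=True):
    print(f"{score(p, query):.3f}  {p['title']}")
```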

For example, if Yammer guys or if Spil Games guys want to hire a software engineer, they can directly promote their sponsored ads on Jobrapido without having to sponsor them on a job board. So we're trying to aggregate and simplify the chain of job search.

Gardner: What was the problem you had to solve when it comes to getting at this big data for analysis?

Winters: For me the challenge was multi-fold. How do you deal with this data problem, with this variety and volume information? How do you present it in a meaningful fashion for employees who've never looked at data before, so that they can make good decisions on it? And how do you run models against it and feed that back into a production environment as quickly as possible, so that you can give those customers a better experience than they were ever getting before on your platform?

My problem was that no one had ever tried to do it in my company before. We walked in with effectively a clean slate. But as you start to bring in different data sources, you start with all the stuff that you know you're going to need right away.

You start seeing needed links for other data sources. At this point, we're pulling data from thousands of databases, merging with dozens of application programming interfaces (APIs). You're pulling in your web log data, so that you can personalize for those folks who aren’t giving you registration information.

Large data

When we first started looking for a data warehouse appliance or application, we were running Postgres with no indices, just copies of production data. For data guys, that means a query against a table of a couple of million rows could take eight hours to execute.

We knew that a typical row-based solution was out. So we started looking at some of the other applications out there. The big ones are Teradata, Exadata, and Greenplum, but you're going to have to mortgage the house of every employee in the company to be able to afford a license for those applications, and we're a pretty small company. So those were out.

Then, we started looking at some of the other boutique vendors like Infobright, and basically we saw that with HP Vertica, we can have relatively low load on our database administrator (DBA), so we can develop quickly without a lot of maintenance.

The pricing model fits what we need to achieve, and the performance is so good that we don't have to spend a ton of time on optimization now. We can basically move very rapidly along this path of becoming a data-driven organization without having to get held up on index optimization or trying to optimize our queries and rewrite paths.

We can just throw a lot of stuff into the system, smash it together, take the results, and get big wins for the company quickly.

We have a data center, and we do everything on our own private servers. For us, the next step is probably going to be moving more into a private-cloud model, and hopefully, Vertica will work in that environment as well.
Gardner: At Yammer, what was your big data problem and how did you solve it?

Fishman: Our problem set was that there were a lot of people trying to get into the enterprise social space. A lot of social networks are popping up, and essentially competing for attention at work is a challenge.

We felt that data was necessary to have a competitive advantage. David Sacks and Adam Pisoni had a vision of developing a consumer software company with rapid iteration. With that rapid iteration you get an extra advantage if you're able to reorient yourself based on what part of the product is working. Our data problems were largely about making data be a competitive advantage in our development methodology.

Gardner: What was it about Vertica that was instrumental to the point where you've adopted it? Is it a concurrency issue, a volume issue, speed, or all the above?

It's about speed

Fishman: It's all of the above, but the real highlight is always going to be speed, given the incredible competition for talent -- not just in the Bay Area, but all over, especially in the data field.

Anybody who has data in their title is highly sought after. Those folks are a challenge to hire and to keep excited about the projects they're working on, so a solution that minimizes their cycle times and lets them maximize their own abilities is really critical. It's the same in our space, and in software development in general.

When we take on these big risks and challenges, the ability to very quickly identify whether we're going in the right direction, and then reorienting where we are going, has been really critical to Yammer being successful.

Gardner: Davide, how did you get a handle on data problems?

Conforti: When I joined Jobrapido, we already ran tons of A/B tests, which are the lifeblood of our product innovation. We want to test everything, from changing the color or the font of one button to a different layout, because these have tremendous impact on improving the user engagement.

Before, we used the Google Analytics tools, but we didn't like them much, because they sample the data, so you hardly ever reach statistically meaningful results. We decided to build a data warehouse to assure flexibility, performance, and a higher level of control and data consistency -- end-to-end control from the source through to visualization, in order to make the results more actionable for product development.
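To illustrate why full, unsampled data matters for A/B testing, here is a minimal two-proportion z-test sketch in Python. The conversion numbers and the 1.96 critical value (95 percent confidence) are illustrative, not Jobrapido's actual test harness.

```python
from math import sqrt

def ab_significant(conv_a, n_a, conv_b, n_b, z_crit=1.96):
    """Two-proportion z-test sketch for a layout A/B test (95 percent confidence)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_a, p_b, round(z, 2), abs(z) > z_crit

# Made-up numbers: with the full 200,000 visitors per arm the difference is
# detectable; a 1-in-20 sample of the same traffic likely is not.
print(ab_significant(conv_a=4_100, n_a=200_000, conv_b=4_400, n_b=200_000))
print(ab_significant(conv_a=205, n_a=10_000, conv_b=220, n_b=10_000))
```

The same lift that is clearly significant on the full data set disappears into noise once the traffic is sampled, which is the complaint about sampled analytics tools above.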

With Vertica, we did exactly this. We poured all the different data sources into one bucket, organized it, and now we have a full control over the data model. With my team, I manage these data models. It's fascinating how fast you can add pieces to the puzzle or remove others that are no longer interesting, because our business model, of course, is a living animal, a living creature.

We really appreciate this flexibility and the high level of control that Vertica allows. This improved a lot our innovation throughput and it's going to improve it even more in the future.

Currently, we crunch about 30 GB of data on Vertica every day -- that is, we load 30 GB/day into Vertica. But we're going to double that in a few months, because we're adding more. We want to know more about the click patterns of our job-seekers on the site, and that is massive data flowing into Vertica. So our licensing in terabytes will likely double in the future as well.
Increased performance

Another hard fact I can share is that anyone using Vertica doesn't have to settle for the first implementation of a query. If you take the time to optimize it, you can often increase the query's performance by more than 100 percent. That has been my personal experience working with consultants and advisers; Vertica is happy to provide that support, and it really adds value.

Winters: As far as metrics of success, when we were doing our proof of concept (POC), we looked primarily at query performance. At that point, we weren’t looking at using it for prediction and personalization, but just for analytics and reporting.

We compared against an indexed Postgres database on which we had done some optimization. Our queries were running more than 1,000 percent faster, and Vertica was scaling pretty linearly, whereas with Postgres, when we put more data into the tables, they started choking and then died completely.

For me, it allowed me to actually do my job and have my team do their jobs, which is a pretty big metric of success.

The other thing is that with a relatively small cluster, we can support hundreds of people and reports directly accessing the database, a dozen analysts or people who directly query information out of the database, and all of our personalization activities simultaneously with minimal performance hiccups. That’s a big metric of success.

Fishman: I have similar feedback to Rob's, comparing against a Postgres database. The speeds are at least one -- and probably closer to two or more -- orders of magnitude faster. Certainly on the cost side, it's important with data to consider the whole cost. So this is sort of a theme.

End-to-end costs

There are costs in managing the data and teasing out useful insights that aren't necessarily in the sticker price. When considering a data solution, people should consider the end-to-end costs -- what's really the cost per insight, as opposed to the cost per terabyte or the cost per whatever.

We certainly feel that Vertica has been our best solution. We've been customers for over three years. So it's quite a long relationship. I couldn’t imagine going back to a multi-day query, or something like that.

One thing that Davide mentioned is that he's forecasting how much data he will be putting into Vertica. I'm a forecaster myself by trade. Back in 2010, we were doing some estimates of where we would be by the end of 2011 in terms of our data volumes. This is a pretty simple extrapolation, and I got it wrong by at least an order of magnitude.

What we found is that when you start to get real insights from data, you want a little more of it, collecting it here or there. Also, as our product was growing, we faced real exponential growth in the data and had to adopt clever solutions for the metric we care about -- minimizing the cost per insight.

There are many things going on simultaneously. When you're more naïve about the data, tripping over really valuable insights can happen a lot more easily; over time, you're essentially facing headwinds, and finding insights becomes harder. At the same time, you have larger data volumes and some economies of scale there. So there are a lot of things interacting simultaneously, but clearly one way to drive down that metric is best-in-breed tools.
Gardner: Of course, it's better to get the information to the people who can use it than simply to look to cut costs.

Fishman: Of course. If you view analytics as a cost center, that's the wrong view. It should be aimed at optimizing revenue streams. We micro-optimize the product, we micro-optimize sales and marketing, the business. Analytics is about improving everybody at their job, making data available to allow people to be more effective.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Tuesday, October 1, 2013

Enterprise architecture: The key to cybersecurity

This guest post comes courtesy of Jason Bloomberg of ZapThink, a Dovel Technologies company.

By Jason Bloomberg


When I first discuss security in our Licensed ZapThink Architect (LZA) SOA course, I ask the class the following question: if a building had 20 exterior doors, and you locked 19 of them, would you be 95 percent secure? The answer to this 20-doors problem, of course, is absolutely not – you’d be 0 percent secure, since the bad guys are generally smart enough to find the unlocked door.

While the 20-doors problem serves to illustrate how important it is to secure your services as part of a comprehensive enterprise IT strategy, the same lesson applies to enterprise cybersecurity in general: applying inconsistent security policies across an organization leads to weaknesses hackers are only too happy to exploit. However, when we’re talking about the entire enterprise, the cybersecurity challenge is vastly more complex than simply securing all your software interfaces. Adequate security involves people, process, information, as well as technology. Getting cybersecurity right, therefore, depends upon enterprise architecture (EA).

Understanding the context for cybersecurity

A fundamental axiom of security is that we can never drive risk to zero. In other words, perfect security is infinitely expensive. We must therefore understand our tolerance for risk and our budget for addressing security, and ensure these two factors are in balance across the organization. Fundamentally, it is essential to build threats into your business model, and do so consistently.

Credit card companies, for example, realize that despite their best efforts, there will always be a certain amount of fraud. True, they spend money to actively combat such fraud, but not as much as they could. Instead, they balance the budget for fighting such crime with the money lost through fraud in order to determine the acceptable level of risk.

In many organizations, however, the tolerance for risk and the budget for security are not in balance – or to be more precise, the balance is different in different departments or contexts across the enterprise. Part of this problem is due to the lottery fallacy, which we recently discussed in the context of big data. People tend to place an inordinate emphasis on improbable events. This fallacy frequently occurs in the context of risk, which is why we’re more worried about airplane crashes than car accidents, even though car crashes are far, far more likely.

But the lottery fallacy isn’t the only problem. Politics is a much greater issue. Department heads have their own ideas about tolerable risk in their fiefdoms, and the risk tolerance for one division may be very different from another. Furthermore, in most organizations, certain departments are responsible for security while others are not. Now department heads have a much more difficult time evaluating their level of risk and calculating their budget for security, as it’s someone else’s budget and supposedly someone else’s problem.

The solution to these challenges is the effective use of EA. You must think like an insurance company: undertake an objective analysis of the known risks and calculate the average cost of threats over all the activities in your organization. Just as an insurance company must be able to set their premiums high enough to cover losses on average, you must set your security budget high enough to cover your threats. Of course, sometimes a particular threat costs more than you expect, just as a catastrophic loss may cost more than a lifetime of premiums for the affected insurance customer. But the average still generally works out to your advantage.
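The insurance analogy boils down to an expected-value calculation. The sketch below shows the arithmetic in Python with an invented threat register; the frequencies and per-incident costs are made-up numbers, not figures from FAIR or from this article.

```python
# Illustrative threat register; frequencies and costs are invented.
threats = [
    # (name, annual rate of occurrence, cost per incident in USD)
    ("phishing-led credential theft", 4.0, 20_000),
    ("lost or stolen laptop", 2.5, 8_000),
    ("major data breach", 0.05, 2_000_000),
]

total = 0.0
for name, aro, sle in threats:
    ale = aro * sle  # annualized loss expectancy = frequency x single-loss cost
    total += ale
    print(f"{name:32s} expected annual loss ${ale:>10,.0f}")

print(f"{'total (a floor for the security budget)':40s} ${total:>10,.0f}")
```

The rare, catastrophic item can dominate the total even though it almost never happens, which is exactly the lottery-fallacy tension described above: intuition over- or under-weights it, while the expected-value view keeps the budget tied to the averages.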

With risk comes reward, but not all risks have the same promise of reward. In other words, some bets are better than others. Properly applied, EA can inform the organization about which bets have better expected returns than others, so that the organization can place its bets more rationally by distributing the risk across the organization in a fact-based manner.

Cybersecurity: dealing with change

Even organizations with robust EA efforts typically don’t leverage architecture to drive their cybersecurity strategies. The reasons for this gap are diverse, and often include political and competence issues, but the most fundamental one is that traditional EA doesn’t deal well with change. Cybersecurity is an inherently dynamic challenge: hackers keep inventing new attacks, new technologies continually introduce new vulnerabilities, and the interrelationships among the various trends in IT are increasingly convoluted, as we illustrate on our new ZapThink 2020 poster.

In contrast, the agile architecture approach I champion in my book, The Agile Architecture Revolution, calls for EA that focuses on change by explicitly working at the “meta” level: instead of simply architecting the things themselves, focus on architecting how those things change. For example, instead of focusing on the processes in the organization, architect the meta-processes: processes for how processes change. Similarly, the role of software development isn’t simply to build to requirements. Instead, the focus should be on building systems that respond to changing requirements, what my book calls the meta-requirement of business agility.

So too with architecting for security. The focus shouldn’t be on threats, but rather on how those threats might change. At the technology level, this focus on change shifts security from a static “locked door” approach to the immune system metaphor I discussed last year. But there’s more to architecting for security than the technology. At the organizational level, effective EA will help resolve shadow IT issues, which can otherwise lead to unmanaged security threats. At the process level, EA will address social-engineering challenges like phishing attacks. Securing your technology without applying a comprehensive, best-practice approach to organizational and process security is tantamount to leaving some of your doors unlocked.

The ZapThink take

Remember the scene from Apollo 13, where the Flight Director goes around the room, asking each division leader for a go/no-go decision? Essentially, every division leader was a stakeholder in all important decisions, and any one of them had the ability to nix any idea with a thumbs-down. The thinking behind this approach was one of risk mitigation: only if there is a unanimous thumbs-up can the organization make the critical decision to take action.

Just so in the enterprise. Your EA should require the security team to be part of the planning for all systems (both human and technology) across the organization. Without EA, security tends to be an afterthought. Instead, security must be a stakeholder in all critical decisions across the enterprise.

EA should also have a seat at the table, of course. By giving your enterprise architects the ability to offer thumbs-up or thumbs-down opinions on critical decisions, you are essentially saying that you mandate EA. And without such a mandate, architects find themselves in the proverbial ivory tower, creating artifacts and standards that the rank and file consider optional – which is a recipe for disaster. There’s no surer way to increase your cybersecurity risk than to treat EA as anything but absolutely necessary to the proper functioning of your organization.

This guest post comes courtesy of Jason Bloomberg of ZapThink, a Dovel Technologies company.

You may also be interested in: