Thursday, October 17, 2013

Democratic National Committee leverages big data to turn politics into political science

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

The next edition of the HP Discover Performance Podcast Series focuses on the big-data problem in the realm of politics. We'll learn how the Democratic National Committee (DNC) leveraged big data analytics to better understand and predict voter behavior and alliances in the 2012 U.S. national elections.

To learn more about how the DNC pulled vast amounts of data together to predict and understand voter preferences and positions on the issues, join Chris Wegrzyn, Director of Data Architecture at the DNC, based in Washington, DC.

The discussion, which took place at the recent HP Vertica Big Data Conference in Boston, is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.] 

 Here are some excerpts:
Gardner: Like a lot of organizations, you had different silos of data and information, and you weren't able to do the analysis properly because of the distributed nature of the data and information. What did you do that allowed you to bring all that data together, and then also get the data assembled to bring out better analysis?

Wegrzyn: In 2008, we received a lot of recognition for being a data-driven campaign and for making some great leaps in how we improved efficiency by understanding our organization.

Wegrzyn
Coming out of that, those of us on the inside were saying this was great, but we have only really skimmed the surface of what we can do. We focused on some sets of data, but they're not connected to what people were doing on our website, what people were doing on social media, or what our donors were doing. There were all of these different things, and we weren’t looking at them.

Really, we couldn’t look at them. We didn't have the staff structure, but we also didn't have the technology platform. It’s hard to integrate data and do it in a way that is going to give people reasonable performance. That wasn't available to us in 2008.

So, fast forward to where we were preparing for 2012. We knew that we wanted to be able to look across the organization, rather than at individual isolated things, because we knew that we could be smarter. It's pretty obvious to anybody. It isn’t a competitive secret that, if somebody donates to the campaign, they're probably a good supporter. But unless you have those things brought together, you're not necessarily pushing that information out to people, so that they can understand.

We were looking for a way that we could bring data together quickly and put it directly into the hands of our analysts, and HP Vertica was exactly that kind of solution for us. The speed and the scalability meant that we didn't have to worry about making sure that everything was properly transformed and didn't have to spend all of this time structuring data for performance. We could bring it together and then let our analysts figure it out using SQL, which is very powerful, but pretty simple to learn.
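As a rough illustration of the kind of cross-source question an analyst could answer with plain SQL once donation and contact data live in one warehouse, here is a minimal sketch. The table names, columns, and figures are hypothetical, and SQLite stands in here for an analytic database such as Vertica.

```python
# Illustrative only: a toy version of the kind of cross-source question an
# analyst could answer with plain SQL once donation and voter-contact data
# sit in the same warehouse. Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE donations (person_id INTEGER, amount REAL, donated_on TEXT);
    CREATE TABLE contacts  (person_id INTEGER, contacted_on TEXT, channel TEXT);
""")
conn.executemany("INSERT INTO donations VALUES (?, ?, ?)",
                 [(1, 50.0, "2012-06-01"), (2, 25.0, "2012-07-15"), (3, 10.0, "2012-08-02")])
conn.executemany("INSERT INTO contacts VALUES (?, ?, ?)",
                 [(1, "2012-09-01", "phone")])

# Donors we have never contacted: likely supporters worth reaching out to.
rows = conn.execute("""
    SELECT d.person_id, SUM(d.amount) AS total_donated
    FROM donations d
    LEFT JOIN contacts c ON c.person_id = d.person_id
    WHERE c.person_id IS NULL
    GROUP BY d.person_id
    ORDER BY total_donated DESC
""").fetchall()
print(rows)   # e.g. [(2, 25.0), (3, 10.0)]
```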

Better analytic platform

Gardner: Until the fairly recent past, it wasn't practical, both from a cost and technology perspective, to try to get at all the data. But it has gotten to that point now. So when you are looking at all of the different data that you can bring to bear on a national election, in a big country of hundreds of millions of people, what were some of the issues you faced?

Wegrzyn: We hadn’t done it before. We had to figure it out as we were going along. The most important realization that we made was that it wasn't going to be a huge technology effort that was going to make this happen. It was going to be about analysts. That’s a really generic term. Maybe it's data scientists or something, but it's about people who were going to understand the political challenges, understand something about the data, and go in and find answers.

We structured our organization around being analyst-centric. We needed to build those tools and platforms, so that they could start working immediately and not wait on us on the technology side to build the best system. It wasn’t about building the best system, but it was about getting something where we could prototype rapidly.

Nothing that we did was worth doing if we couldn't get something into somebody's hands in a week and then start refining it. But we had to be able to move very, very quickly, because we were just under a constant time-crunch.

Gardner: I would imagine that in the final two months and weeks of an election, things are happening very rapidly. To have a better sense of what the true situation on the ground is gives you an opportunity to best react to it.

It seems that in the past, it was a gut instinct. People were very talented and were paid very good money to be able to try to distill this insight from a perspective of knowledge and experience. What changed when you were able to bring the HP Vertica platform, big data, and real-time analysis to the function of an election?

Wegrzyn: Just about everything. There isn't a part of the campaign that was untouched by us, and in a lot of those places where gut ruled, we were able to bring in some numbers. This came down from the top campaign manager, Jim Messina. Out of the gate, he was saying that we have to put analytics in every part of the organization and we want to measure everything. That gave us the mission and the freedom to go in and start thinking how we could change how this operates.

But the campaign was driven by testing. We tested emails relentlessly. A lot of our program was driven by trying to figure out what works, quantifying that, and then going out and doing more of it. One of our big successes came in the most traditional area of campaigns nowadays: media buying.

More valuable

There have been a bunch of articles recently about what the campaign did, so I'm not giving anything away. We were able to take what we understood about the electorate and who we wanted to communicate with, and move away from the traditional TV-buying approach, which was to buy a broad demographic band, buy a lot of TV news, and buy the expensive programming with high ratings among the big demographics. That's a lot of wasted money.

We were able to know more precisely who the people are that we want to target, which was the biggest insight. Then, we were able to take that and figure out -- not the super creepy "we know exactly what you are watching" level -- but at an aggregate level, what the people we want to target are watching. So we could buy that, rather than buying the traditional stuff. That's like an arbitrage opportunity. It’s cheaper for us, but it's way more valuable.

So we were able to buy the right stuff, because we had this insight into what our electorate was like, and I think it made a big difference in how we bought TV.
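To make the arbitrage idea concrete, here is a small sketch of ranking programs by cost per targeted viewer instead of raw ratings. The program names, prices, and audience shares are invented for illustration and are not the campaign's data or model.

```python
# A minimal sketch of the "arbitrage" idea described above: rank programs by
# cost per targeted viewer rather than by raw ratings. All numbers are made up.
programs = [
    # (name, spot_cost_usd, total_viewers, share_of_viewers_in_target_audience)
    ("Prime-time news",   50000, 2_000_000, 0.10),
    ("Late-night rerun",   4000,   150_000, 0.45),
    ("Daytime talk show",  9000,   400_000, 0.30),
]

def cost_per_targeted_viewer(cost, viewers, target_share):
    return cost / (viewers * target_share)

ranked = sorted(programs, key=lambda p: cost_per_targeted_viewer(p[1], p[2], p[3]))
for name, cost, viewers, share in ranked:
    print(f"{name:20s}  ${cost_per_targeted_viewer(cost, viewers, share):.3f} per targeted viewer")
# The cheaper, lower-rated slots can reach the target audience at a fraction of
# the cost of the "big" buys, which is the arbitrage described above.
```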

Gardner: The results of your big data activities are apparent. As I recall, Governor Romney's campaign, at one point, had a larger budget for media, and spent a lot of that. You had a more effective budget with media, and it showed.

Another indication was that on election night, right up until the exit polls were announced, the Republican side didn't seem to know very clearly or accurately what the outcome was going to be. You seemed to have a better sense. So the stakes here are extremely high. What’s going to be the next chapter for the coming elections, in two, and then four years along the cycle?

Wegrzyn: That’s a really interesting question, and obviously it's one that I have had to spend a lot of time thinking about. The way that I think about the 2012 campaign is as one giant, fancy office tower. We call it the Obama campaign. When problems or decisions have to be made, they go up to the top and then back down. It’s all a very controlled process.

We are tipping that tower on its side now for 2014. Instead of having one big organization, we have to try to do this for 50, 100, maybe hundreds of smaller organizations that are going to have conflicting priorities. But the one thing that they have in common now is that they saw what we did on the last campaign, and they know that that's the future.

So what we have to do is take that and figure out how we can take this thing that worked very well for this one big organization, one centralized organization, and spread it out to all of these other organizations so that we can empower them.

They're going to have smaller staffs. They're going to have different programs. How do we empower them to use the tools that we used and the innovations that we created to improve their activity? It’s going to be a challenge.

Gardner: It’s interesting, there are parallels between what you're facing as a political organization, with federation, local districts for Congress, races at the state level, and then of course the national offices as well. This is a parallel to businesses. Many businesses have a large centralized organization, and they also have distributed and federated business units, perhaps in other countries for global companies.

Feedback loop

Is there a feedback loop here, whereby one level of success, like you demonstrated so well in 2012, leads to more of the federated, on-the-ground, distributed gathering and use of data, which then feeds back to the larger organization, so that there's a virtuous adoption pattern that benefits the whole ecosystem? Is that something you are expecting?

Wegrzyn: Absolutely. Even within the campaign, once people knew that this tool was available, that they could go into HP Vertica and just answer any question about the campaign's operation, it transformed the way that people were thinking about it. It increased people's interest in applying that to new areas. They were constantly coming at us with questions like, "Hey, can we do this?" We didn't know. We didn’t have enough staff to do that yet.

One of our big advantages is that we've already had a lot of adoption throughout campaigns of some of the data gathering. They understand that we have to gather this data. We don't know what we are going to do with it, but we have them understanding that we have to gather it. It's really great, because now we can start doing smart things with it.

And then they're going to have that immediate reaction like, "Wow, I can go in there now and I can figure out something smart about all of the stuff that I put in and all of the stuff that I have been collecting. Now I want more." So I think we're expecting that it will grow. Sometimes I lose sleep about how that’s going to just grow and grow and grow.

Gardner: We think about that virtuous adoption cycle, with more and more types of data, all the data, if possible, being brought to bear. We saw at the Big Data Conference some examples and use cases for HP's HAVEn approach, which includes Vertica, Hadoop, Autonomy IDOL, and enterprise security products such as ArcSight. Does that strike a chord with you, that you need to get at the data, but now the definition of the data is exploding and you need to somehow come to grips with that?

Wegrzyn: That's something that we only started to dabble in, things like text analysis, like what Autonomy can do with unstructured data, stuff that we only started to touch on in the campaign, because it’s hard. We make some use of Hadoop in various parts of our setup.

We're looking to a future, where we bring in more of that unstructured intelligence, that information from social media, from how people are interacting with our staff, with the campaign in trying to do something intelligent with that. Our future is bringing all of those systems, all of those ideas together, and exposing them to that fleet of analysts and everybody who wants it.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Wednesday, October 9, 2013

Need for quality and speed powers Sentara's applications modernization journey

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

The next edition of the HP Discover Performance Podcast Series highlights how Virginia healthcare provider Sentara Healthcare has improved its IT operations and services delivery at higher quality and higher speed.

As part of its modernization journey, Sentara improved its IT service management (ITSM) maturity, making IT an internal business-service provider and thereby deploying better monitoring of application services.

To learn more about how Sentara Healthcare excelled at application and data delivery and has progressed toward an automated lifecycle approach for high-performance applications management, join Jason Siegrist, Manager of Enterprise Management Technologies at Sentara. The discussion, which took place at the recent HP Discover 2013 Conference in Las Vegas, is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Apps, of course, are always important, but in your business, healthcare, getting those apps to the people seems to be more important than in the past. How has the very notion of an application been changing for your users?

Siegrist: At Sentara Healthcare, and actually at most healthcare organizations, the interest has been in getting to electronic medical records (EMR) to make care easier and to reduce the risks associated with caring for patients.

Patients are looking to get access to that data quicker, be able to see lab results in a timely manner, and be able to schedule appointments with doctors. We're trying to make those systems available to them in a secure way so that they're confident that their personal information is safe and protected.

Gardner: Tell us why maturity and progressing toward better application culture and behavior has been important for you.

Better healthcare decisions

Siegrist: In healthcare, the face of healthcare is still our doctors, nurses, and technical staff. However, we're trying to make sure we can enable those doctors and nurses to make better healthcare decisions and allow them to work interactively among each other, even when they're not in the same building.

Siegrist
Our environment has grown so significantly, even with things like X-rays being all digital these days. Now, a doctor can go back and review case studies, without having to wait to request those images and have them shipped. If someone is sitting in their office and they have an X-ray, they can go to priors very quickly.

So all these systems -- in Sentara there are about 17 of them -- have to be integrated in such a way that we guarantee that their work is being collected and going to the right patient and, at the same time, when they're requesting information, they're getting the right patient data back.

Previously, every organization always looked at IT as being a very expensive cost center. We've been working very hard internally to change that discussion to be that we're enabling the business.
We've done that by doing some creative and unique processes. We bring in the pharmacist, for example. We make him the owner of the pharmacy app. Now, we have direct buy-in from a pharmacist who is a part of the IT process that selects the application and figures out how to integrate it.

Through that process, he's able to act as our champion in the pharmacy space and talk to his fellow pharmacists, saying "We have selected this, and I've been a part of that process." So we're involving them in the process, and at the same time, it's not an IT-focused or IT-forced initiative. We really are enabling business.

Gardner: Tell us about Sentara, how big it is, how many apps you have.

Siegrist: In the healthcare space, you measure it by hospitals. I think we're at 11 hospitals these days. We're always looking to expand and grow. We're out on the western edge of Virginia in the Blue Ridge Parkway area, as well as Hampton Roads and up to DC. So, we're in Virginia and a little bit in North Carolina.

Having these maturities in these processes has enabled us to include the business in the IT decisions. As we build out the monitoring, we build in proactive analysis and troubleshooting. Our mean time to repair has gone down. We support larger populations with fewer staff, whether that's with internal systems or internal hardware. We built these automation processes and these systems with the idea that we want to be as lean as possible and, at the same time, deliver quality healthcare services.

Maturity roadmap

Gardner: It’s impressive to me too that you have charted out a maturity roadmap for yourselves and you've been in it for several years. Tell me where you evaluate yourself now and where you came from.

Siegrist: Like anybody, this really is an organizational learning process, as well as a cultural shift and change. Several years ago, my boss, Betsy Meadows, started the process of figuring out how we wanted to deploy ITIL. It all started around measuring network performance.

Ultimately, that grew into the idea that, in order to do that, we have to do network monitoring. We have to capture incidents and we have to capture that downtime, and, by the way, there is downtime that’s legitimate because we are doing maintenance.

Then, we had to think about how to capture maintenance events as downtime. So this process grew and grew. Over the last 8 to 10 years, we went from being very new to the process to where we are today. This is something every company goes through as part of the maturation process.
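As a simple sketch of that bookkeeping, the snippet below separates planned maintenance windows from unplanned outages when computing availability. The figures are hypothetical.

```python
# A small sketch of the bookkeeping described above: planned maintenance is
# recorded as downtime but reported separately, so unplanned availability is
# not penalized by legitimate maintenance windows. Numbers are hypothetical.
MINUTES_IN_MONTH = 30 * 24 * 60

planned_maintenance_min = 240   # scheduled change windows
unplanned_outage_min = 35       # incident-related downtime

raw_availability = 1 - (planned_maintenance_min + unplanned_outage_min) / MINUTES_IN_MONTH
unplanned_availability = 1 - unplanned_outage_min / (MINUTES_IN_MONTH - planned_maintenance_min)

print(f"Raw availability (all downtime counted):     {raw_availability:.4%}")
print(f"Availability excluding maintenance windows:  {unplanned_availability:.4%}")
```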

Today there is a maturity scale out there that runs from 1 to 5. I’d say we are solidly 4-point-something, if you do the math. But we have adopted a lot of processes at level 5 and at level 4. It’s allowed us to make smart decisions, and smart financial decisions as well.

Gardner: What have been some of the important tools that you've used to get there and what do you look to in terms of getting to that higher level of maturity? What are some of the ways that technology can come to bear on that?

Siegrist: Well, the reality is the workforce. As more and more young people enter the workforce, they come with a predefined set of skills. I'm still young at 40, but my son can operate an iPad and he is three. He has no problem at all navigating that space.

The reality is that a younger workforce has an expectation of services and delivery. To that end, we're trying to enable our customers to have the ability to go out and do some of these things themselves. It's like an a la carte process, where they can say, "I want this level of monitoring. I want my application monitor this way. I’d like to see this dashboard here."

The application performance management (APM) suite that’s available as a software-as-a-service (SaaS) solution has given us one more tool in our arsenal, one that allows us to pass that out to the customer and say, "If you want to build your own monitor, you want a synthetic transaction, or you want diagnostics-level knowledge about your application, here is a delivery channel to do that."
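For readers unfamiliar with the term, a synthetic transaction is just a scripted probe run against an application on a schedule. The sketch below shows the bare idea in Python; the endpoint and thresholds are placeholders, and a real APM suite layers scripting, diagnostics, and dashboards on top of this.

```python
# A bare-bones example of a synthetic transaction: periodically hit a URL,
# time the response, and flag failures or slow responses. The URL and
# threshold are placeholders, not Sentara's actual monitors.
import time
import urllib.request

URL = "https://example.org/healthcheck"   # placeholder endpoint
SLOW_THRESHOLD_S = 2.0

def probe(url):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            status = resp.status
    except Exception as exc:
        return {"ok": False, "error": str(exc), "elapsed": time.monotonic() - start}
    elapsed = time.monotonic() - start
    return {"ok": status == 200 and elapsed < SLOW_THRESHOLD_S,
            "status": status, "elapsed": elapsed}

if __name__ == "__main__":
    print(probe(URL))
```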

Gardner: You're a big user of HP. Tell us a little bit about the HP Business Services Management (BSM) suite, your involvement, and also the performance.

Several iterations

Siegrist: Ten years ago, we started out with HP Network Node Manager (NNM), which is the network monitoring solution, and then moved into HP OpenView Operations (OVO), which is now called Operations Manager. So it’s been through several iterations, but over the last 10 years, we've made lots of decisions about what tools to use.

We've always tried to go with best-of-breed where appropriate, and it happens that, for us, best-of-breed has been the HP solution set. It’s enabled us to get deeper into the applications and given us multiple ways to solve different problems.

Nothing is free in life. So we always want to try and give our customers options for which path they want to take and what level of the knowledge they want in the application space.

To this end, with the APM SaaS solution, it’s an operational expense. They don’t have to buy it whole. They don’t have to deploy everything. They can just start. So, as I said, it's an a-la-carte model. It lets them choose just a little or a lot, and then bite off as big a piece of the pie as they're willing to take on.
The value is that the face of customer care in healthcare is still doctors and nurses.

Our customer base is interested in trying to have a way to interact with the doctors, and as more-and-more tablets and PCs and smartphones hit the market, we're looking for delivery solutions that provide that.

Our partner for our EMR is Epic. We use their solution for contacting and working with the doctors. It's called MyChart, and that tool gives them the ability to do that. As more-and-more of these devices get out there, the population gets younger. They have an expectation of service delivery through that channel, and Sentara is working to meet that expectation. This gives us the ability to monitor that application to make sure it's working properly.

Gardner: You mentioned earlier that it’s about SaaS and the ability to pick and choose the type of deployment model for your apps, services, and even infrastructure. Do you have any thoughts about where you're heading in terms of more choice in hybrid or cloud models?

Siegrist: For most health organizations, and I'm probably in line here with my peers as well, there's always a concern about HIPAA. We're trying to make sure that, as we move forward with monitoring these things with the data landing in the cloud, we are protecting patient data. We're moving tentatively into that space and doing a little bit at a time to prevent and avoid any risk associated with patient data loss.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Wednesday, October 2, 2013

Big data changes the customer analysis game for Yammer, Spil Games and Jobrapido

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

The next edition of the HP Discover Performance Podcast Series provides deep insights into how big data is changing the game around customer analytics.

This case study panel discussion highlights how various organizations are developing the means to deliver far better analytics about their customers. Learn how high-performing and cost-effective big data processing enables rapid learning about customers' wants and preferences.

The expert panel consists of Rob Winters, Director of Reporting and Analytics at Spil Games, based in Amsterdam; Davide Conforti, Business Intelligence Director at Jobrapido, based in Milan; and Pete Fishman, Director of Analytics at Yammer, in San Francisco.

The discussion, which took place at the recent HP Vertica Big Data Conference in Boston, is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Businesses have been analyzing their customers for a long time. What’s different now?

Fishman
Fishman: We're a cloud software service, and the data is big. Our data on the customers now all lives in a central place. By aggregating across the companies that are using your software, you get really significant sample sizes and real inference, both in an economic sense, in terms of measuring the lift, and in a statistical sense, because the sample sizes are so big.

That’s the starting point for making analytics valuable and learning about your customers.

Different problems

Winters: For me, the problem space is extremely different from what I was dealing with a couple of years back.

I was in telecom before this. There, you're dealing with 25 million people, and if you rescore them once a month, that’s fast enough. On a web scale problem, I'm dealing with 200 million customers and I have to rescore them within 10 or 15 minutes. So you're capturing significantly more data. We're looking at billions of records per day coming into our systems. We have to use it as fast as possible, because with the customer experience online, minutes matter.
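Restated as back-of-the-envelope rates, those figures look roughly like this (the daily record count is assumed at two billion purely for illustration):

```python
# Back-of-the-envelope arithmetic for the scale described above: rescoring
# 200 million customers within about 15 minutes, with billions of records per
# day flowing in. The daily total below is an assumption for illustration.
customers = 200_000_000
rescore_window_s = 15 * 60
records_per_day = 2_000_000_000      # "billions" -- assume 2B for illustration

print(f"Scores per second needed: {customers / rescore_window_s:,.0f}")
print(f"Ingest rate:              {records_per_day / 86_400:,.0f} records/second")
```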

Conforti
Conforti: It’s absolutely the same story with us. We have about 40 million unique visitors per month now. We've grown by double digits since our start as a startup in 2006. Now, everything is about user interaction, how our users behave on-site, and how we can engage them more on-site and provide them with a tremendous, ad-hoc user experience.

Winters: We're primarily a platform. We do some game development and publishing, but our core business is just being the platform where people can come and find content that’s interesting to them. We've been around for about nine years.

Winters
We started out as just a Dutch [gaming] company and then we've acquired other local domain names in a variety of languages. At this point, we have about 50 different platforms, running in about 20 different languages. So we support customers from all over the world. In a given month, we have over 200 countries with traffic onto our sites.

The entire business is changing, and you're competing based on the customer experience that you can deliver. We have a couple of target audiences: young girls, ages 8 to 14; boys; and then women.

Fishman: Yammer is a startup in San Francisco. We were acquired about a year ago by Microsoft and we're part of the larger Office organization. We view ourselves as enterprise social, taking this many-to-many communication model and making communication at your company much more efficient.

It's about surfacing relevant knowledge and experts and making work lives better. I run an analytics team there, and we essentially look at the aggregate customer behaviors and what parts of our tool people are using.

Social networks

This was a really revolutionary idea that our founders, David Sacks and Adam Pisoni, had way back when Facebook wasn't nearly as relevant as it is today. We've leveraged a lot of the ways that people have learned to interact in their social lives to bring some of that efficiency to communication at work. They saw that these social networks would grow and be relevant in the private, secured context of your business.
Conforti: Jobrapido started in 2006 as an entrepreneurial challenge that Vito Lomele, an Italian guy, took on in Milan. It's quite a challenge to live in the online market in Italy, because the talent pool isn't as wide as in the U.S. or in other European countries. What we do is provide job-seekers the opportunity to find their new job.

We're an online job-search engine and we currently operate in 58 different countries with more than 20 languages. We're all in this big headquarters in Milan with a lot of different nationalities, because of course, we provide the service in local languages for most of our customers.

Recently, we were purchased by the Daily Mail group, a big media group based in London. For us, everything from job-seeker acquisition to retention and engagement depends on consistent quality and the user experience on-site. We use our big data warehouse to understand how to better attract and retain customers on the basis of their preferences. And we also use it to tweak our matching algorithm, which works more or less like a Google algorithm.

We crawl a lot of content from different sources, both job boards and other job sites, or directly from the career pages of individual companies. We put it all together in a big database and, using statistical tools, we infer which kinds of rankings our job-seekers want to see.

So it's a pretty heavy data-crunching exercise that we do every day on millions and millions of different sponsored or organic postings.
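As a toy illustration of that kind of ranking inference, and emphatically not Jobrapido's actual algorithm, the sketch below blends a simple query-match signal with an engagement signal derived from past clicks. All postings, counts, and weights are invented.

```python
# A toy illustration of ranking aggregated job postings: combine a simple
# query-match signal with a click-through signal from historical engagement.
# This is a sketch of the general idea only, with invented data and weights.
postings = [
    {"title": "Software Engineer, Python", "clicks": 120, "impressions": 2000},
    {"title": "Senior Software Engineer",  "clicks": 90,  "impressions": 1000},
    {"title": "Data Entry Clerk",          "clicks": 40,  "impressions": 5000},
]

def score(posting, query_terms):
    title = posting["title"].lower()
    match = sum(term in title for term in query_terms) / len(query_terms)
    ctr = posting["clicks"] / max(posting["impressions"], 1)
    return 0.7 * match + 0.3 * ctr          # weights are arbitrary

query = ["software", "engineer"]
for p in sorted(postings, key=lambda p: score(p, query), reverse=True):
    print(f"{score(p, query):.3f}  {p['title']}")
```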

For example, if Yammer guys or if Spil Games guys want to hire a software engineer, they can directly promote their sponsored ads on Jobrapido without having to sponsor them on a job board. So we're trying to aggregate and simplify the chain of job search.

Gardner: What was the problem you had to solve when it comes to getting at this big data for analysis?

Winters: For me the challenge was multi-fold. How do you deal with this data problem, with this variety and volume information? How do you present it in a meaningful fashion for employees who've never looked at data before, so that they can make good decisions on it? And how do you run models against it and feed that back into a production environment as quickly as possible, so that you can give those customers a better experience than they were ever getting before on your platform?

My problem was that no one had ever tried to do it in my company before. We walked in with effectively a clean slate. But as you start to bring in different data sources, you start with all the stuff that you know you're going to need right away.

You start seeing needed links for other data sources. At this point, we're pulling data from thousands of databases, merging with dozens of application programming interfaces (APIs). You're pulling in your web log data, so that you can personalize for those folks who aren’t giving you registration information.

Large data

When we first started looking for a data warehouse appliance or application, we were running Postgres with no indices, just copies of production data. For data guys, that means that a query against a table of a couple of million rows could take eight hours to execute.

We knew that a typical row-based solution was out. So we started looking at some of the other applications out there. The big ones are Teradata, Exadata, and Greenplum, but you're going to have to mortgage the house of every employee in the company to be able to afford a license for those applications, and we're a pretty small company. So those were out.

Then, we started looking at some of the other boutique vendors like Infobright, and basically we saw that with HP Vertica, we can have relatively low load on our database administrator (DBA), so we can develop quickly without a lot of maintenance.

The pricing model fits what we need to achieve, and the performance is so good that we don't have to spend a ton of time on optimization now. We can basically move very rapidly along this path of becoming a data-driven organization without having to get held up on index optimization or trying to optimize our queries and rewrite paths.

We can just throw a lot of stuff into the system, smash it together, take the results, and get big wins for the company quickly.

We have a data center, and we do everything on our own private servers. For us, the next step is probably going to be moving more into a private-cloud model, and hopefully, Vertica will work in that environment as well.
Gardner: At Yammer, what was your big data problem and how did you solve it?

Fishman: Our problem set was that there were a lot of people trying to get into the enterprise social space. A lot of social networks are popping up, and essentially competing for attention at work is a challenge.

We felt that data was necessary to have a competitive advantage. David Sacks and Adam Pisoni had a vision of developing a consumer software company with rapid iteration. With that rapid iteration you get an extra advantage if you're able to reorient yourself based on what part of the product is working. Our data problems were largely about making data be a competitive advantage in our development methodology.

Gardner: What was it about Vertica that was instrumental to the point where you've adopted it? Is it a concurrency issue, a volume issue, speed, or all the above?

It's about speed

Fishman: It's all of the above, but the real highlight is always going to be speed, especially given the incredible competition for talent, not just in the Bay Area but all over, and particularly in the data field.

Anybody who has data in their title is someone who's highly sought after. Those folks are a challenge to keep and to keep excited about the projects they're working on, so the ability to minimize their cycle times with a solution that lets them maximize their own abilities is really critical. It's the same in our space, and in software development in general.

When we take on these big risks and challenges, the ability to very quickly identify whether we're going in the right direction, and then reorienting where we are going, has been really critical to Yammer being successful.

Gardner: Davide, how did you get a handle on data problems?

Conforti: When I joined Jobrapido, we already ran tons of A/B tests, which are the lifeblood of our product innovation. We want to test everything, from changing the color or the font of one button to a different layout, because these have a tremendous impact on user engagement.

Before, we used the Google Analytics tools, but we didn't like them that much, because they work on sampled data, so you hardly ever reach statistically meaningful results. We decided to build a data warehouse to assure flexibility, performance, and also a higher level of control and data consistency. That's end-to-end control, from the source through to the visualization, in order to make the results more actionable in terms of product development.
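As a concrete example of why full event counts beat sampled data for A/B evaluation, here is a minimal two-proportion z-test; the conversion counts below are invented for illustration.

```python
# A minimal two-proportion z-test of the sort that becomes meaningful once you
# evaluate on full event counts instead of sampled data. Counts are invented.
from math import sqrt, erfc

def ab_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))          # two-sided
    return p_a, p_b, z, p_value

# Variant B's button layout vs. control A, using full (not sampled) counts.
p_a, p_b, z, p = ab_test(conv_a=10_500, n_a=500_000, conv_b=10_980, n_b=500_000)
print(f"A: {p_a:.4%}  B: {p_b:.4%}  z={z:.2f}  p={p:.4f}")
```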

With Vertica, we did exactly this. We poured all the different data sources into one bucket, organized it, and now we have a full control over the data model. With my team, I manage these data models. It's fascinating how fast you can add pieces to the puzzle or remove others that are no longer interesting, because our business model, of course, is a living animal, a living creature.

We really appreciate this flexibility and the high level of control that Vertica allows. This has improved our innovation throughput a lot, and it's going to improve it even more in the future.

Currently, we crunch about 30 GB of data on Vertica every day (that is, we load about 30 GB a day into Vertica). But we're going to double that in a few months, because we're adding more stuff. We want to know more about the click patterns of our job-seekers on the site, and that is massive data flowing into Vertica. Also, our licensing in terabytes will likely double in the future.
Increased performance

Another hard fact that I can share with you guys is that no one using Vertica has to be satisfied with the first implementation of a query. If you're able to optimize it, you can often increase the performance of the query by more than 100 percent. This is my personal experience with consultants and advisers. Vertica is happy to provide the support, and this is really value-adding.

Winters: As far as metrics of success, when we were doing our proof of concept (POC), we looked at primarily query performance. At that point, we weren’t looking at using it for prediction and personalization, but just for analytics and reporting.

What we measured was against an indexed Postgres database, where we had done some optimization on the data. Our queries were running more than 1,000 percent faster, and Vertica was scaling pretty linearly, whereas with Postgres, when we put more data into the tables, they started choking and just died completely.

For me, it allowed me to actually do my job and have my team do their jobs, which is a pretty big metric of success.

The other thing is that with a relatively small cluster, we can support hundreds of people and reports directly accessing the database, a dozen analysts or people who directly query information out of the database, and all of our personalization activities simultaneously with minimal performance hiccups. That’s a big metric of success.

Fishman: I have similar feedback to Rob's, comparing against a Postgres database. The speeds are at least one -- and probably closer to two or more -- orders of magnitude faster. Certainly on the cost side, it's important with data to consider the whole cost. So this is sort of a theme.

End-to-end costs

There are a variety of costs in managing data and teasing out the useful insights that aren't necessarily in the sticker price. When considering a data solution, people should consider the end-to-end costs. What's really the cost per insight, as opposed to the cost per terabyte or the cost per whatever?

We certainly feel that Vertica has been our best solution. We've been customers for over three years. So it's quite a long relationship. I couldn’t imagine going back to a multi-day query, or something like that.

One thing that Davide mentioned is that he's forecasting how much data he will be putting into Vertica. I'm a forecaster myself by trade. Back in 2010, we were doing some estimates of where we would be by the end of 2011 in terms of our data volumes. This is a pretty simple extrapolation, and I got it wrong by at least an order of magnitude.
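A quick sketch of how that kind of forecast goes wrong: projecting a recent increment forward linearly badly underestimates a series that is actually doubling on a fixed cadence. The starting volume and doubling period below are hypothetical.

```python
# Illustration of why a simple extrapolation can miss by roughly an order of
# magnitude: linear projection of early growth vs. an exponentially growing
# series. The starting volume and doubling period are hypothetical.
start_tb = 1.0          # data volume at the start, in terabytes
doubling_months = 3     # assumed doubling period
months = 18             # forecast horizon

actual = start_tb * 2 ** (months / doubling_months)
# Naive forecast: take the growth observed over the first three months
# and project it forward as a constant monthly increment.
observed_increment = (start_tb * 2 ** (3 / doubling_months) - start_tb) / 3
naive = start_tb + observed_increment * months

print(f"Naive linear forecast: {naive:6.1f} TB")
print(f"Exponential reality:   {actual:6.1f} TB  ({actual / naive:.1f}x higher)")
```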

What we found is that when you start to get real insights from data, you want to get a little bit more, and collect it maybe here or there. Also, as our product was growing, we faced some real exponential growth in the data and adopted clever solutions for optimizing the metric that we care about -- minimizing the cost per insight.

There are many things going on simultaneously. So tripping over really valuable insights can happen a lot more easily than when you're more naïve about it. Essentially, you're facing headwinds in that finding insights becomes harder. At the same time, you have larger data volumes and some economies of scale there. So there are a lot of things interacting simultaneously, but clearly one way to drive down that metric is best-in-breed tools.
Gardner: Of course, it's better to get the information to the people who can use it than to simply look to cut costs.

Fishman: Of course. If you view analytics as a cost center, that's the wrong view. It should be aimed at optimizing revenue streams. We micro-optimize the product, we micro-optimize sales and marketing, the business. Analytics is about improving everybody at their job, making data available to allow people to be more effective.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Tuesday, October 1, 2013

Enterprise architecture: The key to cybersecurity

This guest post comes courtesy of Jason Bloomberg of ZapThink, a Dovel Technologies company.

By Jason Bloomberg


When I first discuss security in our Licensed ZapThink Architect (LZA) SOA course, I ask the class the following question: if a building had 20 exterior doors, and you locked 19 of them, would you be 95 percent secure? The answer to this 20-doors problem, of course, is absolutely not – you’d be 0 percent secure, since the bad guys are generally smart enough to find the unlocked door.

While the 20-doors problem serves to illustrate how important it is to secure your services as part of a comprehensive enterprise IT strategy, the same lesson applies to enterprise cybersecurity in general: applying inconsistent security policies across an organization leads to weaknesses hackers are only too happy to exploit. However, when we’re talking about the entire enterprise, the cybersecurity challenge is vastly more complex than simply securing all your software interfaces. Adequate security involves people, process, information, as well as technology. Getting cybersecurity right, therefore, depends upon enterprise architecture (EA).

Understanding the context for cybersecurity

A fundamental axiom of security is that we can never drive risk to zero. In other words, perfect security is infinitely expensive. We must therefore understand our tolerance for risk and our budget for addressing security, and ensure these two factors are in balance across the organization. Fundamentally, it is essential to build threats into your business model, and do so consistently.

Bloomberg
Credit card companies, for example, realize that despite their best efforts, there will always be a certain amount of fraud. True, they spend money to actively combat such fraud, but not as much as they could. Instead, they balance the budget for fighting such crime with the money lost through fraud in order to determine the acceptable level of risk.

In many organizations, however, the tolerance for risk and the budget for security are not in balance – or to be more precise, the balance is different in different departments or contexts across the enterprise. Part of this problem is due to the lottery fallacy, which we recently discussed in the context of big data. People tend to place an inordinate emphasis on improbable events. This fallacy frequently occurs in the context of risk, which is why we’re more worried about airplane crashes than car accidents, even though car crashes are far, far more likely.

But the lottery fallacy isn’t the only problem. Politics is a much greater issue. Department heads have their own ideas about tolerable risk in their fiefdoms, and the risk tolerance for one division may be very different from another. Furthermore, in most organizations, certain departments are responsible for security while others are not. Now department heads have a much more difficult time evaluating their level of risk and calculating their budget for security, as it’s someone else’s budget and supposedly someone else’s problem.

The solution to these challenges is the effective use of EA. You must think like an insurance company: undertake an objective analysis of the known risks and calculate the average cost of threats over all the activities in your organization. Just as an insurance company must be able to set their premiums high enough to cover losses on average, you must set your security budget high enough to cover your threats. Of course, sometimes a particular threat costs more than you expect, just as a catastrophic loss may cost more than a lifetime of premiums for the affected insurance customer. But the average still generally works out to your advantage.

With risk comes reward, but not all risks have the same promise of reward. In other words, some bets are better than others. Properly applied, EA can inform the organization about which bets have better expected returns than others, so that the organization can place its bets more rationally by distributing the risk across the organization in a fact-based manner.
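One way to picture the insurance-style calculation is an annualized expected-loss tally: for each known threat, multiply its estimated yearly probability by its expected impact, sum the results, and compare the total with the security budget. The sketch below does exactly that, with invented placeholder figures.

```python
# A sketch of the "think like an insurance company" calculation: annualize the
# expected cost of each known threat (yearly probability times expected
# impact) and compare the total against the security budget. All threat
# figures here are invented placeholders, not real actuarial data.
threats = [
    # (name, probability_per_year, expected_impact_usd)
    ("Phishing-led credential theft", 0.60, 250_000),
    ("Ransomware outbreak",           0.10, 2_000_000),
    ("Insider data leak",             0.05, 1_500_000),
]

annual_expected_loss = sum(p * impact for _, p, impact in threats)
security_budget = 400_000

print(f"Annualized expected loss: ${annual_expected_loss:,.0f}")
print(f"Security budget:          ${security_budget:,.0f}")
print("Budget covers expected loss" if security_budget >= annual_expected_loss
      else "Risk tolerance and budget are out of balance")
```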

Cybersecurity: dealing with change

Even organizations with robust EA efforts typically don’t leverage architecture to drive their cybersecurity strategies. The reasons for this lack are diverse, and often include political and competence issues, but the most fundamental reason is that traditional EA doesn’t deal well with change. Cybersecurity is an inherently dynamic challenge: hackers keep inventing new attacks, new technologies continually introduce new vulnerabilities, and the interrelationships among the various trends in IT are increasingly convoluted, as we illustrate on our new ZapThink 2020 poster.

In contrast, the agile architecture approach I champion in my book, The Agile Architecture Revolution, calls for EA that focuses on change by explicitly working at the “meta” level: instead of simply architecting the things themselves, focus on architecting how those things change. For example, instead of focusing on the processes in the organization, architect the meta-processes: processes for how processes change. Similarly, the role of software development isn’t simply to build to requirements. Instead, the focus should be on building systems that respond to changing requirements, what my book calls the meta-requirement of business agility.

So too with architecting for security. The focus shouldn’t be on threats, but rather on how those threats might change. At the technology level, this focus on change shifts security from a static “locked door” approach to the immune system metaphor I discussed last year. But there’s more to architecting for security than the technology. At the organizational level, effective EA will help resolve shadow IT issues, which can otherwise lead to unmanaged security threats. At the process level, EA will address social engineering challenges like phishing attacks. Securing your technology without applying a comprehensive, best-practice approach to organizational and process security is tantamount to leaving some of your doors unlocked.

The ZapThink take

Remember the scene from Apollo 13, where the Flight Director goes around the room, asking each division leader for a go/no-go decision? Essentially, every division leader was a stakeholder in all important decisions, and any one of them had the ability to nix any idea with a thumbs-down. The thinking behind this approach was one of risk mitigation: only if there is a unanimous thumbs-up can the organization make a critical decision to take action.

Just so in the enterprise. Your EA should require the security team to be part of the planning for all systems (both human and technology) across the organization. Without EA, security tends to be an afterthought. Instead, security must be a stakeholder in all critical decisions across the enterprise.

EA should also have a seat at the table, of course. By giving your enterprise architects the ability to offer thumbs-up or thumbs-down opinions on critical decisions, you are essentially saying that you mandate EA. And without such a mandate, architects find themselves in the proverbial ivory tower, creating artifacts and standards that the rank and file consider optional – which is a recipe for disaster. There’s no surer way to increase your cybersecurity risk than to treat EA as anything but absolutely necessary to the proper functioning of your organization.

This guest post comes courtesy of Jason Bloomberg of ZapThink, a Dovel Technologies company.

You may also be interested in:


Thursday, September 26, 2013

Application development efficiencies drive Agile payoffs for healthcare tech provider TriZetto

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

The next edition of the HP Discover Performance Podcast Series highlights how healthcare technology provider TriZetto has been improving its development processes and modernizing its ability to speed the applications lifecycle process.

To learn more about how quality and Agile methods tools better support a lifecycle approach to software, we sat down with Rubina Ansari, Associate Vice President of Automation and Software Development Lifecycle Tools at TriZetto.

The discussion, which took place at the recent HP Discover 2013 Conference in Las Vegas, is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Where are you in terms of moving to Agile processes?

Ansari: TriZetto currently is going through an evolution. We’re moving from a structured waterfall methodology to scaled Agile. As you mentioned, that's one of the innovative ways we're looking at getting our releases out faster with better quality, and being able to respond to our customers. We realize that Agile, as a methodology, is the way to go when it comes to all three of those things I just mentioned.

We're currently in the midst of evolving how we work. We’re going through a major transformation within our development centers throughout the country.

TriZetto is a healthcare software provider. We have the software for all areas of healthcare. Our mission is to integrate different healthcare systems to make sure our customers have seamless information. Over 50 percent of the American insured population goes through our software for their claims processing. So, we have a big market and we want to stay there.
Leaner and faster

Our software is very important to us, just as it is to our customers. We're always looking for ways of making sure we’re leaner, faster, and keeping up with our quality in order to keep up with all the healthcare changes that are happening.

Gardner: You've been working with HP Software and Application Lifecycle Management (ALM) products for some time. Tell us a little bit about what you have in place, and then let's learn a bit more about the Agile Manager capabilities that you're pioneering.

Ansari
Ansari: We've been using HP tools for our testing area, such as the QTP products, Performance Center, and Quality Center. We recently went ahead with ALM 11.5, which has a lot of cross-project abilities. As for agile, we're now using HP Agile Manager.

This has helped us move forward fairly quickly into scaled agile using HP Agile Manager, while integrating with our current HP tools. We wanted to make sure that our tools were integrated and that we didn’t lose that traceability and the effectiveness of having a single vendor to get all our data.

HP Agile Manager is very important to us. It's a software-as-a-service (SaaS) model, and it was very easy for us to implement within our company. There was no concept of installing, and the response that we get from HP has been very fast, as this is the first experience we’ve had with a SaaS deliverable from HP.

They're following agile, so we get releases every three months. Actually, every few weeks, we get enhancements for defects we may find within their product. It's worked out very well. It's very lightweight, it's web-based SaaS and it integrates with their current tool suite, which was vital to us.

We have between 500 and 1,000 individuals who make up development teams throughout the United States. For Agile Manager, the last time we checked, adoption was approximately 400. We're hoping to get up to 1,000 by the end of this year, so that everyone is using Agile Manager for all their agile/scrum teams and their backlogs and development.
Gardner: Do you have any sense of how much faster you're able to develop? What are the paybacks in terms of quality, traceability, and tracking defects? What's the payback from doing this in the way you have?

Working together

Ansari: We’ve seen some payoff, but I think the most is yet to come as we roll this out. One of the things that Agile Manager promotes is collaboration and working together in a scrum team. Agile Manager, having the software built around the agile processes, makes it very easy for us to roll out an agile methodology.

This has helped us collaborate better between testers and developers, and we're finding those defects earlier, before they even happen. We’ll have more hard metrics around this as we roll this out further. One of the major reasons we went with HP Agile Manager is that it has very good integration with the development tools we use.

They integrate with several development tools, allowing our testers to see what changes occurred and what piece of code has changed for each defect or enhancement the tester is testing. So that tight integration with other development tools was a very pivotal factor in our decision to go forward with HP Agile Manager.

Gardner: So Rubina, not only are you progressing from waterfall to agile and adopting more up-to-date tools, but you’ve made the leap to SaaS-based delivery for this. If that's working out well, as you’ve said, do you think this is going to lead to doing more with other SaaS tools, tests, and capabilities, and maybe even looking at cloud platform-as-a-service opportunities?

Ansari: Absolutely. This was our first experience, and it is going very well. Of course, there were some learning curves and some learning pains. Being able to get these changes so quickly, and not having to make them ourselves, was kind of a mind-shift for us. We're reaping the benefits from it, obviously, but we did have to have a few more scheduled conversations, release notes, and documentation about changes from HP.

We're not new to SaaS. We're also looking at offering some of our products in a SaaS model. So we realize what's involved in it. It was great to be on the receiving end of a SaaS product, knowing that TriZetto themselves are playing that space as well.

There's always so much more to improve. What we’re looking for is how to quickly respond to our customers. That means also integrating HP Service Manager and any other tools that may be part of this software testing lifecycle or part of our ability to release or offer something to our clients.
We'll continue doing this until there is no more space for efficiency. But, there are always places where we can be even more effective.

The technologies that we’re advancing toward will also allow us to go easily into the mobile space once we plan for and undertake that.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Monday, September 23, 2013

Navicure gains IT capacity optimization and performance monitoring using VMware vCenter Operations Manager

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.

The next VMworld innovator interview focuses on how a fast-growing healthcare claims company is gaining better control and optimization across its IT infrastructure. Learn how IT leaders at Navicure have been deploying a comprehensive monitoring and operational management approach.

To understand how they're taming IT complexity as they set the stage to adopt the latest in cloud-computing and virtualization infrastructure developments, join Donald Wilkins, Director of Information Technology at Navicure Inc. in Duluth, Georgia.

The discussion, which took place at the recent 2013 VMworld Conference in San Francisco, is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Why is your organization so focused on taming complexity?

Wilkins
Wilkins: At Navicure, we've been focused on scaling a fast-growing business. If you incorporate very complex infrastructure, it becomes more difficult to scale. So we're focused on technologies that are simple to implement, yet have a lot of headroom for growth in the storage, the infrastructure, and the software we use. We do that in order to scale at the rate we need to satisfy our business objectives.

Gardner: Tell us a little bit about Navicure, what you do, how is that you're growing, and why that's putting a burden on your IT systems.

Wilkins: Navicure has been around for about 12 years. We started the company in about 2001 and delivered the product to our customers in the late 2001-2002 time-frame. We've been growing very fast. We're adding 20 to 30 employees every year, and we're up to about 230 employees today.

We have approximately 50,000 physicians on our system. We're growing at a rate of 8,000 to 10,000 physicians a year, and that's healthy growth. We don't want to grow so fast that we water down our products and services, but at the same time, we want to grow at a pace that enables us to deliver better products for our customers.

Revenue cycle management

Claim clearinghouses have been around for a couple of decades now. We've evolved from that claim-clearinghouse model to what we refer to as revenue cycle management, a term we pioneered early on when we started the company.

We take the transactions from physicians and send them to the insurance companies. That's what the clearinghouse model is. But on top of that, we added a lot of value-added services and a lot of analytics around those transactions to help providers generate more revenue: they get paid faster, and they get paid the first time through the system.

It was very costly for payments to be delayed for weeks because of poorly submitted transactions or denials from the insurance company when something was coded wrong.

We try to catch all of that, so that they get paid the first time through. That's the return on investment (ROI) our customers are looking for when they look at our products: lower accounts receivable (AR) days and more revenue at the bottom line.
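As a purely illustrative sketch of the kind of pre-submission claim "scrubbing" described above (this is not Navicure's actual rules engine or data model; the codes, fields, and checks are hypothetical), a few simple validations applied before a claim leaves the practice can prevent the denials and multi-week delays mentioned here:

```python
# Hypothetical pre-submission claim "scrubbing" sketch. The rules and codes
# below are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class Claim:
    patient_id: str
    payer_id: str
    procedure_code: str   # e.g., a CPT-style code (hypothetical values below)
    diagnosis_code: str
    charge: float
    errors: list = field(default_factory=list)

# Hypothetical list of procedure codes the payer accepts
VALID_PROCEDURES = {"99213", "99214", "80053"}

def scrub(claim: Claim) -> bool:
    """Return True if the claim passes basic checks; otherwise record errors."""
    if not claim.patient_id:
        claim.errors.append("missing patient ID")
    if claim.procedure_code not in VALID_PROCEDURES:
        claim.errors.append(f"unknown procedure code {claim.procedure_code}")
    if not claim.diagnosis_code:
        claim.errors.append("missing diagnosis code")
    if claim.charge <= 0:
        claim.errors.append("charge must be positive")
    return not claim.errors

claim = Claim("P1001", "PAYER42", "99213", "", 145.00)
if scrub(claim):
    print("submit to payer")
else:
    print("hold for correction:", claim.errors)
```

The point is simply that catching an avoidable error before submission is far cheaper than a denial and a resubmission weeks later.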

Customer service is one of the cornerstones of our business. We feel that our customers are number one, and retaining those customers is one of our primary goals.

Gardner: Tell us a little bit about your IT environment.

Wilkins: The first thing we did at Navicure, when we started the company, was decide that we didn't want to be in the data-center business. We wanted to use a colocation provider that does that work at a much higher level than we ever could. We wanted to focus on our product and let the colo focus on what they do.

They serve us on the infrastructure side, and we can focus on building a good product. With that, we adopted the grid, or rack, approach very early on. This means we wanted a foundational structure that we could simply keep building on as the business and the transaction volume grew.

That terminology has changed over the years, and it might be called software-defined infrastructure today, but back then the idea was to build infrastructure with a grid approach, so we could plug in more modules and components to scale out as we scaled up.

With that, we continued to evolve what we do, but that inherent structure is still there. We need to be able to scale our business as our transactional volume doubles approximately every two years.

Gardner: And how did you begin your path to virtualization, and how did that progress into this more of a software-defined environment?

Ramping up fast

Wilkins: In the first few years of the company's operation, we had enough headroom in our infrastructure that it wasn't a big issue. But about four years in, we realized we were going to hit a point where we would have to start ramping up really fast.

Consolidation was not something that we had to worry about, because we didn’t have a lot to consolidate. It was a very early product, and we had to build the customer base. We had to build our reputation in the industry, and we did that. But then we started adding physicians by the thousands to our system every year.

With that, we had to start adding infrastructure. Virtualization came along at just the right time: we could add capacity virtually, faster and more efficiently than we ever could have by adding physical infrastructure.

So it became a product that we put into test, dev, and production all at the same time, and it allowed us to meet the demands of the business.

Gardner: Of course, as many organizations have used virtualization to their benefit, they've also recognized that there is some complexity involved. Better management means further optimization, which further reduces costs while maintaining performance requirements. How did you then focus on managing and optimizing this over time?

Wilkins: Well, one of the things we try to do when we look at products and services is to keep it simple. I have a very limited staff, and the staff needs to be able to drive to the point of whatever issue they're researching or inspecting.

As we've added technologies and services, we've tried to add those that are very simple to scale and very simple to operate, and we look at all these different tools to make that happen. That has led us to new products from vendors like VMware, which have also been driving toward that same simplicity in their offerings.

For years, we did monitoring with network-based tools. Those drive only so much value. They give us things like uptime alerting and responsiveness, but only after issues happen. We want to evolve toward a more proactive approach to monitoring.

It's not so much about how we can fix a problem when there is one. It's more about keeping the problem from happening in the first place. We've looked at some products for that, and recently we implemented vCenter Operations Manager.

That product gives us a different twist than other SNMP monitoring tools do. It gives us a history of what's going on, but also a forward-looking analysis of how that history will change, based on our historical trends.

New line-up

Gardner: Of course, here at VMworld, we're hearing about vSphere improvements and upgrades, but also the arrival of VMware vCloud Suite 5.5 and VMware vSphere with Operations Management 5.5. Is there anything in the new line-up that is of particular interest to you, and have you had a chance to look it over?

Wilkins: I haven't had a chance to look over the most recent offering, but we're running the current version. Again, for us, it's the efficiency mechanisms inside the product that drive the most value, making sure we can budget a year in advance for the expanding infrastructure we need to meet demand.

vCenter Operations Manager is key to understanding your infrastructure. If you don’t have it today, you're going to be very reactive to some of your pains and the troubles you're dealing with.

The product lets you research various problems and services by drilling down from the cluster level into the virtual-machine level to find your problems and pain points, which lets you isolate an issue more quickly. At the same time, it allows you to project where you're growing and where you need to put your money into resources, whether that's more storage, compute, or network.

That's where we're seeing value out of the product, because during budget cycles it allows me to say: looking at our infrastructure and our current growth, we will be out of resources by this time, and we need to add this much, based on that growth. And that's barring any additional new products and services we may come up with, which would only add to the demand. We're growing at this pace, and here are the numbers to prove it.
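As a minimal sketch of that kind of trend-based capacity projection (this is not VMware's vCenter Operations Manager algorithm or Navicure's data; the utilization samples, capacity figure, and one-year horizon are hypothetical), you can fit a straight line to historical usage and read off when it crosses current capacity and how much headroom the next budget year would require:

```python
# Hypothetical trend-based capacity forecast: fit a linear trend to historical
# utilization samples and project when provisioned capacity runs out.
from datetime import date, timedelta

# (date, storage used in TB) -- hypothetical quarterly samples
samples = [
    (date(2013, 1, 1), 40.0),
    (date(2013, 4, 1), 46.5),
    (date(2013, 7, 1), 53.0),
    (date(2013, 10, 1), 60.0),
]
capacity_tb = 75.0           # assumed total provisioned capacity
budget_horizon_days = 365    # plan one year ahead, as in the interview

# Ordinary least-squares fit of usage versus days elapsed
xs = [(d - samples[0][0]).days for d, _ in samples]
ys = [u for _, u in samples]
n = len(xs)
x_mean, y_mean = sum(xs) / n, sum(ys) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
        sum((x - x_mean) ** 2 for x in xs)            # TB per day
intercept = y_mean - slope * x_mean

# When does the trend line cross current capacity?
days_to_exhaustion = (capacity_tb - intercept) / slope
exhaustion_date = samples[0][0] + timedelta(days=round(days_to_exhaustion))

# How much extra capacity does the trend imply for the budget year?
usage_at_horizon = intercept + slope * (xs[-1] + budget_horizon_days)
shortfall_tb = max(0.0, usage_at_horizon - capacity_tb)

print(f"Projected exhaustion date: {exhaustion_date}")
print(f"Capacity to add for the next budget year: {shortfall_tb:.1f} TB")
```

The value is the budgeting conversation this enables: a projected exhaustion date and a concrete capacity delta, rather than a reactive alert after resources have already run out.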

When you have that information in front of you, you can build a business case around it that helps the CFO and the finance people understand what you're up against and what you have to deal with on a day-to-day basis to operate the business.

Gardner: What sort of paybacks are there when you do this right?

Wilkins: Just being able to drive more density in our colo by being virtualized is a big value for us. Our footprint is relatively small. As for an actual dollar amount, it's hard to pin one down. We're growing so fast that we're focused on keeping up with demand, and we've been meeting and exceeding it.

Really, the ROI is that our customers aren't experiencing major troubles because our infrastructure can't expand fast enough. That's our goal: to drive high availability and low downtime for the infrastructure, and we can do that with VMware and their products and services.

We're a current customer of Site Recovery Manager. That's been a staple of our virtual infrastructure since 2008. It drives all of the planning and testing of our virtual disaster recovery (DR) plan. I've been a very big proponent of that product for years, and we couldn't do without it.

There are other products we will be looking at. Desktop virtualization is something that will be incorporated into the infrastructure in the next year or two.

For a small business, the value of that is a little harder to prove from a dollar standpoint. Features like remote working come into play as office space continues to be expensive. It's something we'll be looking at as we expand our operations, especially as more employees work remotely. Desktop virtualization is going to be a critical component of that.

Gardner: How about some 20/20 hindsight? For other folks who are ramping up on virtualization, or getting to the point where complexity is becoming an issue for them, do you have any thoughts on getting started, or lessons learned that you could share?

Trusted partner

Wilkins: The best thing with virtualization is to get a trusted partner to help you get over the hurdle of the technical issues that may come to light.

I had a very trusted partner when I started this in 2005-2006. They sat with me and worked with me, with no compensation whatsoever, to help me work through virtualization. They made the value so clear that it became, "I've got to do this, because there's no way I can sustain this level of operational expense, and of monitoring and managing this infrastructure, if it's all physical."

So, seeing that value proposition from a partner is key, but it has to be a trusted partner, one that has your best interest in mind and not just a new product to sell. It should be somebody who brings a lot to the table but, at the same time, helps you help yourself and lets you learn these products, so that you can implement and research them on your own and see what value they can bring to the company.

It's easy for somebody to tell you how you can make your life better, but you have to actually see it. Then you become passionate about the technology, realize you have to do this, and will do whatever it takes to get it in place, because it will make your life easier.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.

You may also be interested in: