Monday, November 18, 2013

Enterprise Tablets: Choose a Device Nature That Best Supports Cloud Nurturing

Now that server hardware decisions are no-brainers (thanks to virtualization and the ubiquity of multi-core 64-bit x86), deciding on the enterprise-wide purchase of tablet computer types will be the biggest hardware choice many IT leaders will make.

So what guides these tablet decisions? Do the attributes of the mobile device and platform (the nature of the thing) count most? Or is it more important that it conforms to the fast-changing needs of the back-end services and cloud ecosystem? Can the tablet be flexible and adaptive, to act really as many client types in one (the nurture)?

Given how much requirements vary from enterprise to enterprise, this is a hugely complex issue. We've seen a rapidly maturing landscape of new means to the desired enterprise tablet ends in recent years: mobile device management (MDM), containerization and receiver technology flavors, native apps, web-centric apps, recast virtual desktop infrastructure (VDI). It is still quite messy, really, despite the fact that this is a massive global market, the progeny of the PC market of the past 25 years.

Some think that bring your own device (BYOD) will work using these approaches on the user’s choice of tablet. If so, IT will be left supporting a dozen or more mobile client device types and/or versions. You and I know that can’t happen. The list of supported device types needs to be under six, preferably far less, whether it’s BYOD or quasi-BYOD.

Ticking time bomb

Yet enterprises must act. Users are buying and making favorites. Mobility is an imperative. These tablet hardware decisions must be made.

Think of it. You’re an IT leader at a competitive enterprise and rap, rap, rapping on your Windows to get in ASAP are BYOD, mobile apps dev, Android apps, iOS apps, and hybrid-cloud processes.

You have a lot to get done fast amid complex overlaps and interdependencies from your choices that could haunt you — or bless you — for years. And, of course, you have a tight budget as you fight to keep operating costs in check, even as scale requirements keep rising.

Somewhere in this 3D speed chess match against the future there are actual hardware RFPs. You will be buying client hardware for the still large (if not predominant) portion of the workforce that won’t be candidates for BYOD alone. And a sizable portion of these workers are going to need an enterprise tablet, perhaps for the first time. They want you to give it to them.

This cost-benefit analysis vortex is where I decided to break from my primary focus on enterprise software and data-center infrastructure to consider the implications of the mobile client hardware. My dearly held bias is that the back-end strategy and procurement decisions count more than at any time in the last 12 years.

Better not brick

But at the end of the network hops, there still needs to be a physical object, on which the user will get and put in the work that matters most. This object cannot, under any circumstances, become a weak link in the hard-won ecosystem of services that support, deliver, and gather the critical apps and data. This productivity symphony you are now conducting from amid your legacy, modern data center, and cloud/SaaS services must work on every level — right out to those greasy fingertips on the smart tablet glass.

Yes, the endpoint must be as good as the services streamed behind it, yet not hugely better, not a holy shiny object that tends to diminish the rest, not just a pricey status symbol — but a workhorse that can be nurtured and that can adapt as demanded.

So I recently received and evaluated a Lenovo ThinkPad Tablet 2 running Windows 8 as well as an iPad Air running iOS 7. I wanted to get a sense of what the enterprise decisions will be like as enterprises seek the de facto standard mass-deployed tablet for their post-PC workforce. [Disclosure: Intel sent me, for free, a Lenovo ThinkPad as a trial, and I bought my own iPad Air. I do not do any business with Apple, Lenovo, or Intel.]

Let’s be clear, I’m a long-time Apple user by choice, but still run one instance of Windows 7 on a PC just in case there are Windows-only apps or games I need or want access to. This also keeps up my knowledge on Windows in general.

Good enough is plenty

Here’s what I found. I personally love the iPad Air, but the Lenovo ThinkPad Tablet 2 was surprisingly good, certainly good enough for enterprise uses. I will quibble with the efficacy of the stylus, that the Google Chrome browser is better on it than Microsoft IE, that the downloads for both are a pain, and that battery life is a weakness on the Lenovo — but these are not deal breakers and will almost certainly get better.

What’s key here is that the apps I wanted were easily accessed. There’s a store for that, regardless of the device. Netflix just runs. The cloud services and my data/profile/preferences were all gained quickly and easily. Syncing across devices was up and running quickly. Never having used Windows 8, although familiar with Windows 7, was not an issue. I picked it up quickly, very quickly.

Any long-time Windows user, the predominant enterprise worker, will adapt to an Intel-powered Lenovo device running Windows quite well. And enterprise IT departments already know the strengths and weaknesses of Windows, be it 7 or 8, and they know they will have to pay Microsoft its use taxes for years to come in any event, given their dependence on Microsoft apps, servers, services and middleware.

But that same enterprise tablet user will graft well to an Android device, an iOS device (thanks to market penetration of iPod, iTunes and iPhone), or perhaps a Kindle Fire. Users will have their personal cloud affiliations, and the services can be brought to any of these devices and platforms. It can be both a work and a personal device. Or you could easily carry two, especially if the company pays for one of them. As has been stated better elsewhere, these tablets are pretty much the same.

So the nature of the device is not the major factor, not a point of lock-in, or even a decision guide. Because of the single-sign-on APIs from cloud and social media providers, you can now go from tablet to tablet and find your cloud of choice — be it Google, Apple, Microsoft, Facebook, Yahoo, or Amazon. You know how you can rent a bicycle in many cities now, just ride it and drop it off? Same for everyone. This is the future of tablet devices too. Quite soon, actually. Rent it, log in, use it, move on.

Perhaps enterprises should just lease these things?

Enterprises must still choose

Which tablets then will connect back best to the enterprise? Will the business private cloud services be as easily assimilated as the public cloud ones? What of containerization support, isolation and security features, and/or apps receiver technology flavors? Apple’s iOS 7 goes a long way to help enterprises run their own identity and access management (IAM), isolate apps, and run a virtual private connection. Windows 8 has done this all along. Google and Amazon are happy to deliver cloud services just as well. Those are the three or four flavors.

After using the Lenovo ThinkPad Tablet 2 running Windows 8, it astounds me that Microsoft lost this market and has to claw back from such low penetration in the mobile market. This should have been theirs by any reckoning. Years ago.

Now it’s too late for the device and client platform alone to dictate the market direction. It’s now a function of how the business cloud services can best co-exist with a personal device instance. Because this coexistence will be a must-have capability, it doesn’t really matter what the device is. Any of the top three or four will do.

The ability of the device to best nurture the business and the end-users -- both separate while equal in the same hardware -- that’s the ticket. The rest is standard feature check-offs.


Wednesday, November 13, 2013

Cardlytics on HP Vertica powers millions of swiftly tailored marketing offers to bank card consumers

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

The next edition of the HP Discover Podcast Series delivers an innovation case study interview that highlights how data-intensive credit- and debit-card marketing services provider, Cardlytics, delivers millions of highly tailored marketing offers to banking consumers across the United States.

Cardlytics, in adopting a new analytics platform, gained huge data analysis capacity, vastly reduced query times, and swiftly met customer demands at massive scale.

To learn how, we sat down with Craig Snodgrass, Senior Vice President for Analytics and Product at Cardlytics Inc., based in Atlanta. The discussion, which took place at the recent HP Vertica Big Data Conference in Boston, is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: At some point, you must have had a data infrastructure or legacy setup that wasn't meeting your requirements. Tell us a little bit about the journey that you've been on gaining better analytic results for your business.

Snodgrass: As with any other company, our data was growing and growing and growing. Also growing at the same time was the number of advertisers that we were working with. Since our advertisers spanned multiple categories -- they range from automotive, to retail, to restaurants, to quick-serve -- the types of questions they were asking were different.

So we had this intersection of more data and different questions happening at a vertical level. Using our existing platform, we just couldn't answer those questions in a timely manner, and we couldn't iterate around being able to give our advertisers even more insights, because it was just taking too long.

First, we weren’t able to even get answers. Then, when there was the back-and-forth of wanting to understand more or get more insight, it just ended up taking longer and longer. So at the end of the day, it came down to multiple and unstructured questions, and we just couldn't get our old systems to respond fast enough.

Gardner: Who are your customers, and what do you do for them?

Growing the business

Snodgrass: Our customers are essentially anybody who wants to grow their business. That's probably a common answer, but they are advertisers. They're folks who are used to traditional media, where, when they do a TV or radio ad, they're hitting everybody: people who were going to come to their store anyway and people who probably weren’t going to come to their store.

We're able to target who they want to bring into their store through looking at both debit-card and credit-card purchase data, all in an anonymized manner. We’re able to look at past spending behavior, and say, based on those spending behaviors, that these are the types of customers that are most likely to come to your store and more importantly, most likely to be a long-term customer for you.

We can target those, we can deliver the advertising in the form of a reward, meaning the customer actually gets something for the advertising experience. We deliver that through their bank.

The bank is able to do this for their customers as well. The reward comes from the bank, and the advertiser gets a new channel to go bring in business. Then, we can track for them over time what their return on ad-spend is. That’s not an advantage they’ve had before with the traditional advertising they’ve been doing.
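That measurement loop is simple to state. Here's an illustrative sketch of the return-on-ad-spend arithmetic; the function name and all figures are hypothetical, not Cardlytics' actual model.

```python
# Hypothetical sketch: computing return on ad-spend (ROAS) the way a
# card-linked offer platform might, from transactions attributed to a
# campaign. All names and numbers are invented for illustration.

def roas(attributed_revenue, ad_spend):
    """Return on ad-spend: campaign-driven revenue per dollar spent."""
    if ad_spend <= 0:
        raise ValueError("ad_spend must be positive")
    return attributed_revenue / ad_spend

# Spending by rewarded cardholders after the offer ran (invented data).
attributed = [42.10, 18.75, 63.00, 29.99, 55.40]
spend = 50.00  # what the advertiser paid for the campaign

print(f"ROAS: {roas(sum(attributed), spend):.2f}x")  # prints ROAS: 4.18x
```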

Gardner: So it sounds like a win, win, win. As a consumer, I'm going to get offers that are something more than blanket marketing; it's going to be something targeted to me. The bank that’s providing the credit card is going to get loyalty by having a rewards effort that works. Then, of course, those people selling goods and services have a new way of reaching and marketing those goods and services in a way they can measure.

Snodgrass: Yeah, and back to this idea of the multiple verticals. It works inside of retail, just as well as restaurants, subscriptions, and the other categories that are out there as well. So it's not just a one-category type reward.

A customer will know quickly when something is not relevant. If you bring in a customer for whom it may not be relevant or they weren’t the right customer, they're not going to return.
The advertiser isn't going to get their return on ad-spend. So it's actually in both our interests to make sure we choose the right customers, because we want to get that return on ad-spend for the advertisers as well.

Gardner: Craig, what sort of volume of data are we talking about here?

Intersecting growth

Snodgrass: We're doing roughly 10 terabytes a year. From a volume standpoint, it's a combination of not just the number of transactions we're bringing in, but the number of requests, queries, and answers that we’re having to go against it. That intersection of growth in volume and growth in questions is happening at the same time.

For us right now, our data is structured. I know a lot of companies are working on the unstructured piece. We're in a world where, in payment systems and banking systems, the data is relatively structured, and that's what we get, which is great. Our questions are unstructured. They're everything from corporate real estate types of questions, to loyalty, to just random questions that they've never known before.

One key thing that we can do for advertisers is, at a minimum, answer two large questions. What is my market share in an area? Typically, advertisers only know when customers come into their store with that transaction. They don't know where that customer goes and, obviously, they don't know when people don’t come into their store.

We have that full 360-degree view of what happens at the customer level, so we can answer, for a geographic area or whatever area that an advertiser wants, what is their market share and how is their market share trending week-to-week.

The other piece is that when we do targeting, there could be somebody that visits a location three times over a certain time period. You don't know if they're somebody who shops the category 30 times or if they only shop them three times. We can actually answer share-of-wallet for a customer, and you can use that in targeting, designing your campaigns, and more importantly, in analysis. What's going on with these customers?
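To make those two measures concrete, here's a rough sketch of how market share and share-of-wallet could be computed over anonymized transaction rows. The field names and data are invented for illustration; they're not Cardlytics' schema.

```python
# Illustrative market-share and share-of-wallet calculations over
# anonymized (customer, merchant, amount) rows in one category and area.

from collections import defaultdict

txns = [
    ("c1", "acme_burgers", 12.0), ("c1", "rival_grill", 9.0),
    ("c2", "acme_burgers", 15.0), ("c2", "acme_burgers", 11.0),
    ("c3", "rival_grill", 20.0),  ("c3", "rival_grill", 8.0),
]

def market_share(txns, merchant):
    """Merchant's share of total category spend in the area."""
    total = sum(amount for _, _, amount in txns)
    ours = sum(amount for _, m, amount in txns if m == merchant)
    return ours / total

def share_of_wallet(txns, merchant):
    """Per customer, the fraction of category spend going to the merchant."""
    per_cust = defaultdict(lambda: [0.0, 0.0])  # [merchant spend, total]
    for cust, m, amount in txns:
        per_cust[cust][1] += amount
        if m == merchant:
            per_cust[cust][0] += amount
    return {c: ours / total for c, (ours, total) in per_cust.items()}

print(f"market share: {market_share(txns, 'acme_burgers'):.1%}")  # 50.7%
print(share_of_wallet(txns, "acme_burgers"))  # c1 ~0.57, c2 1.0, c3 0.0
```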

Gardner: So the better job you do, the more queries will be generated.

Snodgrass: It's a self-fulfilling prophecy. For us, with Vertica, one of the key components isn't just the speed, but how quickly we can scale if the number of queries goes up. It's relatively easy to predict what our growth in data volume is going to be. It is not easy for me to predict what the growth in queries is going to be. Again, as advertisers understand what types of questions we can answer, it's unfortunately a ratio of 10 to 1. Once they understand something, there are 10 other questions that come out of it.

We can quickly add nodes and scalability to manage the increase in volumes of queries, and it's cheap. This is not expensive hardware that you have to put in. That is one of the main decision points we had. Most people understand HP Vertica on the speed piece, but that and the quick scalability of the infrastructure were critical for us.

Gardner: Just as your marketing customers want to be able to predict their spend and the return on investment (ROI) from it, do you sense that you can predict and appreciate, when you scale with HP Vertica what your costs will be? Is there a big question mark or do you have a sense of, I do this and I have to pay that?

Snodgrass: It is the "I do this and I'll have to pay that," the linearness. For those who understand Vertica, that’s a bit of a pun, but the linear relationship is that if we need to scale, all we need to do is this. It's very easy to forecast. I may not know the date for when I need to add something, but I definitely know what the cost will be when we need to add it.
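For illustration, that linear forecast reduces to a couple of lines of arithmetic. The per-node throughput and cost figures below are invented, not Cardlytics' or HP's numbers.

```python
# A minimal sketch of linear capacity planning: if each node adds roughly
# fixed query throughput at roughly fixed cost, spend is easy to forecast.

QUERIES_PER_NODE_PER_HOUR = 5_000  # assumed per-node throughput
COST_PER_NODE = 8_000              # assumed annual cost per node, USD

def nodes_needed(peak_queries_per_hour):
    # Ceiling division: you can't buy a fraction of a node.
    return -(-peak_queries_per_hour // QUERIES_PER_NODE_PER_HOUR)

for demand in (12_000, 40_000, 95_000):
    n = nodes_needed(demand)
    print(f"{demand:>6} queries/hour -> {n} nodes -> ${n * COST_PER_NODE:,}/yr")
```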

Compare and contrast

Gardner: How do you measure, in addition to that predictability of cost, your benefits? Are there any speeds and feeds that you can share that compare and contrast and might help us better understand how well this works?

Snodgrass: There are two numbers. During the POC phase, we had a set of 10 to 15 different queries that we used as a baseline. We saw anywhere from 500x to 1,000x or 1,500x speedups in getting that data back. So that’s the first bullet point.

The second is that there were queries that we just couldn't get to finish. At some point, when you let it go long enough, you just don't know if it is going to converge. With Vertica, we haven't hit that limit yet.
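A POC baseline like that might be harnessed along the following lines. This is a hedged sketch: the query set is a placeholder, and an in-memory sqlite3 database merely stands in for whichever warehouse is under test.

```python
# Sketch of a POC timing harness: run each baseline query, record
# wall-clock time, and give up on queries that blow a time budget.

import sqlite3
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

QUERIES = {
    "daily_spend": "SELECT day, SUM(amount) FROM txns GROUP BY day",
    "top_merchants": "SELECT merchant, COUNT(*) FROM txns GROUP BY merchant",
}
BUDGET_SECONDS = 60.0

def run_query(conn, sql):
    start = time.perf_counter()
    conn.execute(sql).fetchall()
    return time.perf_counter() - start

# check_same_thread=False lets the worker thread use this connection.
conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE txns (day TEXT, merchant TEXT, amount REAL)")

with ThreadPoolExecutor(max_workers=1) as pool:
    for name, sql in QUERIES.items():
        future = pool.submit(run_query, conn, sql)
        try:
            print(f"{name}: {future.result(timeout=BUDGET_SECONDS):.3f}s")
        except TimeoutError:
            print(f"{name}: did not finish within {BUDGET_SECONDS}s")
```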

Vertica has also allowed us to have varying degrees of analyst capability when it comes to SQL writing. Some are elegant and write fantastic, very efficient queries. Others are still learning the best way to put queries together. Their queries will still always return with Vertica. In the legacy world prior to Vertica, those are the ones that just wouldn't return.

I don’t know the exact number for how much more productive they are, but the fact that their queries are always returning, and returning in a timely manner, obviously has dramatically increased their productivity. So it's a hard one to measure, but forget how fast the queries return: the productivity of our analysts has gone up dramatically.

Gardner: What could an analytics platform do better for you? What would you like to see coming down the pipeline in terms of features, function, and performance?

Snodgrass: If you can do something in SQL, Vertica is fantastic. We'd like more integration with R, more integration with SAS, more integration with these sophisticated tools. If you get all the data into their systems, maybe they can manipulate it in a certain way, but then you are managing two systems.

Vertica is working on somewhat better integration with R through Distributed R, but there's also SAS as well. In a SAS shop, there are a lot of things that you're going to do in SAS that you are not going to do in SQL. That next level of analytics integration is where we would love to see the product go.

Gardner: Do you expect that there will be different types of data and information that you could bring to bear on this? Perhaps some sort of camera, sensor of some sort, point-of-sale information, or mobile and geospatial information that could be brought to bear? How important is it for you to have a platform that can accommodate seemingly almost any number of different information types and formats?

Snodgrass: The best way to answer that one is that we don't ever want to tell business development that the reason they can't pursue a path is because we don't have a platform that can support that.

Different paths

Today, I don't know where the future holds from these different paths, but there are so many different paths we can go down. It's not just the Vertica component, but the HP HAVEn components and the fact that they can integrate with a lot of the unstructured, I think they call it “the human data versus the machine data.”

It's having the human data pathway open to us. We don't want to be the limiting factor for why somebody would want to do something. That's another bullet point for HP Vertica in our camp. If a business model comes out, we can support it.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Wednesday, November 6, 2013

Efficient big data capabilities help Cerner drive needed improvements into healthcare outcomes

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

The next edition of the HP Discover Podcast Series delves into how a healthcare solutions provider leverages big-data capabilities. We’ll see how Cerner has deployed the HP Vertica Analytics platform to help their customers better understand healthcare trends, as well as to help them better run their own systems.

To learn more about how high-performing and cost-effective big data processing forms a foundational element to improving healthcare quality and efficiency, join Dan Woicke, Director of Enterprise Systems Management at Cerner Corp. based in Kansas City, Missouri.

The discussion, which took place at the recent HP Vertica Big Data Conference in Boston, is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: We're going through some major transitions in how healthcare payments are going to be made -- and how good care is defined. We're moving from pay for procedures to more pay for outcomes. So tell me about Cerner, and why big data is such a big deal.

Woicke: The key element here is that the payment structure is changing to more of an outcome model. In order for that to happen, we need to get all the sources of data from many, many disparate systems, bring them in, and let our analysts work on what the right trends are and predict quality outcomes, so that you can repeat those and stay profitable in the new system.

My direct responsibility is to bring in massive amounts of performance data. This is how our Cerner Millennium systems are running.
We have hundreds of clients, both in the data center and those that manage their own systems with their own database administrators (DBAs). The challenge is just to have a huge system like that running with tens of thousands of clinicians on the system.

We need to make sure that we have the right data in place in order to measure how systems are running and then be able to predict how those systems will run in the future. If things are happening that might be going negative, how can we take the massive amounts of data that are coming into our new analytical platform, correlate those parameters, predict what’s going to happen, and then take action before there is a negative?

Effect change

We want to be able to predict what’s happening, so that we can effect change before there is a negative impact on the system.

Gardner: How does big data and the ability to manage big data get you closer to the real-time and then, ultimately, proactive results your clients need?

Woicke: Since January we've begun to bring in what we call Response Time Measurement System (RTMS) records. For example, when a doctor or a nurse in our electronic medical record (EMR) system is signing an order, I can tell you how long it took to log into the system. I can tell you how long you were in the charting module.

All those transactions produce 10 billion timers, per month, across all of our clients. We bring those all into our HP Vertica Data Warehouse. Right now, it’s about a two-hour response time, but my goal, within the next 12 months, is to get it down to 10 minutes.

I can see in real time when trends are happening, either positive or negative, and be able to take action before there is an issue.
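A much-simplified sketch of that kind of trend watch follows; the timer values and thresholds are invented, and a real RTMS pipeline would be far richer than this toy.

```python
# Toy trend detector: compare the newest response-time samples against
# a longer baseline and flag drift before users feel it.

from collections import deque
from statistics import mean

class TimerTrend:
    def __init__(self, baseline_size=1000, recent_size=50, ratio=1.5):
        self.baseline = deque(maxlen=baseline_size)  # long-run history
        self.recent = deque(maxlen=recent_size)      # the newest samples
        self.ratio = ratio                           # alert multiplier

    def observe(self, millis):
        self.baseline.append(millis)
        self.recent.append(millis)

    def degrading(self):
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough data yet
        return mean(self.recent) > self.ratio * mean(self.baseline)

watch = TimerTrend()
for ms in [120] * 1000:  # healthy baseline: logins around 120 ms
    watch.observe(ms)
for ms in [450] * 50:    # sudden slowdown
    watch.observe(ms)
print("degrading:", watch.degrading())  # True
```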

Gardner: Tell us more about Cerner -- what you do in IT.

Woicke: We run the largest EMR in the world. We have well over 400 systems to manage -- we call them domains -- and we can hook up multiple facilities to those domains. Once we have multiple facilities connecting into those domains, at any given time there are tens of thousands of clinicians on the system.

We have two data centers in Kansas City, Missouri, and we host more than half of our clients in those data centers. The trend is moving toward being remote-hosted and managed like that. We still have a couple of hundred clients that are managing their own Millennium domains. As I said before, we need to make sure that we provide the same quality of service to both those sets of clients.

Single database

Cerner Millennium is a suite of products or solutions. Millennium is a platform where the EMR is placed into a single database. Then, we have about 55 different solutions that go on top of that platform, starting with ambulatory solutions. This year was really neat. We were able to launch our first ambulatory iPad application.

There are about 55 different solutions, and it's growing all the time with surgery and lab that fit into the Cerner Millennium system. So we do have a cohesive set of data all within one database, which makes us unique.

Gardner: Where does the data come from primarily, and how much data we are talking about?

Woicke: We're talking about quite a bit of data, and that’s why we had to transform away from a traditional OLTP database into an MPP-type database, because of all the systems that are now sending data to Cerner.

We have claims data and HL7 messages. We're going to get all our continuous care records from Millennium. We have other EMRs. So that’s pretty much the first time that we're bringing in other EMR records.

You’ll have that claim data that comes in from multiple sources, multiple EMRs, but the whole goal of population health is to get a population to manage their own health. That means that we need to give them the tools in their hands. And they need to be accurate, so that they can make the right decisions in the future. What that's going to do is bring the total cost of your healthcare down, which is really the goal.

We have health-plan enrollments, and then of course, within Millennium, we're going to drill down into outcomes, re-admissions, diagnosis, and allergies. That’s the data that we need to be able to predict what kind of care we are going to have in the future.

Gardner: So it seems to me that we talk about "Internet of things." We're also going to the "Internet of people." More information from them about their health comes back and benefits you and benefits the healthcare providers. But ultimately, they can also provide great insights to the patients themselves.

Do you see, in the not too distant future, applications where certain data -- well-protected and governed of course -- is made into services and insights that allow for a better proactive approach to health?

Proactive approach

Woicke: Without a doubt. We're actually endorsing this internally within the company by launching our own weight-loss challenges, where we're taking our medical records and putting them on the web, so that we have access to them from home.

I can go on the site right now and manage my own health. I can track the number of steps I'm doing. Those are the types of tools that we need to launch to the population, so that they endorse that good behavior, which will ultimately change their quality of life.

Right now, we're in production with the operations side that we talked about a little bit earlier. Then, we're in production with what we call Health Facts, a huge set of blinded data. We hire a team of analysts and scientists to go through this data and look for trends.

It’s something we haven’t been able to do until recently, until we got HP Vertica. I am going to give you a good example. We had analysts write a SQL query to do an exploratory type of analysis on the data. They would issue it at 5 p.m. and hope that, by the time they came back at 8 a.m. the next day, the query would be done.

In Vertica, we've timed those queries at between two and five seconds. So you can see what that’s going to do for the speed of the amount of analysis we could do on the same amount of data. It’s game changing.

There were a lot of competitors that would have worked out, but we had a set of criteria that we drilled down on. We were trying to make it as scientific as possible and very, very thorough. So we built a score sheet, and each of us from the operation side and Health Facts side graded and weighted each of those categories that we were going to judge during the proof of concept (POC). We ended up doing six POCs.
We got down to two, and it was a hard choice. But with the throughput that we got from Vertica, their performance, and the number of simultaneous users on the system at a given period of time, it was the right choice for us.

Gardner: And because we're talking about healthcare, costs are super important. Was there a return on investment (ROI) or cost benefit involved as well?

Extremely competitive

Woicke: Absolutely. You could imagine that this would be the one or two top categories weighted on our score sheet, but certainly HP Vertica is extremely competitive, compared to some of the others that we looked at.

Gardner: Dan, looking to the future, what do you expect your requirements to be, say, two years from now? Is there a trajectory that you need to take as an organization, and how does that compare to where you see Vertica going?

Woicke: Having Vertica as a partner, we navigate that together. They invited me here to Boston to sit on the user board. It was really neat to sit right there with [HP Vertica General Manager] Colin Mahony at the same table and be able to say, "This is what we need. These are our needs coming around the corner," and have him listen and be able to take action on that. That was pretty impressive.

To answer your question though, it’s more and more data. I was describing the operations side, where we bring in 10 billion RTMS records. There are going to be another 10 billion records coming in from other sources: CPU, memory, disk I/O. Everything can be measured.

We want to bring it into Vertica, because I'm going to be able to do some correlation against something we were talking about. If I know that the RTMS records show that a performance problem is going to happen within the next 10-15 minutes, I can figure out which one of those operational parameters is most affecting that performance, and then send an analyst directly in to mitigate that problem.
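As a toy illustration of that correlation step, take aligned samples of an RTMS timer and two operational counters and ask which one tracks the slowdown; the data is invented, and a production system would use far richer models.

```python
# Which operational counter moves with the response-time trend?
# statistics.correlation (Pearson's r) requires Python 3.10+.

from statistics import correlation

rtms_ms = [110, 115, 140, 180, 260, 340]  # response time trending up
cpu_pct = [35, 38, 55, 70, 88, 95]        # CPU climbing with it
disk_io = [200, 210, 190, 205, 198, 207]  # flat, likely unrelated

for name, series in [("cpu", cpu_pct), ("disk_io", disk_io)]:
    print(f"{name}: r = {correlation(rtms_ms, series):+.2f}")
# CPU correlates strongly; disk I/O does not. Send the analyst to CPU first.
```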

On the EMR side, it’s more data as well. On the operations side, we're going to apply this to other enterprises to bring in more data to connect to the experts. So there is always somebody out there. That’s the expert. What we're going to do is connect the provider with the payers and the patient to complete that triangle in population health. That’s where we're going in the next few months.

Gardner: I certainly think that managing data effectively is a huge component of our healthcare challenge here in the United States, and of course, you're operating in about 19 countries. So this is something that will be a benefit to almost any market where efficiency, productivity, and quality of care come to bear.

Woicke: At Cerner Corp., we're really big on transparency. We have a system right now called the Lights On Network, where we are taking these parameters and bringing them into a website. We show everything to the client, how they're performing and how the system is doing. By bringing in more and more data and being able to correlate it, we're going to show all the clients, as well as the providers, how their system is doing.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Monday, November 4, 2013

Different paths to cloud and SaaS enablement yield similar major benefits for Press Ganey and Planview

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.

The next VMworld innovator panel discussion focuses on how two companies are using aggressive cloud-computing strategies to deliver applications better to their end users.

We'll hear how healthcare patient-experience improvement provider Press Ganey and project and portfolio management provider Planview are both exploiting cloud efficiencies and agility. Their paths to the efficiency of cloud have been different, but the outcomes speak volumes for how cloud transforms businesses.

To understand how, we sat down with Greg Ericson, Senior Vice President and Chief Innovation Officer at Press Ganey Associates in South Bend, Indiana, and Patrick Tickle, Executive Vice President of Products at Planview Inc. in Austin, Texas.

The discussion, which took place at the recent 2013 VMworld Conference in San Francisco, is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: We heard a lot about cloud computing at VMworld, and you're both going at it a little differently. Greg, tell us a bit about the type of cloud approach you’re taking at Press Ganey.

Ericson: Press Ganey is the leader in patient-experience analytics. We focus on providing deep insight into the patient experience in healthcare settings. We have more than 10,000 customers within the healthcare environment that look to us and partner with us around patient-experience improvement within the healthcare setting.

We started this cloud journey in July of 2012, and we set out to achieve multiple goals. Number one, we wanted to position Press Ganey's next generation of software-as-a-service products and have a platform that was able to support them.

We went through a journey of consolidating multiple data centers. We consolidated 14 different storage arrays in our process and, most importantly, we were able to position our analytic solutions to be able to take on exponentially more data and provide that to our clients.

Gardner: Patrick, how has cloud helped you at Planview? You were, at one time, a fully a non-cloud organization. Tell us about your journey.

Tickle: Planview has been an enterprise software vendor, a classic best-of-breed focused enterprise software vendor, in this project and portfolio and resource management space for over 20 years.

We have a big global customer base of on-premise customers that has built up over the last 23 years. Obviously, in the world of software these days, there's a fairly seismic shift toward software as a service (SaaS): how you get to the cloud, the business models, and all those kinds of things.

Conventional wisdom, for a lot of people, was that you can't get there unless you start from scratch. Obviously, because this is the only thing we do, it was pretty imperative that we figure out a way to get there.

So two or three years ago, we started trying to make the transition. There were a lot of things we had to go through, not just from an infrastructure standpoint, but from a business model and delivery standpoint, etc.

The essence was this: we didn’t have time to rewrite a code base in which we've invested 10-plus years and hundreds of thousands of hours of customer experience to be a market-leading product in our space. It could take five years to rewrite it. Compared to where we were 10 years ago, when you and I first met, there are a lot more tools in the bag for people to get to the cloud than there were then.

So we really went after VMware and did the research sweep much more aggressively. We started out with our own kind of infrastructure that we bolted together and moved to a FlexPod in our second generation.

We have vCloud Hybrid Services now, and leveraging our existing code base, and then the whole suite of VMware products and services, we have transformed the company into a cloud provider. Today, 90 percent of all our new Planview customers are SaaS customers. It's been a big transition for us, but the technology from VMware has been right in the center of making it happen.

Business challenges

Gardner: Greg, tell us a little bit about some of the business challenges that are driving your IT requirements that, in turn, make the cloud model attractive. Is this a growth issue? Is this a complexity issue? What are your business imperatives that make your IT requirements?

Ericson: That’s a great question. Press Ganey is a 25-year-old organization. We pioneered the concept of patient experience and the analytics, and insight into the patient experience, within the healthcare setting. We have an organization that's steeped in history, and so there are multiple things that we're looking at.

Number one, we have one of the largest protected health information (PHI) databases in the United States. So we felt that we had to have a very secure and robust solution to provide to our clients, because they trust us with their data.

Number two, with the healthcare reform, the focus on patient experience is somewhat mandatory, whereas before, it was somewhat voluntary. Now, it's regulated or it's part of the healthcare reform. When you look at organizations, some were actually coming to us and saying, "We want to get however many patient surveys out that we need to satisfy our threshold."

Our philosophy is why would you want to do that? We believe that if you can understand and leverage the different media to be able to fill that out, you can survey your entire population of patients that are coming into not only your institution but, in the accountable care organization, the entire ecosystem that you’re serving. That gives you tremendous insight into what's going on with those patients.

Our scientists are also finding a correlation between the patient experience results and clinical and quality outcomes. So, as we can tie those data sets together in those episodic events, we're finding very interesting kinds of new thought, leading thought, out there for our clients to look at.

So for us, going from minimally surveying your population to doing census survey, which is your entire population, represents an exponential growth. The last thing is that, for our future, in terms of going after some of those new analytics, some of the new insight that we want to provide our clients, we want to position the technology to be able to take us there.

We believe that the VMware vCloud Suite represents a completeness of vision. It represents a complete single pane of glass for managing the enterprise and, longer-term, as we become more sophisticated in identifying our data and as the industry matures, we think that a public cloud, a hybrid cloud, is in the future for us, and we're preparing for that.

Gardner: And this must be a challenge for you, not only in terms of supporting the applications, but also those data sets. You're getting some larger data sets and they could be distributed. So the cloud model suits your data needs over time as well?

Deeper insights

Ericson: Absolutely. It gives us the opportunity to be able to apply technology in the most cost-value proposition for the solutions that we’re serving up for our customers.

Our current environment is around 600 server instances. We have about 300 terabytes (TB) running in 20 SaaS applications, and we're growing exponentially each month, as we continue to provide that deeper insight for our customers.

Gardner: Patrick, for your organization what are some of the business drivers that then translate into IT requirements?

Tickle: From an IT perspective, it changed the culture of the company, moving from an on-premise, perpetual kind of "ship the software and have a customer care organization that focuses on bug and break-fix" model to a service-delivery model. There were a lot of things that rippled through that whole thing.

At the end of the day, we had to move from an IT culture to an operations culture and all the things that go along with that: performance and up-time. Our customer base is global, so being able to provide that around the globe mattered too. All those things were pretty significant shifts from an IT perspective.

We went from a company that had a corporate IT group to a company that has a hosting and DevOps and Ops team that has a little bit of spend in corporate IT.

Out of the gate, the first step at Planview was moving to colo. SunGard has been a great partner for us over the last couple of years as our ping, power, and pipe. Then, in our first generation, we bolted together some of our storage and compute infrastructure because it wasn’t quite all the way there. Then, in our most recent incarnation of the infrastructure, we’re using FlexPods at SunGard in Austin, Texas and London.

OPEX spend

We're always having to evaluate future footprints. But ultimately, like many companies, we would like to convert that infrastructure investment from a capital spend into an OPEX spend. And that’s what’s compelling with vCloud Hybrid Service.

What we've been excited about hearing from VMware is not just providing the performance and the scalability, but the compatibility and the economic model that says we’re building this for people who want to just move virtual machines (VMs). We understand how big the opportunity is, and that’s going to open up more of a public cloud opportunity for us to evaluate for a wide variety of use cases going forward.
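The CAPEX-to-OPEX trade-off Tickle mentions is, at bottom, simple arithmetic. Here's a back-of-the-envelope sketch; every figure is invented, and the point is the shape of the comparison, not Planview's actual costs.

```python
# Invented numbers: compare owned hardware amortized monthly against a
# hypothetical hybrid-cloud subscription.

CAPEX = 400_000          # upfront hardware purchase, USD
LIFETIME_YEARS = 4       # depreciation horizon
OPEX_PER_MONTH = 11_000  # hypothetical subscription cost

capex_monthly = CAPEX / (LIFETIME_YEARS * 12)
print(f"owned hardware: ${capex_monthly:,.0f}/month, paid upfront, fixed size")
print(f"hybrid cloud:   ${OPEX_PER_MONTH:,}/month, scales up and down")
```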

Gardner: How big a deal is it when we can, with just a click of a mouse, move workloads to any support environment we want?

Tickle: It's a huge deal. Whether it’s a production environment or a disaster recovery (DR) environment, at the end of the day it's a big deal for both of us. For a SaaS company, the only thing that matters is renewals. It’s happy customers that renew. The transition from perpetual-plus-maintenance to a renewal model puts you on the customer-service watch at another level, every minute of every day.

Everything that we can do to make the customer experience as compelling as possible -- not just our UI and our software, but obviously the delivery of the service -- allows us to run our business. That can mean a disaster scenario or just great performance across the geography where we have customers, and doing that in a cost-effective way that operates inside our business model, our profit and loss.

So our shareholders are equally pleased with their return. We can't afford to have half of the company’s OPEX go into IT while we’re trying to make customers as successful as they can possibly be. We continue to be encouraged that we’re on a great path with the stack that we're seeing to get there.

Gardner: I think it's fair to say that cloud is not just repaving old cow paths, that cloud is really transforming your entire business. Do you agree, Greg?

Rejuvenate legacy

Ericson: I agree. It allows us, especially an organization that’s 25 years steeped in history, to be able to rejuvenate our legacy applications and be able to deliver those with maximum speed, maximizing our resources, and delivering them in a secure environment. But it also allows us to be able to grow, to flex, and to be able to rejuvenate and organically transform the organization. It's pretty exciting for us and it adds a lot of value to our clients indirectly.

Gardner: Greg, what are some of the more measurable payoffs when you go to cloud? Are these soft payoffs of productivity and automation, or are there hard numbers about return on investment (ROI) or moving more to an operating cost versus capital cost? What do you get when you do cloud right?

Ericson: We justify the investment based on consolidation of our data centers, consolidation and retirement of our storage arrays, and so on. That’s from a hard-savings perspective. From a soft-savings perspective, clearly in an environment that was not virtualized, virtualizing the environment represented a significant cost avoidance.

Longer-term, we're looking at how to position the organization with a robust, virtual secured infrastructure that runs with a minimum amount of technical resources, so that we can focus most of our efforts on delivering innovative applications to our clients.

The biggest opportunity for us is to focus there. As you look at the size of the data set and the growth of those data sets, positioning infrastructure to be able to stay with you is exciting for us and it’s a value proposition for our clients.

Entire environment

With a minimum amount of staff, we were able to move in nine months and virtualize our entire environment. When you talk about 600 servers and 300 TB of data, that's a pretty sizable enterprise and we're fully leveraging the vCloud Suite.

Our network is virtualized, our storage is virtualized, and our servers are virtualized. The release of vCloud Suite 5.5 and some of the additional network functionality and storage functionality that’s coming out with that is rather exciting. I think it's going to continue to add more value to our proposition.

Gardner: Some people say that a single point of management, when you have that comprehensive suite approach, comes in pretty handy, too.

Ericson: It does, because it gives you the capability of managing through a single pane of glass across your environments. I would also point out that we’re about 50 percent complete in building out our catalog.

For our next steps, number one is that we’re looking at building upon the excellence of Press Ganey and building our next-generation enterprise data warehouse. We’re looking at leveraging from a DevOps perspective the VMware vCloud Suite, and we already have some pilots that are up and running. We'll continue to build that out.

As we deploy, not only are we maximizing our assets in delivering a secure environment for our clients, but we're also really working toward what I call engineering to zero. We’re completely automating and virtualizing those deployments and we're able to move those deployments, as we go from dev to test, and test to user acceptance testing, and then into a production environment.

Tickle: As we all know, there are a lot of hypervisors out there. We can all get that technology from a wide variety of sources. But to your question about the value of the stack, that’s what we look at. Again, what's important now is not just the product stack, but the services stack.

We look at a company like VMware and say, "Site Recovery Manager in conjunction with vCloud Hybrid Services brings a DR solution to me as SaaS vendor and that fits with my architecture and brings that service stack plus."

There's no comparing another hypervisor vendor's ability to build out that stack of services. Again, we could talk about numerous examples, but that’s what I see when I listen to the things that go on at the event and get to spend time with the people at VMware. That whole value stack that VMware is investing in looks so much more compelling than just picking pieces of technology.

Gardner: Looking to the future, Greg, based on what you've heard at VMworld about the general availability of vCloud Hybrid Services and the upgrade to the suite of private cloud support, what has you most excited? Was there something that surprised you? What is in the future road map for you?

A step further

Ericson: A couple of different things. The next release of NSX is exciting for us. It allows us to be able to take the virtualization of our network a step further. Also to be able to connect hypervisors into a hybrid-cloud situation is something that, as we evolve our maturity in terms of managing our data, is going to be exciting for us.
One of the areas that we're still teasing out and want to explore is how to tie in that accelerator for a big-data application into that. Probably, in 2014, what we're looking at is how to take this environment and really move from a DR kind of environment to a high-availability environment. I believe that we’re architected for that and because of the virtualization we can do that with a minimum amount of investment.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.


Wednesday, October 30, 2013

Learn how Visible Measures tracks an expanding universe of video and viewer use big data

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

The next edition of the HP Discover Performance Podcast Series examines how video advertising solutions provider Visible Measures delivers impactful metrics on video use and patterns.

Visible Measures measures, via a massive analytics capability, an ocean of video at some of the highest scales I've ever heard of. By creating very deep census data of everything that's happened in the video space, Visible Measures uses unique statistical processes to figure out exactly what patterns emerge within video usage at high speed and massive scale and granularity.
 
To learn more about how Visible Measures measures, please welcome Chris Meisl, Chief Technology Officer at Visible Measures Corp., based in Boston.

The discussion, which took place at the recent HP Vertica Big Data Conference in Boston, is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Tell us a little bit about video metrics. It seems that this is pretty straightforward, isn't it? You just measure the number of downloads and you know how many people are watching a video -- or is there more to it?

Meisl: You'd think it would be that straightforward. Video is probably the fastest-growing component of the Internet right now. Video consumption is accelerating unbelievably. When you measure a video, you're looking not only at whether someone viewed it, but at how far into the video they got. Did they rewind it, stop it, or replay certain parts? What happened at the end? Did they share it?

There are all kinds of events that can happen around a video. It's not like in the display advertising business, where you have an impression and you have a click. With video, you have all kinds of interactions that happen.

You can really measure engagement in terms of how much people have actually watched the video, and how they've interacted with a video while it's playing.
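As a hypothetical illustration of that event-level view, a player might emit events like the ones below, from which crude engagement measures fall out. The event names and scoring are invented, not Visible Measures' actual schema.

```python
# Invented player events for one 60-second view session.
events = [
    {"type": "play",     "position": 0},
    {"type": "progress", "position": 30},
    {"type": "rewind",   "position": 30, "to": 10},
    {"type": "progress", "position": 60},
    {"type": "share",    "position": 60},
]

def completion(events, duration_s=60):
    """Furthest playhead position reached, as a fraction of the video."""
    return max(e["position"] for e in events) / duration_s

def interactions(events):
    """Count engagement events beyond passive watching."""
    return sum(1 for e in events if e["type"] in ("rewind", "replay", "share"))

print(f"completion: {completion(events):.0%}, interactions: {interactions(events)}")
```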

Gardner: This is an additional level of insight beyond what happened traditionally with television, where you need a Nielsen box or some other crude, if I could use that term, way of measuring. This is much more granular and precise.

Census based

Meisl: Exactly. The cable industry tried to do this on various occasions with various set-top boxes that would "phone home" with various information. But for the most part, like Nielsen, it's panel-based. On the Internet, you can be more census-based. You can measure every single video, which we do. So we now know about over half a billion videos, and we've measured over three trillion video events.

Because you have this very deep census data of everything that's happened, you can use standard and interesting statistical processes to figure out exactly what's happening in that space, without having to extrapolate from a relatively small panel. You know what everyone is doing.

Gardner: And of course, this extends not only to programming or entertainment level of video, but also to the advertising videos that would be embedded or precede or follow from those. Right?

Meisl: Exactly. Advertising and video are interesting, because it's not just standard television-style advertising. In standard television advertising, there are 30-second spots that are translated into the Internet space as pre-roll, post-roll, mid-roll, or what have you. You're watching the content that you really want to watch, and then you get interrupted by these ads. This is something that we at Visible Measures didn't like very much.

We're promoting this idea of content marketing through video, and content marketing is a very well-established area. We're trying to encourage brands to use those kinds of techniques using the video medium.

That means that brands will tell more extensive stories in maybe three- to five-minute video segments -- that might be episodic -- and we then deliver that across thousands of publishers, measure the engagement, measure the brand-lift, and measure how well those kinds of video-storytelling features really help the brand to build up the trust that they want with their customers in order to get the premium pricing that that brand has over something much more generic.

Gardner: Of course, the key word there was "measures." In order to measure, you have to capture, store, and analyze. Tell us a little bit about the challenges that you faced in doing that at this scale with this level of requirements. It sounds as if even the real-time elements of being able to feed back that information to the ad servers is important, too.

Meisl: Right. The first part that you have to do is have a really comprehensive understanding of what's going on in the video space.

Visible Measures started with measuring all video that’s out there. Everywhere we can, we work with publishers to instrument their video players so that we get signals while people are watching videos on their site.

For the publishers that don't want to allow us to instrument their players, then we can use more traditional Google spidering techniques to capture information on the view count, comment count, and things like that. We do that on a regular basis, a few times a day or at least once a day, and then we can build up metrics on how the video is growing on those sites.
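That spidering path boils down to differencing periodic snapshots. Here's a minimal sketch with made-up counts.

```python
# Periodic view-count snapshots per video, turned into growth rates.
snapshots = {
    # video_id -> [(hour_of_day, view_count), ...] from a few crawls a day
    "vid123": [(0, 10_000), (8, 12_400), (16, 15_100)],
}

for vid, points in snapshots.items():
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        rate = (v1 - v0) / (t1 - t0)
        print(f"{vid}: {rate:,.0f} views/hour between h{t0} and h{t1}")
```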

Massive database

So we ended up building this massive database of video -- and we would provide information, or rather insight, based on that data, to advertisers on how well their campaigns were performing.

Eventually, advertisers started to ask us to just deliver the campaign itself, instead of giving just the insight that they would then have to try to convince various other ad platforms to use in order to get a more effective campaign. So we started to shift a couple of years ago into actual campaign delivery.

Now, we have to do more of a real-time analysis, because as you mentioned, you want to, in real time, figure out the best ways to target the best sites to send that video to, and the best way to tune that campaign in order to get the best performance for the brand.

Gardner: And so faced with these requirements, I assume you did some proofs of concept (POCs). You looked around the marketplace for what’s available and you’ve come up with some infrastructure that is so far meeting your needs.

Meisl: Yes. We started with Hadoop, because we had to build this massive database of video, and we would then aggregate the information in Hadoop and pour that into MySQL.

We quickly got to the point where it would take us so long to load all that information into MySQL that we were just running out of hours in the day. It took us 11 hours to load MySQL. And we couldn’t actually use the sharded MySQL cluster while it was being loaded. So you’d have to have two banks of it.

You only have a 12-hour window. Otherwise, you’ve blown your day. That's when we started looking around for alternate solutions for storing this information and making it available to our customers. We elected to use HP Vertica -- this was about four years ago -- because that same 11-hour load took two hours in Vertica. And we're not going to run out of money buying hard drives, because they compress it. They have impressive compression.

Now, as we move more into the campaign delivery for the brands that we represent, we have to do our measurement in real-time. We use Storm, which is a real-time stream processing platform and that writes to Vertica as the events happen.

So we can ask questions of Vertica as they happen. That allows our ad service, for example, to have much more intelligence about what's going on with campaigns that are in-flight. It allows us to do much more sophisticated fraud detection. There are all kinds of possibilities that you can only do if you have access to the data as soon as it was generated.
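Storm itself is a JVM platform, so what follows is only a Python sketch of the pattern Meisl describes: consume events as they arrive and micro-batch them into the analytics store so queries see them within seconds. The sqlite3 connection merely stands in for the warehouse.

```python
import sqlite3

BATCH = 500  # flush to the store every 500 events

def stream_to_warehouse(event_source, conn):
    buf = []
    for event in event_source:  # an unbounded stream in production
        buf.append((event["video_id"], event["type"], event["ts"]))
        if len(buf) >= BATCH:
            conn.executemany("INSERT INTO video_events VALUES (?, ?, ?)", buf)
            conn.commit()
            buf.clear()
    if buf:  # flush the tail on shutdown
        conn.executemany("INSERT INTO video_events VALUES (?, ?, ?)", buf)
        conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE video_events (video_id TEXT, type TEXT, ts REAL)")
fake_stream = ({"video_id": "v1", "type": "play", "ts": float(i)}
               for i in range(1_200))
stream_to_warehouse(fake_stream, conn)
print(conn.execute("SELECT COUNT(*) FROM video_events").fetchone()[0])  # 1200
```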

Gardner: Clearly if a load takes 11 hours, you're well into the definition of big data. But I'm curious, for you, what constitutes big data? Where does big data begin from medium or non-big data?

Several dimensions

Meisl: There are several dimensions to big data. Obviously, there's the size of it. We process what we receive, maybe half a billion events per day, and we might peak at near a million events a minute. There is quite a bit of lunchtime video viewing in America, but typically in the evening, there is a lot more.

The other aspect of big data is the nature of what's in that data, the unstructured nature, the complexity of it, the unexpectedness of the data. You don't know exactly what you're going to get ahead of time.

For information that’s coming from our instrumented players, we know what that’s going to be, because we wrote the code to make that. But we receive feeds from all kinds of social networks. We know about every video that's ever mentioned on Twitter, videos that are mentioned on Facebook, and other social arenas.

All of that's coming in via all kinds of different formats. It would be very expensive for us to have to fully understand those formats, build schemas for them, and structure it just right.

So we have an open-ended system that goes into Hadoop and can process that in an open-ended way. So to me, big data is really its volume plus the very open-ended, unknown payloads in that data.
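That open-ended, schema-on-read handling can be illustrated in a few lines: keep the raw record, pull out only what a given job needs, and tolerate shapes you've never seen. The feed formats below are invented.

```python
import json

raw_feed = [
    '{"video_id": "v1", "views": 120, "source": "twitter"}',
    '{"id": "v2", "view_count": "98", "extra": {"nested": true}}',
    '{"video_id": "v3"}',  # no count at all: still kept
]

def extract_views(record):
    """Pull a view count from whichever field this feed happens to use."""
    doc = json.loads(record)
    for key in ("views", "view_count"):
        if key in doc:
            return int(doc[key])
    return None  # unknown shape; keep the raw record and answer later

print([extract_views(r) for r in raw_feed])  # [120, 98, None]
```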

Gardner: How do you know you're succeeding here? Clearly, going from 11 hours to two hours is one metric. Are there other metrics of success that you look to -- they could be economic, performance, or concurrent query volumes?

Tell me what you define as a successful analytics platform.

Meisl: At the highest level, it's going to be about revenue and margin. But in order to achieve the revenue and margin goals that we have, obviously we need to have very efficient processes for doing the campaign delivery and the measurement that we do.

As a measurement company, we measure ourselves and watch how long it takes to generate the reports that we need, or for how responsive we are to our customers for any kind of ad-hoc queries that they want or special custom reports that they want.

We're continuously looking at how well we optimize delivery of campaigns and we're continuously improving that. We have corporate goals to improve our optimization quarter-over-quarter.

In order to do that, you have to keep coming up with new things to measure and new ways to interpret the data, so you can figure out exactly which video you want to deliver to the right person, at the right time, in the right context.

Looking down the road

Gardner: Chris, we're here at the Big Data Conference for HP Vertica and its community. Looking down the road a bit, what sort of requirements do you think you are going to need later? Are there milestones or is there a road map that you would like to see Vertica and HP follow in order to make sure that you don't run out of runway again sometime?

Meisl: Obviously, we want HP and Vertica to continue to scale up, so that it is still a cost-effective solution as the volume of data inexorably rises. It's just going to get bigger and bigger and bigger. There's no going back there.

In order to be able to do the kind of processing that we need to do without having to spend a fortune on server farms, we want Vertica, in particular, to be very efficient at the kinds of queries that it needs to run, and proficient at loading the data and at accommodating the questions asked of it.

In addition to that, what's particularly interesting about Vertica is its analytic functions. It has a very interesting suite of analytic functions that extends beyond the standard SQL analytic functions, based on time series and pattern matching. This is very important to us, because we do fraud detection, for example. So you want to do pattern matching for that. We do pacing for campaigns, so you want to do time series analysis for that.
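As a small illustration of the pacing side of that, here's a hypothetical check of whether an in-flight campaign is on pace; the targets and delivery numbers are invented, and real pattern-matching and time-series work would lean on the warehouse's own analytic functions rather than application code.

```python
def pacing(delivered, goal, days_elapsed, flight_days):
    """1.0 means on pace; below 1 is behind; above 1 is ahead."""
    expected = goal * (days_elapsed / flight_days)
    return delivered / expected

ratio = pacing(delivered=180_000, goal=1_000_000,
               days_elapsed=6, flight_days=30)
print(f"pace: {ratio:.2f}")  # 0.90 -> 10% behind; shift delivery toward
                             # better-performing sites for the rest of flight
```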

We look forward to HP and Vertica really pushing forward on new analytic capabilities that can be applied to real-time data as it flows into the Vertica platform.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.
