Monday, September 8, 2014

GSN Games hits top prize using big data to uncover deep insights into gamer preferences

It's a shame when the data analysis providers inside a company get the cold shoulder from the business leaders because the data keeps proving the status quo wrong, or contradicts the conventional corporate wisdom.

Fortunately for GSN Games in San Francisco, there's no such culture clash there. "The real thing that's helped us get to the point we are is a culture where everybody is open to being wrong -- and open to being proven wrong by the data," says Portman Wills, Vice President of Data at GSN Games.

"One of the things we use data for is to challenge all of our assumptions about our own products and our own businesses, says Wills. "It's really gotten to a point where it's almost religious in our company. The moment two people start debating what should or shouldn't happen, they say, 'Well let's just let the data decide.' That's been a core change not just for us, but for the game industry as a whole."

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

How did GSN Games get to the point where the data usually wins? It took a blazing fast data warehouse of 1.3 trillion rows that consumes, stores and produces analysis from some 110 million registered game-players in near real time. The next BriefingsDirect podcast focuses on just how GSN Games exploits such big data to effectively uncover game-changing entertainment trends for their audience. Oh, and it changes corporate cultures, too.

The discussion, at the recent HP Discover conference in Barcelona, is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Wills: GSN started as a cable network in the U.S. We’re distributed in 80 million households as the Game Show Network, and then we also have a digital wing that produces casual and social games on Facebook, web, tablets, and mobile. That division has 110 million registered game-players. My team takes data from all over those worlds, throws them into a big data warehouse, and starts trying to find trends and insights for both our TV audience and our online game-players.

In terms of the games, which is really where the growth is, our core demographic is older females, believe it or not, who love playing casual games. We skew more in the 55-plus age range, and we have players from all over the world.

Gardner: The word “games” means a lot of different things to a lot of people. We’re talking about a heritage of network television games back in the ’60s and ’70s that have led us to what is now your organization. But what sort of newer games are we talking about, and what proportion of them are online games, versus the more passive viewing on cable or other media outlets?

Wills: Originally, when our games division started as a branch of GSN, it was companion games to Wheel of Fortune, Minute to Win It, whatever the hot game show was. That's still a part of it, but the growth in the last few years has been in social games on Facebook, where a lot of our games are more casual titles and have nothing to do with the game shows -- tile-matching games or solitaire games, for example.

Then, in the last year or year-and-a-half for us, like everyone else, there’s been this explosion in mobile. So it’s iPad, Android, and iPhone games, and there we have the solitaires and the tile matching, too.

Increasingly, a lot of our success and growth has come from virtual casino games. People are playing Bingo, video poker, even slots, virtual slots. We have this title called GSN Casino. That’s an umbrella app with a lot of mini games that are casino-themed, and that one has really exploded in the last six months. It's a long way from the Point A of Family Feud reruns to the Point Z of virtual slot machines, but hopefully you can see how we got there.

Gardner: It seems like a long distance, but it’s been also a fairly short amount of time. It wasn't that long ago that the information you might have on your audience came through Nielsen for passive audiences, and you had basically a one- or two-dimensional view of that individual, based on an estimate of how much time was devoted to a show. But now, with mobile devices in particular, you have a plethora of data.

Tell us about the types of data that you can get, and what volumes are we talking about.

Mobile experience

Wills: Let’s take mobile, because I think it's easy to grok. Everything about the device is exposed to us. The fact that you’re playing on an iPad Mini Retina versus an iPad 1 tells us a lot about you, whether you know it or not.

Then, a lot of our users sign in via Facebook, which is another vector for information. If you sign in via Facebook, Facebook provides us with your age range, gender, and some granular location information. For every player, we get between 40 and 50 dimensions of data about that player or about that device.

That’s one bucket. But the actual gameplay is another whole bucket. What games do you choose to play in our catalog? How long do you play them? What time of day do you play them? Those start to classify users into various buckets -- from the casual commute player, who plays for 15 minutes every morning and afternoon, to the hard-core player who spends 8 to 10 hours a day, believe it or not, playing our games on their mobile devices.
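To make the bucketing idea concrete, here is a minimal sketch of how per-session gameplay records might be rolled up into the casual-commuter versus hard-core segments Wills describes. The field names, thresholds, and sample data are illustrative assumptions, not GSN's actual pipeline.

```python
from collections import defaultdict

# Hypothetical per-session records: (player_id, hour_of_day, minutes_played)
sessions = [
    ("p1", 8, 15), ("p1", 17, 12),     # short morning and evening sessions
    ("p2", 20, 240), ("p2", 22, 300),  # long evening sessions
]

def classify_players(sessions):
    totals = defaultdict(float)
    counts = defaultdict(int)
    for player, _hour, minutes in sessions:
        totals[player] += minutes
        counts[player] += 1
    segments = {}
    for player, minutes in totals.items():
        hours_per_day = minutes / 60.0        # toy data covers a single day
        avg_session = minutes / counts[player]
        if hours_per_day >= 8:
            segments[player] = "hard-core"
        elif avg_session <= 20:
            segments[player] = "casual commuter"
        else:
            segments[player] = "regular"
    return segments

print(classify_players(sessions))  # {'p1': 'casual commuter', 'p2': 'hard-core'}
```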

At that point, and this is a little bit of a pet peeve of mine, mobile doesn’t necessarily mean mobile, like out and about. A lot of our players are on their iPad, sitting on the couch in their home.

It’s not mobility. They’re not using 3G. They’re not using augmented reality. It’s just a device that happens to be a very convenient device for playing games. So it’s much more of a laptop replacement than any sort of mobile thing. That’s sort of a side track.

We collect all of this data, and it’s a fair amount. Right now, we’re generating about 900 million events per day across all of our players. That’s all streamed into our HP Vertica data warehouse, and there are a few tables, event time series tables, that we put the stuff into. A small table for us would be a few hundred billion records, and a large table, as I said, is 1.3 trillion records right now.

So the scale is big for us. I know that for other companies that seems like peanuts. It’s funny how big data is so broad. What’s big to one person is tiny to someone else, but this is the world that we’re dealing in right now.

We have 110 million players. Thankfully, not all of them are active at one time. That would be really big data. But we will have about 20 million at any given time in peak time playing concurrently. That’s a little bit about the numbers in our data warehouse.

Gardner: Understanding your audience through this data is something fairly new. Before, you couldn’t get this amount of data. Now that you have it, what is it able to do for you? Are you crafting new games based on your findings? Are you finding information that you can deliver back to a marketer or advertiser that links them to the audience better? There must be many things you can do.

No advertising

Wills: First of all, we don’t do any advertising in our mobile games. So that’s one piece that we’re not doing, although I know others are. But there are two broad buckets in which we use data. The first is that we run a lot of the A/B tests, experiments. All of our games are constantly being multivariate tested with different versions of that same game in the field.

We run 20 to 40 tests per week. As an example, we have a Wheel of Fortune game that we recently released, and there was all this debate about the difficulty of the puzzles. How hard should the puzzles be? Should they be very obscure pieces of Eastern literature, mainstream pop culture, or even easier?

So, we tested different levels of difficulty. Some players got the easy, some players got the medium, and some players got the hard ones. We can measure the return rate, the session duration, and the monetization for people who buy power-ups, and we see which level of difficulty performs the best. In the first test of easy, medium, hard, easy overwhelmingly did the best.
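As a rough illustration of the comparison Wills describes, the sketch below tallies return rate and monetization per difficulty variant. The metric names and sample numbers are invented for the example; a production A/B test would also apply statistical significance checks before declaring a winner.

```python
# Toy per-player outcomes from a hypothetical difficulty test.
# Each record: (variant, returned_next_day, revenue_usd)
results = [
    ("easy", True, 0.99), ("easy", True, 0.00), ("easy", False, 0.00),
    ("medium", True, 0.00), ("medium", False, 0.00),
    ("hard", False, 0.00), ("hard", True, 1.99),
]

def summarize(results):
    stats = {}
    for variant, returned, revenue in results:
        s = stats.setdefault(variant, {"players": 0, "returns": 0, "revenue": 0.0})
        s["players"] += 1
        s["returns"] += int(returned)
        s["revenue"] += revenue
    for s in stats.values():
        s["return_rate"] = s["returns"] / s["players"]
        s["arpu"] = s["revenue"] / s["players"]   # average revenue per user
    return stats

for variant, s in summarize(results).items():
    print(f"{variant}: return_rate={s['return_rate']:.2f}, arpu=${s['arpu']:.2f}")
```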

So we generated a whole bunch of new puzzles that were even easier than the previous easy ones and tested those against what was now the control level. The easier puzzles won again. So we generated a whole new set of puzzles that were absurdly easy. We were trying to prove the point that if we gave Wheel of Fortune puzzles that are four-letter words like “bird” and “cups,” nobody would enjoy playing something that simplistic.

Well, it turns out that they do -- surprise, surprise -- and so that’s how we evolved into a version of Wheel of Fortune that, compared to the game show, looks very different, but it’s actually what customers want. It’s what players want. They want to relax and solve simple puzzles like “door.”

Gardner: So Vertica analysis determined that everyone is a winner on GSN, but you’re able to do real-time focus-group types of activities. The data -- because it's so fast, because there is so much information available and you can deal with it so quickly -- means that you’re able to tune your games to the audience virtually overnight.

Wills: Hopefully faster than overnight. Overnight is a little too slow these days. We push twice a day, both platform code and updates to all of our games, in the morning around 11 a.m. and in the afternoon around 3:30. Each one of those releases is based on the data that came from the prior release.

So we're constantly evolving these games. I want to go back to your previous question, because I only got to talk about one bucket, which is this experimentation. The other bucket is using the usage patterns that customers have to evolve our product in ways that aren’t necessarily structured around an A/B test.

We thought when we launched our iPhone app that there would be a lot of commuting usage. We had in our head this hypothetical bus player, who plays on the bus in the morning. And so we thought we would build all the stuff around daily patterns. We built this daily return bonus that you can do in the morning and then again in the evening.

The data showed us that that really was only a tiny fraction of our players. There were, in fact, very few players who had this bimodal, morning and evening usage pattern. Most people didn't play at all until after dinner and then they would play a lot, sometimes even binge from 7 p.m. until 2 a.m. on games.

False assumptions

That was an area where we didn't even set up an experiment. We just had false assumptions about our player base. And that happens a surprising amount of the time. We all -- especially the game-design team and people who spent their careers designing video games -- have assumptions about their audience that half the time are just wrong. One of the things we use data for is to challenge all of our assumptions about our own products and our own businesses.

It's really gotten to a point where it's almost religious in our company. The moment two people start debating what should or shouldn't happen, they say, “Well let's just let the data decide.” That's been a core change not just for us, but for the game industry as a whole.

Because we’re here in Spain, a quick tidbit that we uncovered recently is that our main time-frame in every country on Earth, when people play games, is 7 p.m. to 11 p.m., except in Spain where it’s 1 p.m. to 3 p.m. -- siesta time. That’s just one example of how we use big data to discover insights about our players and our audiences worldwide.

Understanding the audience

Gardner: I have to imagine that the data that led you to that inference in Spain was something other than what we might consider typical structured data. How did bringing the different data together allow you to understand your audience better?

Wills: We use this product from HP called Vertica, which is just a tremendous data warehouse that lets us throw every single click, touch, or swipe in all of our games into a big table. By big, I mean right now it’s, I think, 1.3 trillion rows. We keep saying that we should really archive this thing. Then, we say we’ll archive it when it slows down, and then it just never slows down, so we have yet to archive it.

We put all of the click stream data in there. The traditional joins, schemas, and all of that don’t really have to happen because we have one table with all of the interactions. You have the device, the country, the player, all these attributes. It’s a very wide table. So if you want to do things like ask what is the usage in five-minute slices by country, it’s a simple SQL query, and you get your results.
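For readers who want to picture what such a query might look like, here is a minimal sketch of a five-minute-slice usage rollup by country. The table and column names (events, event_time, country) and the connection details are assumptions for illustration, not GSN's actual schema, and the open-source vertica-python client is assumed as the driver.

```python
import vertica_python  # assumes the open-source vertica-python client

# Placeholder connection details.
conn_info = {"host": "vertica.example.com", "port": 5433,
             "user": "analyst", "password": "...", "database": "games"}

# Bucket events into 5-minute slices and count activity per country.
QUERY = """
SELECT country,
       TIME_SLICE(event_time, 5, 'MINUTE') AS slice_start,
       COUNT(*) AS events
FROM   events
WHERE  event_time >= NOW() - INTERVAL '1 day'
GROUP BY country, TIME_SLICE(event_time, 5, 'MINUTE')
ORDER BY 1, 2;
"""

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.execute(QUERY)
    for country, slice_start, events in cur.fetchall():
        print(country, slice_start, events)
```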

Gardner: What you’re describing is something many types of businesses want: to understand a massive amount of data from their audience, to be able to react quickly to it, and then to stop guessing about products, pricing, distribution, logistics, and supply chain and be driven purely by the data. You’re a really interesting harbinger of things to come.

Portman, tell me a little bit about the process by which you were able to do this. Did you have an older data warehouse? What did you use before, and how did you make the transition to HP Vertica?

Wills: When we started the social mobile business three years ago, we were on MySQL, which we are still on for our transactional load. We have three data centers around the world. When people are playing our games, the system is recording, reading, and writing 125,000 transactions per second, and MySQL, sharded out, works great for that.

When you want to look at your entire player base and do a cross-shard query, we found that MySQL really fell down. Our original Vertica proof of concept (POC) was just to replace these A/B test queries, which have to look across the entire population.

So in comes Vertica. We set up a single-node Vertica data warehouse, pulled in a year's worth of data, and the same query to synthesize these sessions ran in 800 milliseconds.

So the thing that took 24 hours, which is 86,400 seconds, ran in less than one second. By the way, that 24-hour query was running across dozens of machines, and this Vertica query was running on a single server of commodity hardware.

That's when we really became believers in the power of the column store and column-oriented data warehouses. From the small beginning of just one simple query, it’s now expanded -- and pretty much our whole business runs on top of HP Vertica on the data warehouse side.

Lessons learned

Gardner: As I said, I think GSN Games is really a harbinger of what a lot of other companies in many different vertical industries will be seeking. Looking back, if you had to do it again, what might you have done differently, or what suggestions might you have for others who would like to be able to do what you are doing?

Wills: I definitely wish that we had switched to a column store sooner. I think the reason that we've been so successful at this is because of our game design team, which was so open to using data.

I’ve heard hard stories from other companies where they want to use a data-driven approach, and there's just a lot of cultural inertia and push back against doing that. It's hard to be consistently proven wrong in your job, which is always what happens when you rely on data.

The real thing that's helped us get to the point we are in is a culture and a company where everybody is open to being wrong -- and open to being proven wrong by the data, which I am very thankful for.

Gardner: Well, it's good to be data-driven, and I think you should feel good being responsible for making 110 million people feel good about themselves every day.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.



Friday, August 22, 2014

Hybrid cloud models demand more infrastructure standardization, says global service provider Steria

The old model of just being an outsourcer or on-premises service provider is dead for many IT solutions providers. Instead, we’re all now in a hybrid world where we will have some private-cloud solutions and multiple public clouds. The challenge is to have the right level of governance, and to be in a position to move workloads and adjust them to the needs.

These words of wisdom come from European IT services provider Steria, which, along with hundreds of its customers, is charting a journey to hybrid cloud while maintaining control, automation, and reporting across all IT infrastructure.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about how services standardization leads to improved hybrid cloud automation, BriefingsDirect spoke to Eric Fradet, Industrialization Director at Steria in Paris. The discussion, at the recent HP Discover conference in Barcelona, is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Fradet: Steria is a 40-year-old service provider company, mainly based in Europe, with a huge location in India and also Singapore. We provide all types of services related to IT, starting from infrastructure management to application management. We help to develop and deploy new IT services for all our customers.

Gardner: How are your activities at Steria helping you better deliver the choice of cloud and software-as-a-service (SaaS) to your customers?

Fradet: That change may be quicker than expected. So, we must be in a position to manage the services wherever they’re from. The old model of saying that we’re an outsourcer or on-premises service provider is dead. Today, we’re in a hybrid world and we must manage that type of world. That must be done in collaboration with partners, and we share the same target, the same ambition, and the same vision.

Benefit, not a pain

The cloud must not be seen as disruptive by our customers. Cloud is here to accompany their transformation. It must be a benefit for them, and not a pain.

A private solution may be the best starting point for some customers. A full public solution may be the target. We’re here to manage their journey and to define with the customer the best solution for each need.

Gardner: And in order for that transition from private to public or multiple public or sourced-infrastructure support, a degree of standardization is required. Otherwise, it's not possible. Do you have a preferred approach to standardization?

Fradet: The choice of HP as a partner was based on two main criteria. First of all, the quality of the solution, obviously, but there are multiple good solutions on the market. The second one is HP's capacity to enable a smooth transition, and that means getting the industrialization benefits and the economic benefits while also being open to and interconnected with existing IT systems.

That's why the future model is quite simple. Our job is to recognize that some on-premises, physical infrastructure will remain. We will have some private-cloud solutions and multiple public clouds, as you mentioned. The challenge is to have the right level of governance, and to be in a position to move the workloads and adjust the workloads with the needs.

Gardner: Of course, once you've been able to implement across a spectrum of hosting possibilities, then there is the task of managing that over time, being able to govern and have control.

Fradet: With HP, we have a layer approach which is quite simple. First of all, if you want to manage, you must control, as you mentioned. We continue to invest deeply in IT Service Management (ITSM) because ITSM is service governance. In addition, we have some more innovative solutions based on the latest version of Cloud Services Automation (CSA). Control, automate, and report remain key, whatever the cloud or non-cloud infrastructure.

Gardner: Of course, another big topic these days is big data. I would think that a part of the management capability would be the ability to track all the data from all the systems, regardless of where they’re physically hosted. Do you have a preference or have you embarked on a big-data platform that would allow you to manage and monitor IT systems regardless of the volume, and the location?

Fradet: Yes, we have some very interesting initiatives with HP around HAVEn, which is obviously one of the most mature big-data platforms. The challenge for us is to transform a technologically wonderful solution into a business solution. We’re working with our business units to define use-cases that are totally tailored and adjusted for the business, but big data is one of our big challenges.

Traditional approach

Gardner: Have you been using a more traditional data-warehouse approach, or are you not yet architecting the capability? Are you still in a proof-of-concept stage?

Fradet: Unfortunately, we have hundreds of data-warehouse solutions, which are customer-dedicated, ranging from the very old-fashioned to operational key performance indicators (KPIs) to advanced business intelligence (BI).

The challenge now is really to design for what will be top requirements for the data warehouse, and you know that there is a mix of needs in terms of data warehouses. Some are pure operational KPIs, some are analytics, and some are really big data needs. To design the right solution for the customer remains a challenge. But, we’re very confident that with HAVEn, sometime in 2014, we will have the right solution for those issues.

Gardner: Lastly, Eric, the movement toward cloud models for a lot of organizations is still in the planning stages. They are mindful of the vision, but they also have IT housecleaning to do internally. Do you have any suggestions as to how to properly modernize, or move toward a certain architecture that would then give them a better approach to cloud and set them up for less risk and less disruption? What are some observations that you have had for how to prepare for moving toward a cloud model?

Fradet: As with any transformation program, the cloud’s eligibility program remains key. That means we have to define the policy with the customer. What is their expectation -- time to market, cost saving, to be more efficient in terms of management?

Cloud can offer many combinations or many benefits, but you have to define as a first step your preferred benefits. Then, when the methodology is clearly defined, the journey to the cloud is not very different from any other program. It must not be seen as disruptive, keeping in mind that you do it for the benefits and not only for technical reasons or whatever.

So don't jump to the cloud without having strong resources below the cloud.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Monday, August 11, 2014

Service providers gain new levels of actionable customer intelligence from big data analytics

It’s no secret that communication service providers (CSPs) are under a lot of pressure as they make massive investments in upgraded networks while facing shrinking margins and revenues from their eroding traditional voice or broadcasting businesses.

Traditional operators understand that they must go beyond what they did before. They need to offer more compelling services to reduce churn and acquire new customers. But how to know what services customers want most, and how much to charge for them?

A key asset CSPs have is the huge amount of information that they generate and maintain. And so it's the analytics from their massive data sets that becomes the go-to knowledge resource as CSPs re-invent themselves.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.
Our next Big Data innovation discussion therefore explores how the telecommunication service-provider industry is gaining new business analytic value and strategic return through the better use and refinement of their Big Data assets.

To learn more about how analytics has become a business imperative for service providers, peruse this interview with Oded Ringer, Worldwide Solution Enablement Lead for HP Communication and Media Solutions. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the major trends leading CSPs to view themselves as being more data-driven organizations?

Ringer: CSPs are under a lot of pressure. On one hand, this industry has never been more central. Everybody is connected, spending so much more time online than ever before, and carrying with them small devices through which they connect to the network. So CSPs are central to our work and personal lives -- and as a result, they’re under a lot of pressure.

They’re under a lot of pressure, because they’re required to make massive investments in the networks, but they also need to deal with shrinking margins and revenues to subsidize these investments. So, at the end of the day, they’re squeezed between these two motions. 

One approach many CSPs have adopted in the last year is to reduce cost and cut operations. But this is pretty much a trip to nowhere. Retreating to the most basic, commodity services is no way for these kinds of businesses to survive.

In the last two to three years, more and more traditional operators understand that they must go beyond what they did before. They need to offer more compelling services to reduce churn and acquire new customers. They need to leverage their position as a central place between consumers and what they are looking for to become some kind of broker of information.

The key asset they have in their hand to become such brokers is the huge amount of information that they maintain. It’s exactly where analytics comes into play.

Talking about mobile

Gardner: When we say CSP and telecommunication companies these days, we’re more and more talking about mobile, right? How big a shift has mobile been in terms of the need to analyze use patterns and get to know what's really happening out in the mobile network?

Ringer: Mobile services are certainly the leading tool in most operators' arsenals. Operators that have the subscriber “connected” with them wherever they go, around the clock, have an advantage over those that are more dependent upon or only provide tethered services.

But we need to keep in mind that there’s also a whole space for analytics solutions that are related to fixed-line services, like cable, satellite, broadband, and other landline services. CSPs are investing a lot in becoming more predictive, finding out what the subscriber really wants, what the quality of those services is at any given time, and how they can reduce churn in their customer base.

Another kind of analytics practice that operators engage in is trying to be predictive about their investments in the network: understanding which network segments are used by more high-worth individuals -- those they do want to improve service to -- and beefing up those networks and not the others.

Again, it’s these mobile operators who are on the front lines of doing more with subscriber data and information in general, but it is also true for cable operators and pay-TV operators, and landline CSPs.

Gardner: Oded, what are some of the data challenges specific to CSPs?

Ringer: In the CSP industry, Big Data is bigger than in any other industry. Bigger, first of all, in volume. There is no other industry that runs this amount of data – if you take into consideration they’re carrying everybody’s data, consumer and enterprise. But that’s one aspect and is not even the most complicated one. 

The more complicated thing is the fact that CSPs, unlike most enterprises, need to handle not only the structured data that’s coming from databases and so on, but also unstructured data, such as web communication, voice communication, and video content. They want to analyze all those things, and this requires analyzing unstructured data. 

So that’s a significant change in that type of process flow. They are also facing the need to look at new sets of structured data, data from IT management and security log files, from sensors and end-point mobile device telematics, cable set-top boxes, etc.

And two, in the CSP industry, because everything is coming from the wire, there’s no such thing as off-line analytics or batch analytics. Everything needs to be real-time analytics. Of course, this doesn’t mean that there will not be off-line or batch analytics, but even these are becoming more complex and span many more data sets across multiple enterprise silos.

More real time

If you analyze subscriber behavior right now and you want to make an offer to improve the experience that he’s having in real time, you need to capture the degradation of service right now and correlate it with what you know about the subscriber right now. So it's so much more real time than in any other industry. 

We’re not talking here about projects of data consolidation. It may be necessary in some cases, but that’s not really the practice that we’re talking about here. We’re talking about federating, referring to external information, analyzing in the context of the logic that we want to apply, and making real-time decisions.

In short, CSP Big Data analytics is Big Data analytics on steroids.

Gardner: What does a long-term solution look like, rather than cherry picking against some of these analytics requirements? Is there a more strategic overview approach that would pay off longer term and put these organizations in a better position as they know more and more requirements will be coming their way?

Ringer: Actually we see two kinds of behaviors. The market is still young. So it's very hard to say which one will be more dominant. We see some CSPs that are coming to us with a very clear idea on what business process they want to implement and how they believe a data-driven approach can be applied to it. 

They have a clear model and a clear return on investment (ROI), and they want to go for it and implement it. Of course, they need the technology, the processes, and the business projects, but their focus is pretty much on a single use case or a variety of use cases that are interrelated. That’s one trend.

There’s another trend in which operators say they need to start looking at their data as an asset, as an area that they want to centralize. They want to control it in a productive manner -- for security, for privacy, and for the ability to leverage it for different purposes.

Central asset

Those will typically come with a roadmap of different implementations that they would like to do via this Big Data facility that they have in mind and want to implement. But what’s more important for them is not the quickest time to launch specific processes, but to start treating the data as a central asset and to start building a business plan around it. 

I guess both trends will continue for quite a while, but we see them both in the market, sometimes even in the same company in different organizations.

Gardner: How can a CSP really change its identity from being a pipe, a conduit, to being more of a rich services provider on top of communications?

And what is it that HP is bringing to the table? What is it about HP HAVEn, in particular, that is well suited to where the telecommunications industry is going and what the requirements are?

Ringer: HP has made huge investments in the space of Big Data in general and analytics in particular, both through in-house development of multiple products and through acquisitions of external assets.

Complete platform

HAVEn is now the complete platform that includes multiple best-in-class product elements, based on multiple cutting-edge yet proven technologies, for exploiting Big Data and analytics. Our solution for the space is pretty much based on HAVEn and expanded with specific solutions for CSP needs, with a wide gallery of connectors for the external data sources that exist within the CSP space.

In short, we’re taking HAVEn and using it for the CSP industry with lots of knowledge about what traditional CSP operators need to become next-generation CSPs. Why? 

Because we have a very large group within HP of telecom experts who interact with and leverage what we’re doing in other industries and with many of the new-age service providers like the Amazons, Googles, Facebooks, and Twitters of the world. We go a long way back in expertise in telecom -- but combine this with forward-thinking customers and our internal visionaries in HP Labs and across our business units.

Gardner: Just to be clear for our audience, HAVEn translates to Hadoop, Autonomy, Vertica, and Enterprise Security, along with a whole suite of horizontally and vertically integrated applications that are specific to vertical industries. Is that right?

Ringer: Exactly.

Gardner: Tell me what you do in terms of how you reach out to communications organizations. Is there something about meeting them at the hardware level and then alerting them to what these other Big Data capabilities are? Is this a cross-discipline type of approach? How do you actually integrate HP services and then take that and engage with these CSPs?

Ringer: Those things exist, like engaging at a hardware level, but those are the less common go-to-market motions that we see. The more popular ones are more top-down, in the sense that we are meeting with business stakeholders who want to know how to leverage Big Data and analytics to improve their business.

They don’t care about the data other than how it’s going to result in actionable intelligence. So, at the CSP level, it can be with marketing officers within the CSP who are looking to create more personalized or stickier services to increase the attention of their subscribers. They’re looking to analytics for that.

It can be with business-development managers within the CSP organization who are looking to create models of collaboration with the Yahoos and Facebooks of the world, with retailers, or with any other participants in their ecosystem, where they can bring the ability to provide the pipe, back-end hosting of services, and intelligence about how the pipe is delivering the services and the sentiment of the customers on the other end of the pipe.

They want to share information of value to their customers, making them dependent on them in new ways that aren’t just about the pipe, thereby gaining new revenue streams. That’s the kind of motivation they have. It can be with IT folks as well, but at the end of the day the discussion about CSP Big Data isn’t coming from the technology. It’s coming from the business people who understand that they need to do something with the data and monetize it.

Then, of course, it pretty quickly becomes a technical discussion, but the motion is business to technology, rather than infrastructure to technology.

Support practice

We also developed a support practice within our organization that does exactly that: business advisory workshops. It’s for stakeholders in different roles to realize what the priorities are in using Big Data. What is the roadmap that they want to implement?

The purpose of this exercise is to quickly bring everybody to the same room, sit together for a day or two, and come out with an agreement on how to move from conventional services to more personalized services and diversify the business channels by using information and data.

For several years now, one large customer, Telefónica, a Latin American conglomerate, has been working with us on analytics projects to improve the quality of experience of their subscribers.

In Latin America, most people are interested in football, and many of them want to watch it on their mobile device. The challenge is that they all want to watch it during the same 90 minutes. That’s a challenge for any mobile operator, and that’s exactly where we started a critical project with Telefónica. 

We’re helping them analyze the quality of experience. Measuring the quality of the experience isn’t a very complicated thing. There are probes in the network to do that. We can pretty accurately get the quality of experience for every single video-streaming session. It’s no big deal.

Analytics kicks in when you want to correlate this aggregation of quality with who the subscriber is, how the subscriber is expected to behave, and what he’s interested in. We know that the quality isn’t good enough for many subscribers during the football game, but we need to differentiate and know to which one of them we want to make an offer to upgrade his package. What’s the right offer? When’s the right time to make the offer? How many different offers do we test to zero in on the best set of offers?

We want to know which one of them we don’t want to promote anything to, but just want to make him happy. We want to give him a better quality experience for free, because he is a good customer and we don’t want to lose him. And we want to know which customer we want to come back to later, apologize, and offer him a better deal.
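To make this concrete, a highly simplified sketch of the kind of decision logic Ringer describes over these last few paragraphs might look like the following. The thresholds, segment names, fields, and actions are purely illustrative assumptions, not the actual Telefónica implementation.

```python
from dataclasses import dataclass

@dataclass
class StreamEvent:
    subscriber_id: str
    video_quality: float     # 0.0 (unwatchable) .. 1.0 (perfect)

@dataclass
class SubscriberProfile:
    segment: str             # e.g. "high_value", "standard"
    package: str             # e.g. "basic", "premium"

def decide_action(event: StreamEvent, profile: SubscriberProfile) -> str:
    """Pick a real-time action when streaming quality degrades during the match."""
    if event.video_quality >= 0.8:
        return "no_action"                    # experience is fine
    if profile.segment == "high_value":
        return "boost_quality_free"           # keep a good customer happy
    if profile.package == "basic":
        return "offer_premium_upgrade"        # likely to accept an upsell
    return "apologize_and_offer_later"        # follow up after the game

print(decide_action(StreamEvent("s42", 0.5), SubscriberProfile("standard", "basic")))
# -> offer_premium_upgrade
```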

Real-time analytics

Based on real-time triggering of events from the network -- degradation of quality -- combined with the ongoing information we have about the subscriber, who the subscriber is, what marketing segment he belongs to, what package he subscribes to, and so on, we do the analytics in real time and decide what the right action and the right move are, in order to give the best experience to the individual subscriber.

It’s working very nicely for them. I like this example, first of all, because it’s real, but also because it shows the variety of processes we have here with correlation of real-time information with ongoing information for the subscribers. We have contextual action that is taken to monetize and to improve quality and to improve satisfaction. 

This example touches so many needs of an operator and is all done in a pretty straightforward manner. The implementation is rather simple. It’s all based on running the right processes and putting the right business process in place. But this isn’t always straightforward for enterprise customers, particularly those in the small-to-medium enterprise segment, so imagine what CSPs could do for their customers once they’ve gotten a handle on this for their own businesses.

Gardner: It seems to me that that helps reduce the risk of a provider or their customers coming out with new services. If they know that they can adjust rapidly and can make good on services, perhaps this gives them more runway to take off with new services, knowing that they can adjust and be more agile. It seems like it really fundamentally changes how well they can do their business.

Ringer: Absolutely. It also considerably reduces the risk of investment. If you launch a new service and you find out that you need to beef up your entire network, that is a major hit to your investment strategy. At the same time, if you realize that you can be very granular and very selective in your investment, you can do it much more easily and justify subsequent investments more clearly.

Gardner: Are there any other examples of how this is manifesting itself in the market -- the use of Big Data in the telecommunications industry?

Ringer: Let me give another example in North America. This is an implementation that we did for a large mobile operator in North America, in collaboration with a chain of retail malls. 

What we did there is combine the ongoing information that the mobile operator has about its subscribers -- he knows what the subscriber is interested in, what their prior buying patterns and transactions were, and so on -- with the location information of where the individual person is in the mall.

The mall operator runs a private wi-fi network there, so he has his own system for tracking where the individual is exactly within the mall. He knows within two meters where a person is in the mall, with a map overlay of the physical mall and all product and service offerings mapped to the same grid.

When we know a person is in the mall, we can correlate it with what the CSP already knows about this person. He knows that the specific person has a high probability of looking for a specific running shoe. The mobile operator knows it because he tracks the web behavior of the specific individual. He tracks the profile of the specific individual, and he can tell with pretty good accuracy that this guy, given the right offer, will say yes to running shoes.

Targeted and timely

So combining these two things, the ongoing analytics of the preferences, together with real-time location information, give us the ability to push out targeted and timely promotions and coupons.

Imagine that you go to the mall and you pass next to the shoe store. Your device pops up a message that says that, right now, Nike shoes are 50 percent off for the next 15 minutes. You know that you’re looking for Nike shoes. So the chance that you’ll go into the store is very good, and the results are very good, because you create a “buy now or you’ll miss out” feeling in the prospect. Many subscribers take the coupons that are pushed to them in this way.
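The mall scenario boils down to joining two signals, a location fix and an interest profile, and firing a time-limited offer when they match. The sketch below is a toy illustration with made-up store positions, interests, and distances, not the operator's system, and it leaves out the opt-in checks Ringer describes next.

```python
import math

# Hypothetical store locations on the mall's own coordinate grid (meters).
STORES = {"shoe_store": (120.0, 45.0), "coffee_shop": (80.0, 10.0)}

# Hypothetical interest profile derived (with opt-in) from browsing behavior.
INTERESTS = {"subscriber_7": {"running_shoes"}}

OFFERS = {"shoe_store": ("running_shoes", "Nike shoes 50% off for the next 15 minutes")}

def maybe_push_offer(subscriber_id, position, radius_m=20.0):
    """Return an offer if the subscriber is near a store matching their interests."""
    for store, (sx, sy) in STORES.items():
        distance = math.hypot(position[0] - sx, position[1] - sy)
        if distance > radius_m or store not in OFFERS:
            continue
        interest, message = OFFERS[store]
        if interest in INTERESTS.get(subscriber_id, set()):
            return message
    return None

print(maybe_push_offer("subscriber_7", (115.0, 50.0)))  # near the shoe store
```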

Of course, it’s all based on opt-in, and of course, it’s very granular in the sense that the analytics we do on subscriber information are limited to what the subscriber has opted in to let us look at. For instance, a specific person may allow us to look at his behavior on retail sites, but not on financial sites.

Gardner: Again, this shows a fundamental shift that the communications provider is not just a conduit for information, but can also offer value-added services to both the seller and the buyer -- radically changing their position in their markets. 

If I am an organization in the CSP industry and I listen to you and I have some interest in pursuing better Big Data analytics, how do I get started? Where can I go for more information? What is it that you’ve put together that allows me to work on this rather quickly?

Ringer: As I mentioned before, we typically recommend engaging in a two-day workshop with our business consultants. We have a large team of Big Data advisory consultants, and that’s exactly what they do. They understand the priorities and work together with the telecom organizations to come up with some kind of a roadmap -- what they want to do, what they can do, what they are going to do first, and what they are going to do later. 

That’s our preferred way of approaching this discipline. Overall, there are so many kinds of use cases, and we need to decide where to start. So that’s how we start. To engage, the best place is to go to our website. We have lots of information there. The URL is hp.com/go/telcoBigData, that’s one word, and from there you just click Contact Us, and we’ll get back to you. We’ll take you from there. There are no commitments, but chances are very good.

Gardner: Before we sign off, I just wanted to look into the future. As you pointed out, more and more entertainment and media services are being delivered through communication providers. The mobile aspect of our lives continues to grow rapidly. And, of course, now that cloud computing has become more prominent, we can expect that more data will be available across cloud infrastructures, which can be daunting, but also very powerful. Where do you see the future challenges, and what are some of the opportunities?

Ringer: We can summarize four main trends that we’re seeing increasing and accelerating. One is that CSPs are becoming more active in enabling new business models with partnerships, collaborations, internet players, and so on. This is a major trend. 

The second trend that we see increasing quite intensively is operators becoming like marketing organizations, promoting services for their own or for others.

The third one is more related to the operation of the CSP itself. They need to be more aware of where they invest, what their risk and probability of seeing a specific ROI are, and when that will occur. In short, Big Data and analytics will make them smarter and more proactive in making those investments. That’s another driver that increases their interest in using the data.

Overall, they all look to become more proactive. They all realize that data is an asset and is something that you need to keep handy, keep private, and keep secured, but also be able to use for a variety of use cases and processes to be ready for the next move.

Monday, August 4, 2014

A gift that keeps giving, software-defined storage now showing IT architecture-wide benefits

The next BriefingsDirect deep-dive discussion explores how one of the most costly and complex parts of any enterprise's IT infrastructure -- storage -- is being dramatically improved by the accelerating adoption of software-defined storage (SDS).

The ability to choose low-cost hardware, to manage across different types of storage, and radically simplify data storage via intelligent automation means a virtual rewriting of the economics of data.

But just as IT leaders seek to simultaneously tackle storage pain points of scalability, availability, agility, and cost -- software-defined storage is also providing significant strategic- and architectural-level benefits.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

We're joined by two executives from VMware to unpack these efficiencies and examine the broad innovation behind the rush to exploit software-defined storage, Alberto Farronato, Director of Product Marketing for Cloud Infrastructure Storage and Availability at VMware, and Christos Karamanolis, Chief Architect and a Principal Engineer in the Storage and Availability Engineering Organization at VMware. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Software-defined storage is changing something more fundamental than just data and economics of data. How do you see the wider implications of what’s happening now that software-defined storage is becoming more common?

Farronato: Software-defined storage is certainly about addressing the cost issue of storage, but more importantly, as you said, it’s also about operations. In fact, the overarching goal that VMware has is to bring to storage the efficient operational model that we brought to compute with server virtualization. So we have a set of initiatives around improving storage on all levels, and building a parallel evolution of storage to what we did with compute. We're very excited about what’s coming.

Gardner: Christos, one of my favorite sayings is that "architecture is IT destiny." How you see software-defined storage at that architectural level? How does it change the game?

Concept of flexibility

Karamanolis: The fundamental architectural principle behind software-defined storage is the concept of flexibility. It's the idea of being able to adapt to different hardware resources, whether those are magnetic disks, flash storage, or other types of non-volatile memories in the future.

How does the end user adapt their storage platform to their needs in terms of hardware capabilities, the ratios of the different types of storage, and the networking, CPU, and memory resources required to execute and deliver their services?

That’s one part of flexibility, but there is another very interesting part, which addresses a very acute problem for VMware customers today: the operational complexity of provisioning storage for applications and virtual machines (VMs), which have been one way of packaging applications.

Today, customers virtualize environments, but in general they also have to provision physical storage containers. They have to anticipate their uses over time and make an investment up front in resources that they'll need over a long period. So they create those logical unit numbers (LUNs), file services, or whatever is needed, for a period of time that spans anything from weeks to years.

Software-defined storage advocates a new model, where applications and VMs are provisioned at the time that the user needs them. The storage resources that they need are provisioned on-demand, exactly for what the application and the user needs -- nothing more or less.

The idea is that you do this in a way that is really intuitive to the end-user, in a way that reflects the abstractions that user understands -- applications, the data containers that the applications need, and the characteristics of the application workloads.


So those two aspects of flexibility are the two fundamental aspects of any software-defined storage.

Gardner: As we see this increased agility, flexibility, the on-demand nature of virtualization now coupled with software-defined storage, how are organizations benefiting at a business level?

Farronato: There are several benefits and several outcomes of adopting software-defined storage. The first that I would call out is the ability to be much more responsive to business needs -- and to changing business needs -- by delivering what your applications need, faster.

As Christos was saying, in the old model, you had to guess ahead of time what the applications would need, spend a lot of time trying to preconfigure and predetermine the various service levels, performance, availability, and other things that your applications would require of storage, and so spend a lot of time setting things up, and then hopefully, down the line, consume it the way you thought you would.

Difficult change management

In many cases, this causes long provisioning cycles. It causes difficult change management after you provision the application. You find that you need to change things around, because either the business needs have changed or what you guessed was wrong. For example, customers have to face constant data migration.

With the policy-driven approach that Christos has just described -- with the ability to create these storage services on-the-fly for a policy approach -- you don’t have to do all that pre-provisioning and preconfiguring. As you create the VMs and specify the requirements, the system responds accordingly. When you have to change things, you just modify the policy and everything in the underlying infrastructure changes accordingly.

Responsiveness, in my opinion, is the one biggest benefit that IT will deliver to the business by shifting to software-defined storage. There are many others, but I want to focus on the most important one.

Gardner: Can you explain what happens when software-defined storage becomes strategic at the applications level, perhaps with implications across the entire data lifecycle?

Karamanolis: One thing we already see, not only among VMware customers, but as a more generic trend, is that infrastructure administrators -- the guys who do the heavy lifting in the data centers day in and day out, and who manage much more than the traditional servers and applications -- are getting more and more into managing networks and data storage.

Find SDS technical insights and best practices on the VSAN storage blog.

Talking about changing models here, what we see is that tools have to be developed and software-defined storage is a key technology evolution behind that. These are tools for those administrators to manage all those resources that they need to make their day-to-day jobs happen.

Here, software-defined storage is playing a key role. With technology like Virtual SAN, we make the management of storage visible for people who are not necessarily experts in the esoterics of a certain vendor's hardware. It allows more IT professionals to specify the requirements of their applications.

Then, the software storage platform can apply those requirements on the fly to provision, configure, and dynamically monitor and enforce compliance for the policy and requirements that are specified for the applications. This is a major shift we see in the IT industry today, and it’s going to be accelerated by technologies like Virtual SAN.

Gardner: When you go to software-defined storage, you can get to policy level, automation, and intelligence when it comes to how you're executing on storage. How does software-defined storage simplify storage overall?

Distributed platform

Karamanolis: That's an interesting point, because if you think about this superficially, we’re now going from a single, monolithic storage entity to a storage platform that is distributed, controlled by software, and can span tens or sometimes hundreds of physical nodes and/or entities. Isn’t complexity greater in the latter case?

The reality is that, whether out of necessity or because we've learned a lot over the last 10 to 15 years about how to manage and control large distributed systems, there is a parallel evolution of ideas about how you manage your infrastructure, including the management of storage.

As we alluded to already, the fundamental model here is that the end user, the IT professional that manages this infrastructure, expresses in a descriptive way, what they need for their applications in terms of CPU, memory, networking, and, in our case, storage.

What do I mean by descriptive? The IT professional does not need to understand all the internal details of the technologies or the hardware used at any point in time, and which may evolve over a period of time.

Instead, they express at a high level a set of requirements -- we call them policies -- that capture the requirements of the application. For example, in the case of storage, they specify the level of availability that is required for certain applications and performance goals, and they can also specify things like the data protection policies for certain data sets.


Of course, for all those things, nothing comes for free. So the user has to be exposed to the consequences of the policy that they choose. There is a cost there for every one of those services.

But the key point is that the software platform automatically configures the appropriate resources, whether the data is spread across multiple physical devices or across the network, or whether data is replicated asynchronously to a remote location in order to comply with certain disaster recovery (DR) policies.

All those things are done by the software, without the user having to worry about whether the storage underneath is highly available, in which case only two copies of the data may be needed, or whether it is low-end hardware that would require three or four copies of the data. All those things are determined automatically by the platform.
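As a rough sketch of how a policy attribute might drive the number of copies the platform creates, consider the following. The function, attribute names, and tier labels are illustrative assumptions and are not VMware's actual policy model.

```python
def copies_required(failures_to_tolerate: int, hardware_tier: str) -> int:
    """Translate an availability policy into a number of data copies.

    Illustrative only: a real platform weighs device reliability, fault
    domains, and capacity, not just a tier label.
    """
    base = failures_to_tolerate + 1      # survive N failures -> N+1 copies
    if hardware_tier == "low_end":
        base += 1                        # pad for less reliable devices
    return base

policy = {"failures_to_tolerate": 1}
print(copies_required(policy["failures_to_tolerate"], "enterprise"))  # 2
print(copies_required(policy["failures_to_tolerate"], "low_end"))     # 3
```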

This is the new model. Perhaps I'm oversimplifying some of these problems, but the idea is that the user should really not have to know the specific hardware configuration of a disk array. If the requirements cannot be met, it is because those technologies are not incorporated into the storage platform.

Policy driven

Farronato: Virtual SAN is a completely policy-driven product, and we call it VM-centric or application-centric. The whole management paradigm for storage, when you use Virtual SAN, is predicated on the VM and the policies that you create and assign to your VMs as you create them and scale your environment.

One of the great things that you can achieve with Virtual SAN is providing differentiated service levels to individual VMs from a single data store. In the past, you had to create individual LUNs or volumes, assign data services like replication or RAID levels to each individual volume, and then map the application to them.

With Virtual SAN, you're simply going to have a capacity container that happens to be distributed across a number of nodes in your cluster -- and everything that happens from that point on is just dropping your VMs into this container. It automatically instantiates all the data services by virtue of having built-in intelligence that interprets the requirements of the policy.
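For readers who want to picture that model, here is a small, hypothetical Python sketch: one shared capacity container, with service levels attached per VM rather than per LUN or volume. The names (vsanDatastore, provision_vm, the policy labels) are illustrative, not an actual API.

    # One shared capacity container; service levels differ per VM, not per LUN or volume.
    datastore = {"name": "vsanDatastore", "vms": {}}

    def provision_vm(vm_name: str, policy_name: str) -> None:
        # "Dropping a VM into the container" just records which policy governs it;
        # the platform, not the administrator, turns that policy into data services.
        datastore["vms"][vm_name] = policy_name

    provision_vm("sql-prod-01", "gold")
    provision_vm("test-web-07", "bronze")
    print(datastore)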
One of the great things that you can achieve with Virtual SAN is providing differentiated service levels to individual VMs from a single data store.

That makes this system extremely simple and intuitive to use. In fact, one of the core design objectives of Virtual SAN is simplicity. If you look at a short description of the system, "radically simple, hypervisor-converged storage," the "radically simple" part means taking the idea of eliminating storage complexity to the next level.

Gardner: We've talked about simplicity, policy driven, automation, and optimization. It seems to me that those add up very quickly to a fit-for-purpose approach to storage, so that we are not under-provisioning or over-provisioning, and that can lead to significant cost-savings.

So let’s translate this back to economics. Alberto, do you have any thoughts on how we lower total cost of ownership (TCO) through these SDS approaches of simplicity, optimization, policy driven, and intelligence?


Farronato: There are always two sides of the equation. There is a CAPEX and an OPEX component. Looking at how a product like Virtual SAN reduces CAPEX, there are several ways, but I can mention a couple of key components or drivers.

First, I'd call out the fact that it is an x86 server-based storage area network (SAN), so it leverages server-side components to deliver shared storage. By virtue of using server-side resources, right off the bat there are significant savings that you can achieve through lower-cost hardware components. The same hard drive or solid-state drive (SSD) that you would deploy in a shared external storage array can be on the order of 80 percent cheaper when bought as a server component.
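To show how that ratio compounds across a cluster, here is a back-of-the-envelope Python sketch. The prices and cluster size are invented for illustration; only the roughly 80 percent figure comes from Farronato's comment above.

    # Illustrative prices only; the ratio is the point.
    array_price_per_tb = 2500.0                       # hypothetical $/TB in a shared external array
    server_price_per_tb = array_price_per_tb * 0.20   # roughly 80 percent cheaper server-side

    capacity_tb = 48                                  # e.g., an 8-node cluster, 6 TB usable per node
    print("external array capex:", capacity_tb * array_price_per_tb)
    print("server-side capex   :", capacity_tb * server_price_per_tb)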

The other aspect I would call out as reducing overall CAPEX is, as you said, the consume-on-demand approach or, as we often put it, grow-as-you-go. With a scale-out model, you can start with a small deployment and a small upfront investment.

You can then progressively scale out as your environment grows, with much finer granularity than you could with a monolithic array. And as you scale, you scale both compute and IOPS, which goes hand in hand with the number of VMs that you are running in your cluster.

System growth
 
So the system grows with the size of your environment, rather than requiring you to buy a lot of resources upfront that many times remain under-utilized for a long time.
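A minimal Python sketch of that grow-as-you-go arithmetic is below. The per-node capacity and IOPS figures are hypothetical; the point is that both resources grow linearly with node count instead of being purchased upfront.

    # Grow-as-you-go: capacity and IOPS are added a node at a time rather than bought upfront.
    PER_NODE_TB = 6          # hypothetical usable capacity per node
    PER_NODE_IOPS = 20_000   # hypothetical I/O capability per node

    def cluster_resources(nodes: int) -> dict:
        return {"nodes": nodes,
                "capacity_tb": nodes * PER_NODE_TB,
                "iops": nodes * PER_NODE_IOPS}

    for n in (3, 4, 8, 16):  # start small, then scale with the VM count
        print(cluster_resources(n))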

On the OPEX side, when things become simpler, it means that overall administration productivity increases. So we expect a trend where individual administrators will be able to manage a greater amount of capacity, and to do so in conjunction with management of the virtual infrastructure to achieve additional benefits.

Gardner: Virtual SAN has been in general availability now for several months, since March 2014, after being announced last year at VMworld 2013. Now that it’s in place and growing in the market, are there any unintended benefits or unintended consequences from that total-cost perspective in real-world day-in, day-out operations?
The system grows with the size of your environment, rather than requiring you to buy a lot of resources upfront that many times remain under-utilized for a long time.

I'm looking for ways in which a typical organization is seeing software-defined storage benefiting them culturally and organizationally in terms of skills, labor, and that sort of softer metric.

Karamanolis: That's a very interesting point. We technologists sometimes tend to overlook the cultural shifts that technology causes in the field. In the case of Virtual SAN, we see a lot of what one customer described as being empowered to manage their own storage, for the vertical they control in their IT organization, without having to depend on the centralized storage organization in the company.

Find SDS technical insights and best practices on the VSAN storage blog.

What we really see here is a shift in paradigm about how our customers use Virtual SAN today to enable them to have a much faster turnaround for trying new applications, new workloads, and getting them from test and dev into production without having to be constrained by the processes and the timelines that are imposed by a central storage IT organization.

This is a major achievement and a major tool for VMware administrators in the field, and we believe it is going to lead the way to much wider adoption of Virtual SAN and of software-defined storage in general.

Gardner: How does this simplification and automation have a governance, risk, and compliance (GRC) benefit?

Farronato: With this approach, you have a more granular way to control the service levels that you deliver to your customers, your internal customers, and a more efficient way to do it, by standardizing through policies rather than trying to standardize service levels across categories of hardware.

Self-service consumption

You can more easily keep track of what each individual application is receiving and whether it is in compliance with the particular policy that you specified. You can also now enable self-service consumption more easily and effectively.

We have, as part of our Policy-Based Management Engine, APIs that allow for integration with cloud-automation frameworks, such as vCloud Automation Center or OpenStack, where end users will be able to consume a predefined category of service.

It will speed up the provisioning process, while at the same time, enabling IT to maintain that control and visibility that all the admins want to maintain over how the resources are consumed and allocated.
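As a purely conceptual illustration, the Python sketch below maps a requested service tier to a predefined storage policy and queues a provisioning request. It does not use the actual vCloud Automation Center or OpenStack APIs; the catalog entries, function name, and fields are all hypothetical.

    # Hypothetical self-service catalog: the consumer picks a tier, the automation
    # layer resolves it to a storage policy and queues the provisioning request.
    SERVICE_CATALOG = {
        "mission-critical": "gold",
        "general-purpose": "silver",
        "test-dev": "bronze",
    }

    def request_vm(owner: str, tier: str) -> dict:
        policy = SERVICE_CATALOG[tier]   # predefined category of service
        return {"owner": owner, "policy": policy, "status": "queued-for-provisioning"}

    print(request_vm("app-team-3", "test-dev"))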
You can also now enable self-service consumption more easily and effectively.

Gardner: I suppose there are as many on-ramps to software-defined data center as there are enterprises. So it's interesting that it can be done at that custom level, based on actual implementation, but also have a strategic vision or a strategic architectural direction. So, it's future-proof as well as supporting legacy.

How about some examples? Do we have either use-case scenarios or an actual organization that we can look to and say that they have deployed Virtual SAN, benefited in certain ways, and are indicative of what others should expect?

Farronato: Let me give you some statistics and some interesting facts. Looking at some of the early examples, in the three months since the product became available we have already found significant success in the marketplace, with a great start in terms of adoption by our customers.

Find SDS technical insights and best practices on the VSAN storage blog.

We already have more than 300 paying customers in just one quarter. That follows the great success of the public beta that ran through the fall and the early winter with several thousand customers testing and taking a look at the product. 

We are finding that virtual desktop infrastructure (VDI) is the most popular use case for Virtual SAN right now. There are a number of reasons why Virtual SAN fits this model, from its scale-out design to the fact that the hyper-converged storage architecture is particularly suitable for addressing the storage issues of a VDI deployment.

DevOps, or, if you prefer, preproduction environments loosely defined as test/dev, is another area. There are disaster-recovery targets in combination with vSphere Replication and Site Recovery Manager. And some of the more aggressive customers are also starting to deploy it in production use cases.
In the three months since the product became available, we have already found significant success in the marketplace.

As I said, the 300 customers that we already have span the gamut in terms of size and industry, from large enterprises and banks down to smaller accounts and companies, including education institutions and smaller SMBs.

There are a couple of interesting cases that we'll be showcasing at VMworld 2014 in late August. If you look at the session list, they're already available as actual use cases presented by our customers themselves.

Adobe will be talking about their massive implementation of Virtual SAN in their production environment, for their data-analytics platform. There will be another interesting use case with TeleTech, talking about how they have leveraged Virtual SAN on Cisco UCS for their VDI deployments.

VDI equation

Gardner: I'd like to revisit the VDI equation for a moment, because one of the things that has held people up is the impact on storage, and the costs associated with the storage needed to support VDI. But if you're able to bring down costs by 50 percent in some cases using software-defined storage, that radically changes the VDI equation. Isn't that the case, Christos? Can you now say that you can do VDI more cheaply than almost any other approach to a virtualized desktop?

Karamanolis: Absolutely. The cost of storage is the main impediment for organizations implementing a VDI strategy. With Virtual SAN, as Alberto mentioned earlier, we provide a very compelling cost proposition, both in terms of the capacity of the storage and the performance you get out of the storage.
You get both the capacity and the performance your VDI workloads need, for a fraction of what you would pay for traditional disk-array storage.

Alberto already touched on the cost of capacity, referring to the difference between the prices one can get from server vendors and the broader market, as opposed to similar hardware procured as part of a traditional disk array.

I'd like to touch on something that is an unsung hero of Virtual SAN and of VDI deployment especially, and that's performance. Virtual SAN, as should be clear by now, is a storage platform that is strongly integrated with our hypervisor. Specifically, the data path implementation and the distributed protocols that are implemented in Virtual SAN are part of the ESXi kernel.

That means we can achieve very high performance goals while minimizing the CPU cycles consumed to serve those I/Os per second. What that means, especially for VDI, is that we use only a small slice of the CPU and memory of every single ESXi host to implement this distributed, software-driven storage controller.


It doesn't affect the VMs that run on the same ESXi host. We have already published extensive and detailed performance evaluations that compare VDI deployments running on Virtual SAN versus using an external disk array.

And even though Virtual SAN usage is capped at about 10 percent of the local CPU and memory on those hosts, the consolidation ratio, the number of virtual desktops we run on those clusters, is virtually unaffected, while we get the full performance that would be realized with an external, all-flash disk array. This is the value of Virtual SAN in those environments.
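To see why a roughly 10 percent reservation need not change the consolidation ratio, here is a back-of-the-envelope Python sketch. Every figure in it (host sizing, desktop count, per-desktop load) is invented for illustration; only the approximately 10 percent overhead comes from Karamanolis' comment above.

    # Back-of-the-envelope with illustrative host sizing.
    cores, ghz_per_core = 20, 2.5
    total_ghz = cores * ghz_per_core      # 50 GHz of CPU on the host
    vsan_ghz = total_ghz * 0.10           # up to ~10 percent reserved for Virtual SAN

    desktops = 100                        # hypothetical desktops per host
    per_desktop_ghz = 0.4                 # light knowledge-worker load
    desktop_demand_ghz = desktops * per_desktop_ghz   # 40 GHz of desktop demand

    # Desktop demand plus the Virtual SAN slice still fits within the host (40 + 5 <= 50),
    # which is why the consolidation ratio can remain essentially unchanged.
    print(desktop_demand_ghz + vsan_ghz <= total_ghz)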

Essentially, you get both the capacity and the performance your VDI workloads need, for a fraction of what you would pay for traditional disk-array storage.

Gardner: We're only a few weeks from VMworld 2014 in San Francisco, and I know there's going to be a lot of interest in mobile and in desktop infrastructure for virtualized desktops and applications.

Do you think that we can make some sort of a determination about 2014? Maybe this is the year that we turn the corner on VDI, and that it becomes a bigger driver of some of these higher efficiencies. Any closing thoughts on the vision for the software-defined data center and VDI, and the timing with VMworld, Alberto?

Last barrier

Farronato: Certainly, one of the goals we set for this Virtual SAN release was solving the VDI use case, eliminating probably the last barrier and enabling broader adoption of VDI across the enterprise, and we hope that will materialize. We're very excited about what the early findings show.

With respect to VMworld and some of the other things we'll be talking about at the conference around storage, we'll continue to explain our vision of software-defined storage, talk about the Virtual SAN momentum, and cover some of the key initiatives we are rolling out with our OEM partners, such as Virtual SAN Ready Nodes.

We're going to talk about how we will extend the concept of policy management and dynamic composition of storage services to external storage, with a technology called Virtual Volumes.

There are many other things, and it's gearing up to be a very exciting VMworld conference for storage-related topics.


Gardner: Last word to you, Christos. Do you have any thoughts about why 2014 is such a pivotal time in the software-defined storage evolution?

Karamanolis: I think this is the year when the vision that we've been talking about, we and the industry at large, is going to become real in the eyes of some of the bigger, more conservative enterprise IT organizations.

With Virtual SAN from VMware, we're going to make a very strong case at VMworld that this is a real enterprise-class storage system that's applicable across a very wide range of use cases and customers.

With actual customers using the product in the field, I believe it is going to be strong evidence for the rest of the industry that software-defined storage is real, that it solves real-world problems, and that it is here to stay.

By opening up to third parties, through the Virtual Volumes technology that Alberto mentioned, some of the management APIs that Virtual SAN uses within VMware products, we'll also be initiating an industry-wide push to provide software-defined storage solutions beyond just VMware and the early adopters, mostly startups so far, that have embraced this model. It's going to become a key industry direction.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.

You may also be interested in: