Monday, August 4, 2014

A gift that keeps giving, software-defined storage now showing IT architecture-wide benefits

The next BriefingsDirect deep-dive discussion explores how one of the most costly and complex parts of any enterprise's IT infrastructure -- storage -- is being dramatically improved by the accelerating adoption of software-defined storage (SDS).

The ability to choose low-cost hardware, to manage across different types of storage, and to radically simplify data storage via intelligent automation amounts to a virtual rewriting of the economics of data.

But even as IT leaders tackle the storage pain points of scalability, availability, agility, and cost, software-defined storage is also providing significant strategic- and architectural-level benefits.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

We're joined by two executives from VMware to unpack these efficiencies and examine the broad innovation behind the rush to exploit software-defined storage: Alberto Farronato, Director of Product Marketing for Cloud Infrastructure Storage and Availability at VMware, and Christos Karamanolis, Chief Architect and a Principal Engineer in the Storage and Availability Engineering Organization at VMware. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Software-defined storage is changing something more fundamental than just the economics of data. How do you see the wider implications of what’s happening now that software-defined storage is becoming more common?

Farronato: Software-defined storage is certainly about addressing the cost issue of storage, but more importantly, as you said, it’s also about operations. In fact, the overarching goal that VMware has is to bring to storage the efficient operational model that we brought to compute with server virtualization. So we have a set of initiatives around improving storage on all levels, and building a parallel evolution of storage to what we did with compute. We're very excited about what’s coming.

Gardner: Christos, one of my favorite sayings is that "architecture is IT destiny." How do you see software-defined storage at that architectural level? How does it change the game?

Concept of flexibility

Karamanolis: The fundamental architectural principle behind software-defined storage is the concept of flexibility. It's the idea of being able to adapt to different hardware resources, whether those are magnetic disks, flash storage, or other types of non-volatile memories in the future.

How does the end user adapt their storage platform to the needs they have in terms of the capabilities of the hardware, the ratios of the different types of storage, the networking, and the CPU and memory resources needed to execute and deliver their services?

That’s one part of flexibility, but there is another very interesting part, which is a very acute problem for VMware customers today: the operational complexity of provisioning storage for applications and virtual machines (VMs), which have become the common way of packaging applications.

Today, customers virtualize environments, but in general they still have to provision physical storage containers. They have to anticipate their usage over time and make an investment up front in resources that they'll need over a long period. So they create logical unit numbers (LUNs), file shares, or whatever is needed, for a period that spans anything from weeks to years.

Software-defined storage advocates a new model, where applications and VMs are provisioned at the time that the user needs them. The storage resources that they need are provisioned on-demand, exactly for what the application and the user needs -- nothing more or less.

The idea is that you do this in a way that is really intuitive to the end user, in a way that reflects the abstractions the user understands -- applications, the data containers that the applications need, and the characteristics of the application workloads.


So those two aspects of flexibility are fundamental to any software-defined storage.
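To make the policy-driven, on-demand model concrete, here is a minimal sketch in Python. The policy fields and the provision_vm() call are hypothetical illustrations of the idea, not an actual VMware interface:

```python
# Minimal sketch of policy-driven, on-demand provisioning.
# StoragePolicy and provision_vm() are hypothetical illustrations,
# not an actual VMware API.
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    """Declarative requirements; no LUNs or RAID levels in sight."""
    failures_to_tolerate: int  # availability requirement
    read_cache_pct: int        # performance hint
    thin_provisioned: bool     # capacity behavior

def provision_vm(name: str, policy: StoragePolicy) -> None:
    # The platform, not the administrator, decides which disks, hosts,
    # and replicas satisfy the policy at the moment of provisioning.
    print(f"Provisioning {name} to satisfy {policy}")

# Storage is allocated when the VM is created, not weeks in advance.
provision_vm("web-01", StoragePolicy(failures_to_tolerate=1,
                                     read_cache_pct=10,
                                     thin_provisioned=True))
```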

Gardner: As we see this increased agility, flexibility, the on-demand nature of virtualization now coupled with software-defined storage, how are organizations benefiting at a business level?

Farronato: There are several benefits and several outcomes of adopting software-defined storage. The first that I would call out is the ability to be much more responsive to the business needs -- and the changing business needs -- by delivering what your applications need, faster.

As Christos was saying, in the old model you had to guess ahead of time what the applications would need, spend a lot of time trying to preconfigure and predetermine the various service levels -- performance, availability, and other things that would be required of your storage by your applications -- and so spend a lot of time setting things up, and then hopefully, down the line, consume it the way you thought you would.

Difficult change management

In many cases, this causes long provisioning cycles. It causes difficult change management after you provision the application. You find that you need to change things around, because either the business needs have changed or what you guessed was wrong. For example, customers have to face constant data migration.

With the policy-driven approach that Christos has just described -- with the ability to create these storage services on the fly through a policy -- you don’t have to do all that pre-provisioning and preconfiguring. As you create the VMs and specify the requirements, the system responds accordingly. When you have to change things, you just modify the policy, and everything in the underlying infrastructure changes accordingly.
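As a sketch of that workflow -- again with a hypothetical API, not an actual product interface -- the policy is the only thing the administrator edits, and a reconciliation step brings the infrastructure into line:

```python
# Hypothetical sketch, not a product API: the administrator edits only
# the policy, and a reconciliation pass makes the infrastructure match.
policy = {"vm": "web-01",
          "availability": "tolerate-1-failure",
          "iops_limit": 2000}

def reconcile(p: dict) -> None:
    # A real SDS platform would re-place data, adjust caching, and
    # report compliance here -- with no manual LUN or RAID changes.
    print(f"{p['vm']}: enforcing {p['availability']}, "
          f"IOPS limit {p['iops_limit']}")

reconcile(policy)              # initial provisioning
policy["iops_limit"] = 5000    # the business need changed
reconcile(policy)              # the infrastructure follows the policy
```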

Responsiveness, in my opinion, is the one biggest benefit that IT will deliver to the business by shifting to software-defined storage. There are many others, but I want to focus on the most important one.

Gardner: Can you explain what happens when software-defined storage becomes strategic at the applications level, perhaps with implications across the entire data lifecycle?

Karamanolis: One thing we already see, not only among VMware customers but as a more general trend, is that infrastructure administrators -- the people who do the heavy lifting in the data centers day in and day out, and who manage much more than the traditional servers and applications -- are getting more and more into managing networks and data storage.

Find SDS technical insights and best practices on the VSAN storage blog.

Talking about changing models here, what we see is that tools have to be developed for those administrators to manage all the resources they need to do their day-to-day jobs, and software-defined storage is a key technology evolution behind that.

Here, software-defined storage is playing a key role. With technology like Virtual SAN, we make the management of storage feasible for people who are not necessarily experts in the esoterica of a certain vendor's hardware. It allows more IT professionals to specify the requirements of their applications.

Then, the software storage platform can apply those requirements on the fly to provision, configure, and dynamically monitor and enforce compliance with the policies and requirements specified for the applications. This is a major shift we see in the IT industry today, and it’s going to be accelerated by technologies like Virtual SAN.

Gardner: When you go to software-defined storage, you can get to policy level, automation, and intelligence when it comes to how you're executing on storage. How does software-defined storage simplify storage overall?

Distributed platform

Karamanolis: That's an interesting point, because if you think about this superficially, we're now going from a single, monolithic storage entity to a storage platform that is distributed, controlled by software, and can span tens or sometimes hundreds of physical nodes and/or entities. Isn’t complexity worse in the latter case?

The reality is that, whether because of necessity or because we've learned a lot over the last 10 to 15 years about how to manage and control large distributed systems, there has been a parallel evolution in the ideas of how you manage your infrastructure, including the management of storage.

As we alluded to already, the fundamental model here is that the end user, the IT professional who manages this infrastructure, expresses in a descriptive way what they need for their applications in terms of CPU, memory, networking and, in our case, storage.

What do I mean by descriptive? The IT professional does not need to understand all the internal details of the technologies or the hardware used at any point in time, and which may evolve over a period of time.

Instead, they express at a high level a set of requirements -- we call them policies -- that capture the requirements of the application. For example, in the case of storage, they specify the level of availability that is required for certain applications and performance goals, and they can also specify things like the data protection policies for certain data sets.


Of course, for all those things, nothing comes for free. So the user has to be exposed to the consequences of the policy that they choose. There is a cost there for every one of those services.

But the key point is that the software platform automatically configures the appropriate resources, whether that means mirroring data across multiple physical devices over the network, or replicating data asynchronously to a remote location in order to comply with certain disaster recovery (DR) policies.

All those things are done by the software, without the user having to worry about whether the storage underneath is highly available storage, in which case only two copies of the data may be needed, or low-end hardware that would require three or four copies of the data. All those things are determined automatically by the platform.
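A toy illustration of that determination, with made-up placement logic rather than Virtual SAN's actual algorithm, might look like this:

```python
# Toy placement logic, illustrative only -- not Virtual SAN's actual
# algorithm. The point: one availability policy can translate into
# different replica counts depending on the underlying hardware.
def copies_required(failures_to_tolerate: int, hardware_tier: str) -> int:
    # On resilient storage, tolerating N failures needs N + 1 copies.
    copies = failures_to_tolerate + 1
    # On low-end hardware the platform might keep an extra copy to
    # compensate for a higher individual failure rate (an assumption).
    if hardware_tier == "low-end":
        copies += 1
    return copies

print(copies_required(1, "enterprise"))  # 2 copies
print(copies_required(1, "low-end"))     # 3 copies
print(copies_required(2, "low-end"))     # 4 copies
```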

This is the new model. Perhaps I'm oversimplifying some of these problems, but the idea is that the user should really not have to know the specific hardware configurations of a disk array. If the requirements cannot be met, it is because the necessary technologies are not incorporated into the storage platform.

Policy driven

Farronato: Virtual SAN is a completely policy-driven product, and we call it VM-centric or application-centric. The whole management paradigm for storage, when you use Virtual SAN, is predicated on the VM and the policies that you create and assign to your VMs as you create them and scale your environment.

One of the great things that you can achieve with Virtual SAN is providing differentiated service levels to individual VMs from a single data store. In the past, you had to create individual LUNs or volumes, assign data services like replication or RAID levels to each individual volume, and then map the application to them.

With Virtual SAN, you're simply going to have a capacity container that happens to be distributed across a number of nodes in your cluster -- and everything that happens from that point on is just dropping your VMs into this container. It automatically instantiates all the data services by virtue of having built-in intelligence that interprets the requirements of the policy.

That makes this system extremely simple and intuitive to use. In fact, one of the core design objectives of Virtual SAN is simplicity. If you look at a short description of the system -- radically simple, hypervisor-converged storage -- it means bringing that idea of eliminating the complexity of storage to the next level.

Gardner: We've talked about simplicity, policy driven, automation, and optimization. It seems to me that those add up very quickly to a fit-for-purpose approach to storage, so that we are not under-provisioning or over-provisioning, and that can lead to significant cost-savings.

So let’s translate this back to economics. Alberto, do you have any thoughts on how we lower total cost of ownership (TCO) through these SDS approaches of simplicity, optimization, policy driven, and intelligence?


Farronato: There are always two sides of the equation. There is a CAPEX and an OPEX component. Looking at how a product like Virtual SAN reduces CAPEX, there are several ways, but I can mention a couple of key components or drivers.

First, I'd call out the fact that it is an x86 server-based storage area network (SAN). It leverages server-side components to deliver shared storage. By virtue of using server-side resources, right off the bat there are significant savings that you can achieve through lower-cost hardware components. The same hard drive or solid-state drive (SSD) that you would deploy in a shared external storage array can be on the order of 80 percent cheaper.

The other aspect that I would call out that reduces the overall CAPEX is more along the lines of this consume-on-demand approach, as you said, or, as we put it in many other terms, grow-as-you-go. With a scale-out model, you can start with a small deployment and a small upfront investment.

You can then progressively scale out as your environment grows, with much finer granularity than you would get with a monolithic array. And as you scale, you scale not just compute but also IOPS, and that often goes hand in hand with the number of VMs that you're running in your cluster.

System growth
 
So the system grows with the size of your environment, rather than requiring you to buy a lot of resources upfront that many times remain under-utilized for a long time.
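A back-of-envelope sketch makes both CAPEX effects concrete. Every number below is an illustrative assumption, not a quoted price:

```python
# Back-of-envelope CAPEX sketch; every figure is an assumption.
array_ssd_cost = 2000.0    # $ per drive bought inside a disk array
server_ssd_cost = 400.0    # same drive bought as a server component
print(f"Per-drive saving: {1 - server_ssd_cost / array_ssd_cost:.0%}")

# Grow-as-you-go versus sizing a monolithic array up front.
vms_year_one, vms_year_three = 100, 400
vms_per_node, node_cost = 50, 20_000.0

upfront_array = vms_year_three / vms_per_node * node_cost  # buy for year 3 now
scale_out_day_one = vms_year_one / vms_per_node * node_cost
print(f"Monolithic, sized up front: ${upfront_array:,.0f}")
print(f"Scale-out, day one only:    ${scale_out_day_one:,.0f}")
```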

On the OPEX side, when things become simpler, it means that overall administration productivity increases. So we expect a trend where individual administrators will be able to manage a greater amount of capacity, and to do so in conjunction with management of the virtual infrastructure to achieve additional benefits.

Gardner: Virtual SAN has been in general availability now for several months, since March 2014, after being announced last year at VMworld 2013. Now that it’s in place and growing in the market, are there any unintended benefits or unintended consequences from that total-cost perspective in real-world day-in, day-out operations?

I'm looking for ways in which a typical organization is seeing software-defined storage benefiting them culturally and organizationally in terms of skills, labor, and that sort of softer metric.

Karamanolis: That’s a very interesting point. As technologists, we sometimes tend to overlook the cultural shifts that technology causes in the field. In the case of Virtual SAN, we see a lot of what one customer described as being empowered to manage their own storage, within the vertical they control in their IT organization, without having to depend on the centralized storage organization in the company.


What we really see here is a shift in paradigm about how our customers use Virtual SAN today to enable them to have a much faster turnaround for trying new applications, new workloads, and getting them from test and dev into production without having to be constrained by the processes and the timelines that are imposed by a central storage IT organization.

This is a major achievement, and a major tool for VMware administrators in the field, which we believe is going to lead the way to a much wider adoption of Virtual SAN and software-defined storage in general.

Gardner: How does this simplification and automation have a governance, risk, and compliance (GRC) benefit?

Farronato: With this approach you have a more granular way to control the service levels that you deliver to your internal customers, and a more efficient way to do it -- by standardizing through policies rather than trying to standardize service levels over a category of hardware.

Self-service consumption

You can more easily keep track of what each individual application is receiving and whether it’s in compliance with the particular policy that you specified. You can also now enable self-service consumption more easily and effectively.

We have, as part of our Policy-Based Management Engine, APIs that allow for integration with cloud automation frameworks, such as vCloud Automation Center or OpenStack, where end users will be able to consume a predefined category of service.

It will speed up the provisioning process while, at the same time, enabling IT to maintain the control and visibility that admins want over how resources are consumed and allocated.
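From the consuming side, such an integration might look roughly like the following sketch. The endpoint, payload fields, and "gold" tier are hypothetical placeholders, not the actual Policy-Based Management or vCloud Automation Center API:

```python
# Hypothetical self-service request to a cloud automation framework.
# The URL, fields, and "gold" tier are placeholders, not a real API.
import json
import urllib.request

request_body = {
    "catalogItem": "three-tier-web-app",
    "storagePolicy": "gold",       # predefined category of service
    "requestedBy": "app-team-42",
}

req = urllib.request.Request(
    "https://automation.example.com/api/requests",  # placeholder URL
    data=json.dumps(request_body).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req) would submit the request; IT keeps
# control because every request is validated against the policy.
print(json.dumps(request_body, indent=2))
```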

Gardner: I suppose there are as many on-ramps to the software-defined data center as there are enterprises. So it's interesting that it can be done at that custom level, based on actual implementation, but also follow a strategic vision or a strategic architectural direction. It's future-proof as well as supporting legacy.

How about some examples? Do we have either use-case scenarios or an actual organization that we can look to and say that they have deployed VSAN, they have benefited in certain ways, and they are indicative of what others should expect?

Farronato: Let me give you some statistics and some interesting facts. We can look at some of the early examples where, in the three months since the product became available, we've found significant success already in the marketplace, with a great start in terms of adoption from our customers.


We already have more than 300 paying customers in just one quarter. That follows the great success of the public beta that ran through the fall and the early winter with several thousand customers testing and taking a look at the product. 

We are finding that virtual desktop infrastructure (VDI) is the most popular use case for Virtual SAN right now. There are a number of reasons why Virtual SAN fits this model, from its scale-out design to the fact that the hyper-converged storage architecture is particularly suitable for addressing the storage issues of a VDI deployment.

DevOps -- or, if you want, preproduction environments, loosely defined as test and dev -- is another area. There are disaster recovery targets in combination with vSphere Replication and Site Recovery Manager. And some of the more aggressive customers are also starting to deploy it in production use cases.

As I said, the 300 customers that we already have span the gamut in terms of size and names -- from large enterprises and banking down to smaller accounts and companies, including education and smaller SMBs.

There are a couple of interesting cases that we'll be showcasing at VMworld 2014 in late August. If you look at the session list, they're already available as actual use cases presented by our customers themselves.

Adobe will be talking about their massive implementation of Virtual SAN in their production environment, on their data analytics platform. There will be another interesting use case with TeleTech, talking about how they have leveraged Cisco UCS to progress their VDI deployments.

VDI equation

Gardner: I'd like to revisit the VDI equation for a moment, because one of the things that’s held people up is the impact on storage, and the costs associated with the storage to support VDI. But in some cases you're able to bring down those costs by 50 percent using software-defined storage. That radically changes the VDI equation. Isn’t that the case, Christos -- that you can now do VDI cheaper than almost any other approach to a virtualized desktop?

Karamanolis: Absolutely. The cost of storage is the main impediment for organizations implementing a VDI strategy. With Virtual SAN, as Alberto mentioned earlier, we provide a very compelling cost proposition, both in terms of the capacity of the storage and the performance you gain out of the storage.

Alberto already touched on the cost of capacity, referring to the difference in prices one can get from server vendors and the market, as opposed to similar hardware being procured as part of a traditional disk array.

I'd like to touch on something that is an unsung hero of Virtual SAN and of VDI deployment especially, and that's performance. Virtual SAN, as should be clear by now, is a storage platform that is strongly integrated with our hypervisor. Specifically, the data path implementation and the distributed protocols that are implemented in Virtual SAN are part of the ESXi kernel.

Because of that, we can achieve very high performance goals while minimizing the CPU cycles consumed to serve those high I/Os per second. What that means, especially for VDI, is that we use only a small slice of the CPU and memory of every ESXi host to implement this distributed, software-driven storage controller.


It doesn't affect the VMs that run on the same ESXi host. We have already published extensive and detailed performance evaluations, where we compare VDI deployments on Virtual SAN versus using an external disk array.

And even though Virtual SAN usage is capped at 10 percent of local CPU and memory on those hosts, the consolidation ratio -- the number of virtual desktops we run on those clusters -- is virtually unaffected, while we get the full performance that would be realized with an external, all-flash disk array. So this is the value of Virtual SAN in those environments.

Essentially, you get both the capacity and the performance your VDI workloads need, for a fraction of the cost you would pay with traditional disk array storage.

Gardner: We're only a few weeks from VMworld 2014 in San Francisco, and I know there's going to be a lot of interest in mobile and in desktop infrastructure for virtualized desktops and applications.

Do you think that we can make some sort of a determination about 2014? Maybe this is the year that we turn the corner on VDI, and that becomes a bigger driver of some of these higher efficiencies. Any closing thoughts on the vision for the software-defined data center and VDI, and the timing with VMworld, Alberto?

Last barrier

Farronato: Certainly, one of the goals that we set for this Virtual SAN release was solving the VDI use case -- eliminating probably the last barrier and enabling a broader adoption of VDI across the enterprise -- and we hope that will materialize. We're very excited about what the early findings show.

With respect to VMworld and some of the other things that we'll be talking about at the conference regarding storage, we'll continue to explain our vision of software-defined storage, talk about the Virtual SAN momentum, and cover some of the key initiatives that we are rolling out with our OEM partners, such as Virtual SAN Ready Nodes.

We're going to talk about how we will extend the concept of policy management and dynamic composition of storage services to external storage, with a technology called Virtual Volumes.

There are many other things, and it's gearing up to be a very exciting VMworld Conference for storage-related issues.


Gardner: Last word to you, Christos. Do you have any thoughts about why 2014 is such a pivotal time in the software-defined storage evolution?

Karamanolis: I think that this is the year where the vision that we've been talking about, us and the industry at large, is going to become real in the eyes of some of the bigger, more conservative enterprise IT organizations.

With Virtual SAN from VMware, we're going to make a very strong case at VMworld that this is a real enterprise-class storage system that's applicable across a very wide range of use cases and customers.

With actual customers using the product in the field, I believe that it is going to be strong evidence for the rest of the industry that software-defined storage is real, it is solving real-world problems, and it is here to stay.

Together with opening up some of the management APIs that Virtual SAN uses in VMware products to third parties through the Virtual Volumes technology that Alberto mentioned, we'll also be initiating an industry-wide effort to provide software-defined storage solutions beyond just VMware and the early companies, mostly startups so far, that have been adopting this model. It’s going to become a key industry direction.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.


Thursday, July 31, 2014

Advanced cloud service automation eases application delivery for global service provider NNIT

As a provider of both application development management and infrastructure outsourcing, Denmark-based NNIT needed a better way to track, manage and govern the more than 10,000 services across its global data centers.

Beginning in 2010, the journey to better overall services automation paved the way to far stronger cloud services delivery, too. NNIT uses HP Cloud Service Automation (CSA) to improve their deployment of IT applications and data, and to provide higher overall service delivery speed and efficiency.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about how services standardization leads to improved cloud automation, BriefingsDirect spoke with Jesper Bagh, IT Architect and cloud expert at NNIT, based in Copenhagen. The discussion, at the HP Discover conference in Barcelona, is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about your company and what you do. Then, we’ll get into some of the services delivery problems and solutions that you've been tasked with resolving.

Bagh: NNIT is a service provider located in Denmark. We have offices around the world -- in China, the Philippines, the Czech Republic, and the United States. We have 2,200 employees globally, and we're a subsidiary of Novo Nordisk, the pharmaceutical company.

My responsibility is to ensure that the company's business goals can be delivered through functional requirements, and to turn those functional requirements into projects that can be delivered by the organization.

We’re a wall-to-wall, full-service provider. So we provide both application development management and infrastructure outsourcing. Cloud is just one aspect that we’re delivering services on. We started off by doing service-portfolio management and cataloging of our services, trying to standardize the services that we have on the shelf ready for our customers.

That allowed us to then put offerings into a cloud, and to show the process benefits of standardizing services, doing cloud well, and focusing on dedicated customers. We still have customers using our facility management who are not able to leverage cloud services because of compliance or regulatory demands.

We have more than 10,000 services in our data centers. We’re trying now to broaden the capabilities of cloud delivery to the rest of the infrastructure so that we get a more competitive edge. We’re able to deliver better quality, and the end users -- at the end of the day -- get their services faster.

Full suite

We embarked on CSA together with HP back in 2010. At the time, CSA consisted of many different software applications; it wasn't really a complete product. Now, it’s a full suite of software.

It has helped us to show to our internal groups -- and our customers -- that we have services in the cloud. For us it has been a tremendous journey to show that you can deliver these services fully automatically, and by running them well, we can gain great efficiency.

Gardner: How has this benefited your speed-to-value when it comes to new applications?

Bagh: The adoption of automation is an ongoing journey. I imagine other companies have also had the opportunity of adopting a new breed of software, and a new life in automation and orchestration. What we see is that the traditional operations divisions now suddenly have developers trying to comprehend what they mean, and trying to work together with them to deliver operations automatically.

Back in the good old days, developers were in one silo, and operations were in another silo. Now, we see a mix of resources -- both in operations and in development. So the organizational change management derived from automation projects is key. We started up, when we did service cataloging and service portfolio management, by doing organizational change to see if this could fit into our vision.

Gardner:  Now, a lot of people these days like to measure things. It’s a very data-driven era. Have you been able to develop any metrics of how your service automation and cloud-infrastructure developments have shown results, whether it’s productivity benefits or speeds and feeds? Have you measured this as a time-to-value or a time-to-delivery benefit? What have you come up with?

Value-add

Bagh: As part of the cloud project, we did two things. We did infrastructure as a service (IaaS), but we also did a value-add on IaaS: we were able to deliver qualified, fully compliant IaaS to the life-science industry. In the traditional infrastructure, it would have taken us weeks or months to deliver servers because of all the process work involved. With CSA and the GxP Cloud, we were able to deliver the same server within a matter of hours. So that’s a measurable efficiency that is highly recognized.

Gardner:  For other organizations that are also grappling with these issues and trying to go over organization and silo boundaries for improvement in collaboration, do you have any words of advice? Now that you've been doing this for some time and at that key architect level, which I think is really important, what thoughts do you have that you could share with others, lessons learned perhaps?

Bagh: The lesson learned is that having senior management focus on the entire process is key. Getting the organization to recognize the change is a matter of change management. So communication is key. Standardization before automation is key.

You need to start out by standardizing your services, doing the real architectural work, identifying which components you have and which you don't have, and matching them up. It’s like assembling all the Lego blocks in order to build the house. That’s key. The parallel that I always use is that there is nothing different for me as an architect than for an architect building a house.
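As a sketch of what standardization before automation can produce -- the schema here is illustrative, not NNIT's or HP CSA's actual catalog format -- a standardized service entry gives automation a uniform shape to act on:

```python
# Illustrative service-catalog entry: standardize the "Lego blocks"
# first so automation has a uniform shape to act on. The schema is
# an assumption, not NNIT's or HP CSA's actual format.
catalog_entry = {
    "service": "managed-linux-server",
    "version": "1.2",
    "components": ["vm-medium", "os-rhel6", "backup-daily", "monitoring"],
    "compliance": ["gxp"],      # e.g., life-science qualification
    "delivery_time": "hours",   # versus weeks for a bespoke build
}

def validate(entry: dict) -> bool:
    # Automation can be trusted only if every entry has the same shape.
    required = {"service", "version", "components", "compliance"}
    return required.issubset(entry)

print(validate(catalog_entry))  # True -> safe to automate
```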

Gardner:  Looking to the future, are there other aspects of service delivery, perhaps ways in which you could gather insights into what's happening across your infrastructure and the results, that end users are seeing through the applications? Do you have any thoughts about where the next steps might be?

Bagh: The next step for us is to be more transparent to our customers. The vision is that we can now deliver services fully automatically and run them semi-automatically. Things will still do funny stuff from time to time that you need to keep your eyes on. But in order for us to show the value, we need to report on it.

The next step for us is to be more proactive than reactive in our monitoring and reporting capabilities, because we want to be more transparent to our customers. We have a policy called Open and Honest Value-Adding. From that, we want to show our customers that if we can deliver a service fully automatically and standardized, they know what they get because they see it in a catalog. Then, we should be able to report on it live for the users.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Wednesday, July 30, 2014

More than just an IT shift, cloud fuels the new engine of business innovation, says Oxford Economics survey

Over the past five years, the impetus for cloud adoption has been primarily about advancing the IT infrastructure-as-a-service (IaaS) fabric or utility model, and increasingly seeking both applications and discrete IT workload support services from Internet-based providers.

But as adoption of these models has unfolded, it's become clear that the impacts and implications of cloud commerce are much broader and much more of a benefit to the business as a whole as an innovation engine, even across whole industries.

Recent research shows us that business leaders are now eager to move beyond cost and efficiency gains from cloud to reap far greater rewards, to in essence rewrite the rules of commerce.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Our latest BriefingsDirect discussion therefore explores the expanding impact that cloud computing is having as a strategic business revolution -- and not just as an IT efficiency shift. Join a panel of experts and practitioners of cloud to unpack how modern enterprises have a unique opportunity to gain powerful new means to greater business outcomes.

Our panelists are: Ed Cone, the Managing Editor of Thought Leadership at Oxford Economics; Ralf Steinbach, Director of Global Software Architecture at Groupe Danone, the French food multinational based in Paris; Bryan Acker, Culture Change Ambassador for the TELUS Transformation Office at TELUS, the Canadian telecommunications firm; and Tim Minahan, Chief Marketing Officer for SAP Cloud and Line of Business Solutions. The panel is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What has the research at Oxford Economics been telling you about how cloud is reshaping businesses?

Cone: We did a survey for SAP last year, and that became the basis for this program. We went out to 200 executives around the world and asked them, "What are you doing in the cloud? Are you still looking at it for just process speed, efficiency, and cost cutting?"

The numbers that came back were really strong in terms of actually being a part of the business function. Beyond those basics, cloud is very much part of the daily reality of companies today.

We saw that the leading expectation for cloud to deliver significant improvement was in productivity, innovation, and revenue generation. So obviously process, speed, efficiency, and cost cutting are still very important to business, but people are looking to cloud for new lines of business, entering new markets, and developing new products.

In this program, what we did was take that information and go out to executives for live interviews to dive deep into how cloud has become the new engine of business, how these expectations are being met at companies around the world.

Gardner: Are businesses doing this intentionally, or are they basically being forced by what's happening around them?

Minahan: Increasingly, as was just indicated, businesses are moving beyond the IT efficiencies and the total cost of ownership (TCO) benefits of the cloud, and the cloud certainly offers benefits in those areas.

But really what's driving adoption, what's moving us to this tipping point, is that now, by some estimates, 75 percent of all new investments are going into the cloud or hybrid models. Increasingly, businesses are viewing the cloud as a platform for innovation and entirely new engagement models with their customers, their employees, their suppliers and partners, and in some cases, to create entirely new business models.

Just think about what the cloud has done for our personal lives. Who would have thought, a few years ago, that Apple would be used to run your home? This is the Apple Home concept that allows you to monitor and manage all of your devices -- your air conditioning, your alarm, music, and television -- remotely through the cloud.

There are even quasi-business B2B and B2C models around crowdsourcing and crowdfunding from folks like Kickstarter, or payment offerings like Square. These are entirely new engagement models, new business models, built on the back of this emergence of cloud, mobile, and social capabilities.

Gardner: Right, and it seems that one of these benefits is that we can cross boundaries of time, space, geography, what have you, very easily, almost transparently, and that requires new thinking in order to take advantage of it.

Bryan, at TELUS, as Culture Change Ambassador, are you part of the process of helping people think differently and therefore be able to exploit what cloud enables?

Flexible work schedule

Acker: One hundred percent. It's actually a great segue, because at TELUS we have a flexible work arrangement, where we want 70 percent of our employees to be working either from home or remotely. What that means is we have to have the tools and the culture in place so that people understand they can access data and relevant information wherever they are.

It doesn't matter if they're at home, like I am today, on the road, or at a client site, they need to be able to get the information to provide the best customer experience and provide the right answer at the right time.

So by switching from some of the great tools we already offered, because collaboration is part of TELUS’s cultural DNA, we've actually been able to tear down silos we didn't even know we were creating.

We were trying to provide all the tools, but now people have an end-to-end view of every record for customers, as well as employees and the collaboration involving courses and learning opportunities. They have access to everything when they need it and they can take ownership of the customer experience or even their own career, which is fantastic for us.

Gardner: Ralf, at Danone, as Director of Global Software Architecture, you clearly have your feet on the IT path and you've seen how things have evolved. Do you see the shift to cloud as a modest evolution, or is this something that changes the game?

Steinbach: We've been looking at cloud for quite some time now. We've started several projects in the cloud, mainly in two areas. One involves the supporting functions of our business, such as HR, travel expenses, and mail. There, we see a huge advantage in using standardized services in the cloud.

In these functions we do not need any specifics. The cloud comes standard, and you cannot change it as you can with on-premises SAP systems. You can't adapt the code. But that is one area where we think there's value in using cloud applications.

The other area where we really see the cloud as valuable is in our digital marketing initiatives. There, we really need the flexibility of the cloud. Digital marketing is changing every day. There's a lot of innovation, and the cloud gives us flexibility in terms of the resources that we need to support that. And the innovation cycles of our providers are much faster than they would be on premises. These are the two main areas where we use the cloud today.

Cone: Ralf, it was interesting to me, when I was reading through the transcript of your interview and working on the case studies we did, that it is even changing business models. It's allowing Danone to go straight to the consumer, where previously your customer had been the retailer. Cloud in new geographic markets is letting you reach straight to the end user, the end buyer.

Digital marketing

Steinbach: That's what I meant when I talked about digital marketing. Today, all consumer goods companies like Danone are looking at connecting to their consumers, and not to the retailers as in the past. We're really focusing on the end consumer, and the cloud offers us new possibilities to do that, whether it is via mobile applications or websites and so on.

One thing that's important is the flexibility of the systems, because we don't know how many consumers we'll address. It could be a few, but it could be over a million. So we need to have a flexible architecture, and on-premises we could not manage that.

Gardner: The concept of speed seems to come up more and more. We're talking about speed of innovation, agility, direct lines of communication to customers and, of course, also supply-chain direct communication speed as well. How prominent did you see speed and the need for speed in business in your recent research?

Cone: Well, speed was important -- and it's speed across different dimensions. It's speed to enter a new market or it's speed to collaborate within your own company, within your own organization.

This idea of taking IT and pushing it out to the people, to the customer, and really to the line of business allows them to have intimate contact and to move quickly, but also to break down these barriers of geography.

We did a case study with another large company, Hero, which is a large maker of motorcycles and two-wheeled vehicles in India. What they're doing with cloud-enabled, customer-facing technology is moving their service operation outside of dealerships into the countryside, out across India. They go to parks and set up what they call service camps.

There, the speed element is the speed and the convenience with which you are able to get your bike serviced, and that's having a large measurable impact on their business. So it is speed, but it is speed across multiple dimensions.

New innovation

Minahan: At the core, the cloud is really all about unlocking new innovations, providing agility in the business, allowing companies to be able to adapt their processes very, very quickly, and even create entirely new engagement models, and that's what we are seeing.

It is not just the cloud, though. This convergence of cloud, big data, analytics, mobile and social, and business networks really ushers in a new paradigm for business computing, one where applications are no longer built just for enterprise compliance or to be the system of record. Instead, they're designed to engage and empower the individual user.

It's one that ushers in a new era of innovation for the business, where we can enable new engagement models with customers, employees, suppliers, and other partners.

We've heard some great examples here, but some others are very similar to the experience that Danone has seen. T-Mobile is leveraging the cloud not to replace its traditional systems of record, but to extend them with the cloud, to create a new model for social care -- helping monitor conversations about its brand and engage customer issues across multiple channels.

So not just their traditional support channels, but Twitter and Facebook, where these conversations are happening. It has empowered them to deliver what has become a phenomenal kind of "Cinderella, worst-to-first" story for customer support and satisfaction.

Now they're seeing first-time resolution rates that have gone from the low teens to greater than 94 percent. Obviously, that has a massive impact on customer satisfaction and renewals, and it is all powered not by throwing out the systems that they've used so long, but by extending them with the cloud to achieve new innovations and drive new engagement models.

Gardner: Tim, another factor here, in a sense, levels the playing field. When you move to the cloud, small-to-medium-sized businesses (SMBs) can enjoy the same benefits that you just described, for example, from T-Mobile. Are you at SAP seeing any movement in terms of the size or type of organizations that can exploit these new benefits?

Minahan: What's interesting, Dana, is that you and I have been around this industry for quite some time, and the original thought was that the cloud was the big democratizer of computing power.

It allowed SMBs to get the same level of applications and infrastructure support that their larger competitors have had for years. That's certainly true, but large enterprises have been aggressively adopting this at an equal pace with SMBs.

All sizes of companies

The cloud is being used not only to accelerate process efficiency and productivity, but to unlock innovations for companies of all sizes. Large enterprises like UPS, Deutsche Bank, and Danone are using cloud-based business applications. In the case of UPS and Deutsche Bank, they're using business networks to extend their traditional supply chain and financial systems to collaborate better with their suppliers, bankers, and other partners.

It's being used by small upstarts as well. These are companies that we've talked about in the past, like Mediafly, a mobile marketing start-up that is using dynamic discounting solutions in the cloud to get paid faster, fund development of new features, and take on new business.

There's Sage Health Solutions, a company started by two stay-at-home moms in South Africa that has grown from zero to a multi-million-dollar operation, all powered by leveraging the cloud to enable new business models.

Cone: To follow on with what Tim said about the broad gamut of usage across company sizes, and his earlier mention of mobile: what we saw in our survey is that mobile is of great importance to companies as a way of reaching their customers, and for internal productivity as well. But reaching customers is actually the higher priority, and that comes down to the old adage: you have to fish where the fish are.

Look at what Danone is doing when they're setting up direct-to-customer technologies and marketing. They're going into markets where people don't necessarily have laptops or landlines. They're leapfrogging that to a world where people have mobile devices.

So if you have mobile customers, and as Tim said, think of the consumer experience, that is how we all live our lives now. No matter what size your company is, you have to reach your customers the way your customer lives now -- and that is mobile.

Gardner: Tell us a little bit about your research, how you have gone about it, and how that new level of pervasive collaboration was demonstrated in your findings.

Baseline information

Cone: In terms of the research, as I said, we went out to 200 execs around the world and asked them a series of questions about what their investment plans were. It was baseline survey information. What are you doing in the cloud, how much of it are you doing, and what are the key benefits that you're getting?

Then, as we went deeper in this phase of the project, we found that collaboration has different meanings. It can be collaboration within the company. It can be with partners, which cloud platforms allow you to do more easily. It's also this key relationship, a key area of collaboration between IT and the business.

What we see in this research is that IT is increasingly seen as a partner for the business, as a way of driving revenue via the cloud. Across the four regions that we surveyed -- North America, Latin America, EMEA, and APAC -- we saw a very high percentage of companies say that IT is emerging as a valued partner of the business, not just a support function. I think that's a key collaborative relationship that I'm sure our guests are seeing in their own companies.

Gardner: Just to be clear, Ed, this is ongoing research. You're already back in the field and you'll be updating some of these findings soon?

Cone: Yes, we're really excited about that, Dana. We did this survey last year for SAP. Then, we jumped in about a year later using those numbers and did these in-depth research interviews to look at the use of the cloud to drive business. This summer, we're refielding the survey to see how things have changed and to see how the view of the future has changed.

We ask a lot of questions about where they are now, and where they think they'll be in three years. We're really interested to see how people are doing compared to the targets they set and what their new targets are. So we will have some fresh numbers and fresh reports to talk to you about by Q3 or Q4.

Gardner: Let us look into those actual examples now and go back to Bryan at TELUS.

Acker: I have a tangible example that might help express the value of collaboration at TELUS and something that people don't think about, and that is safety.

We have a lot of field technicians who are in remote areas but have mobile access. A perfect example is a situation where a technician may be a little unsure of what to do and it's potentially unsafe.

Because of the mobile access and the cloud, we've enabled them to quickly record a video, upload it directly to our SAP Jam system, which is our collaborative tool suite that we use, and share it with a collection of other technicians, not just the person they can call.

Safer situation

What happens then is people can say, "This is unsafe; you need to do X, Y, and Z." We can even push them required training, so they can be sure that they're making the right decision. All of a sudden, that becomes a safer situation, and the technician is not putting themselves at risk. This is really important, because people do not think of those real, tangible examples. They often feel that they're just sharing information back and forth.

But in terms of what we are doing and where we are going -- I sit in HR, and we're trying to improve the business process. We now have all of our information in the system of record, an integrated learning management system (LMS), and the ability to analyze talent, so we make the correct hires.

We now trust the information implicitly and we're able to make the correct decision, whether it means customer information, recruiting choices, hiring choices, or performance choices.

Now, we're in a situation where we're only going to maximize and try to leverage the cloud for even more innovation, because now people are singing from the same choir sheet, so to speak.

We have access to the same system of record, a single version of truth, and that's the first time we've had that. Now recruiting can talk to learning, who can talk to performance, who can talk to technicians, and we know they all get a consistent version of the truth. That is really important for us.

Gardner: Those are some excellent examples of how mobile enhances cloud. That extends the value of mobile. That brings in collaboration and, at the same time, creates data and analysis benefits that can then be fed back into that process.

So there really is a cyclical adoption value here. I'd like to go back to the cultural part of this. Bryan, how do you make sure that that adoption cycle doesn't spin out of control? Is there a lack of governance? Do you feel like you can control what goes on, or are we perhaps in the period of creative chaos that we should let spin off on its own in any way?

Acker: That’s a great question, and I'm not sure if TELUS handles this in a unique way, but we definitely had a very detailed plan. The first thing we did was have collaboration as one of our valued attributes or one of our leadership competencies. People are expected to collaborate, and their performance review is dependent on that.

What that means is we can provide tools to facilitate collaboration. It doesn't matter if you're collaborating through a phone call, through a water-cooler chat, or through technology. Our employees are expected to collaborate. They know that it’s part of their performance cycle and it’s targeted towards their achievements for the year. We trust them to do the right thing.

We actually encourage a little bit of freedom. We want to push the boundaries. Our governance is not so tight that they are afraid to comment incorrectly or afraid to ask a tough question.

Flattening the hierarchy

What we're seeing now is individual team members challenging leadership positions on specific questions, and we're having honest and frank discussions that push the organization forward and help us make the correct choice at all times, which is really encouraging. Now we're really flattening our hierarchy, and the cloud is enabling us to do that.

Gardner: That sounds like a very powerful engine of innovation, allowing that freedom, but then having it be controlled, managed, and understood at the same time. That’s amazing. Ed, do you have any reactions to what Bryan just said about how innovation is manifesting itself newly there at TELUS?

Cone: When we spoke to TELUS, I was interested in that cultural aspect of it. I'm sure the guys on the call would disagree with me on a technical level, but we like to say that technology is easy and culture is hard. The technology works: you implement it and you figure it out. But getting people to change is really difficult.

The example that we use in the case study, SAP on TELUS, was about changing culture through gamification, allowing people to learn via an online cloud-based virtual game. It was this massive effort and it engaged a huge number of employees across this large company.

It really shifted the employee culture, and that had an impact on customer service and therefore on business performance. It’s a way that the cloud is moving mountains and it’s addressing the hard thing to change, which is human behavior and attitudes.

Minahan: We talk all the time about the convergence of these different technologies -- cloud, social, and mobile. But there is also massive change going on in the workforce and in what constitutes the workforce.

Bryan talked about the leveling of the organization: doing away with the traditional hierarchical command and control, where information is isolated in the hands of a few and new, eager employees don't get access to solving some of the tough problems. All of that is being flattened and accelerated, powered by cloud and social collaboration tools.

Also, we're seeing a shift in what constitutes the workforce. One of the biggest examples is the major shift in how companies are viewing the workforce. Contingent and statement of work (SOW) workers, basically non-payroll employees, now represent a third of the typical workforce. In the next few years, this will grow to more than half.

It’s already occurring in certain industries, like pharmaceuticals, mining, retail, and oil and gas. It's changing how folks view the workforce. They're moving from a functional management of someone -- this is their job; this is what they do -- to managing pools of talent or skills that can be rapidly deployed to address a given problem or develop a new innovative product or service.

These pools of talent will include both people on your payroll and off your payroll. Tracking, managing, organizing, and engaging these pools of talent is only possible through the cloud and through mobile, where multiple parties from multiple organizations could view, access, collaborate, and share knowledge and experiences running on a shared-technology platform.

Customer is evolving

Acker: That extends quite naturally to the customer. Customers are evolving faster than almost anything else. They expect 24x7 access to support, they expect authentic responses, and they now have access to just as much information as the customer service agent.

Without mobile, you can't connect with those customers and be factual, and then you're in trouble. Your customers are going to reply in social-media channels and in public forums, and you're going to lose business and lose the trust of your existing customers as well.

Minahan: I fully agree. The only thing I'd add is that they also expect to be able to engage you through any channel, whether it's their mobile phone, their laptop, the phone, or face to face in a retail outlet, and have the same consistent experience without needing to reintroduce who they are and what their problem is as they move from channel to channel.

Gardner: Clearly we're seeing how things that just weren’t possible before the cloud are having pervasive impacts on businesses. Let’s look at a new business example, again with Danone. Ralf, tell us a little bit about how cloud has had strategic implications for you. You have many brands, many lines of business. How is cloud allowing Danone to function better as a whole?

Steinbach: We have a strategy around digital marketing and, as you know, we're operating in almost every country in the world. Even though we're a big company, locally, we're sometimes quite small. We're trying to build up new markets in emerging countries with very small investments in the beginning. There, the cloud is definitely the best option for us to start these new businesses and connect to all consumers.

Money matters, even for a big company like Danone. That’s very important for us. If you look at Africa, there are completely different business models that we need to address.

People in Africa pay with their mobile phones. Some sell yogurt on a bicycle. Women pick up some yogurt in the morning and then they sell them on the road. We need to do businesses with these people as well. Obviously, an enterprise resource planning (ERP) system isn't able to do that, but the cloud is a much better adapted platform to do this sort of business.

Gardner: The C-suite likes to look at numbers. How do we measure innovation?

Metrics lacking

Cone: We're doing some research on another program right now on that very topic for a non-SAP program. That is showing us that metrics for success on basic things like key performance indicators (KPIs) for progress of migration into the cloud are lacking at a lot of companies. Basic return on investment (ROI) numbers are lacking at a lot of companies.

We're really old school. To go back to your definition of what a business is, we think it's an organization set up to make money for shareholders and deliver value for stakeholders. By those measures, at least by dotted line, the key metric is your financial performance. Are you, as we mentioned before, entering new markets and creating new products?

So the metrics we're seeing that are cloud specific aren't universal yet. In a broader sense, as cloud becomes an everyday set of tools, the point of those tools is to make the business run better, and we are seeing a correlation between effective use of the cloud and business performance.

Minahan: What cloud, mobile, and social bring to bear, in addition to new collaboration models, is that they kick off an unbelievable amount of new information, oftentimes unstructured. There's a need to aggregate that information and analyze it in new ways to detect patterns and build propensity models of your customers, your supply chain, and your employees. That is extremely powerful.

I think we've just scratched the surface. As an industry, we've provided the channels through which to collaborate, as we heard today. There are entirely new engagement models and business models that companies hadn't even thought of before. Once you have that information, that connectivity, and that collaboration, you can begin to investigate and experiment through trial and error.

To answer your question about measurement, yes, we need measurement of the business process and the business outcome. Let's not forget why companies adopt technology. It's not just for technology's sake. It's to effect change: more efficiency, greater productivity, and new engagement capabilities.

Measuring the business benefit is what we're seeing and what we're advising our customers to do, rather than just tracking whether we're adopting more cloud into our infrastructure portfolios.

The focus today is largely driven by the fact that the lines of business are now more engaged in the buying decision and in shaping what they want from a technology standpoint to help them enable their business process. So the metrics have shifted from one of speeds and feeds and users to one of business outcomes.

Gardner: Bryan at TELUS in Toronto, you're closely associated with human resources, productivity, and the softer metrics of employee involvement and dedication, that sort of thing. Can you think of ways that cloud adoption and innovation, as we've been describing them, have had unintended consequences for employee empowerment or that innovation equation? How do you view measuring the success of cloud adoption?

Simplifying the process

Acker: We measure our customers' success by their likelihood to recommend. Will a TELUS customer recommend our services and products to friends, family, and peers?

We measure internal success by our employee engagement metric. If customers are satisfied and employees are engaged and fulfilled at work, we're probably moving in the right direction. We can reverse-engineer to see which changes are helping us. That allows us to take information and innovation from the cloud and inspire better behaviors and better processes.
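For readers who want to make that metric concrete: a "likelihood to recommend" score is commonly computed the Net Promoter way, from 0-10 survey answers. TELUS doesn't describe its exact formula here, so the cutoffs below are the standard NPS convention, not its actual method; a minimal sketch in Python:

```python
# Hypothetical sketch: a "likelihood to recommend" score computed the
# standard Net Promoter way. The 0-10 scale and the promoter/detractor
# cutoffs are NPS conventions, not TELUS's published methodology.

def likelihood_to_recommend(scores):
    """scores: iterable of 0-10 answers to 'Would you recommend our
    services and products to friends, family, and peers?'"""
    scores = list(scores)
    if not scores:
        return 0.0
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    # score = %promoters minus %detractors, on a -100..100 scale
    return 100.0 * (promoters - detractors) / len(scores)

print(round(likelihood_to_recommend([10, 9, 8, 7, 6, 10, 9]), 1))  # 42.9
```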

We can say, "You know what, in this pocket, our customers' likelihood to recommend is higher than anywhere else in Canada. What are they doing?" We can look back through the information shared in the cloud and see the great customer success stories or the great team building that's driving engagement through the roof.

We can say, "This is the process we have to replicate and spread throughout all of our centers." Then we can tweak it for cultural specifics. Because of that, we can use the cloud to inspire better behavior, not just report that we had 40,000 users and 2,000 hits on a blog post. We're really trying to get away from the quantitative and into the qualitative to drive change throughout the organization.

Gardner: What comes next? Where do you see the impacts of cloud adoption in your business over the next couple of years?

Steinbach: There are still some challenges in front of us. One of the challenges is China. China is one of the biggest markets, but cloud services are not always available or they're very slow. If your cloud solution is hosted outside of China, there's a big problem. These are probably technical challenges, but we have to find solutions with our partners there, so that they can establish their services in China.

That's one of the challenges. The other is that the cloud might change the role of IT in our organization. In the past, we owned the systems and the applications. Today, the business can basically buy cloud services with a credit card. You could imagine that they won't need us anymore in the future, but that's not true.

As an IT organization, we have to find our new role inside the organization, moving from just providing solutions or hardware to being an ambassador for the business and helping them make the right decisions. Problems such as integration between different applications will remain. That doesn't get easier in the cloud, and that's where I see the challenge.

And last but not least, there's security. We take that really seriously. If we store data, whether it's our employees' or our consumers', we have to make sure that our cloud providers meet the same standards of security and that there are no leaks. That's very, very important for us. And there are legal aspects as well.

We've just started. There are still a lot of things to do in the next few years, but we're definitely continuing our strategy toward the cloud and toward mobile. And at the end of the day, it all fits together. As was said before, it's not only cloud; it's also big data, collaboration, and mobile. You have to see the whole thing as one package of opportunities.

Important challenges

Gardner: What do you think might be some of the impacts a few years from now that we're only just starting to realize?

Acker: On a more positive note, which is just the other side of the coin, obviously the challenges are there, but we're just starting to experience the fact that innovation at TELUS is moving faster than it used to. We're no longer dependent on the speed at which our pre-assigned resources can make changes and develop new products.

IT can now look at it from a more strategic point of view, which is great. Now, we're maximizing quarterly releases from systems that are leveraging the input from multiple companies around the world, not just how fast our learning team can develop something or how fast our IT team can build new functionality into our products.

We're no longer limited by the resources, and innovation is flying forward. That, for us, is the biggest unexpected gain. We're seeing all this technology that used to take months or years to change now on a quarterly release schedule. This is fantastic. Even within a year of being on our cloud-computing system, we're so happy, and that is inspiring to people. They're maximizing that and trying to push the organization forward as well. So, that’s a real big benefit.

Gardner: Tim, do you have any thoughts about where this can lead us in the next few years that we haven't yet hit upon, things you're just starting to see the first real glimmers of?

Minahan: A lot of it has been touched on here. We're seeing a massive shift in what the role of IT is, moving from one of deploying technology and integrating things to really becoming business process experts.

We talked a bit about the amount of data and the insights now available to help you better understand and predict the appetites of your customers, and even to determine when your machines might fail and when it's time to reorder or schedule a service repair.

I think the biggest thing is that the cloud is going to unlock new business models and new organization models. We talked a bit about TELUS and their work patterns, in which most of the workers are remote and how they are engaging the field service technicians in the field.

We talked about the growing contingent workforce and how the cloud is enabling folks to collaborate with, onboard, and skill up those non-payroll employees much more quickly. We're going to see new virtual enterprises: borderless enterprises that allow you to organize not just pools of talent but entire value chains, and to collaborate in a much more transparent way.

We mentioned Apple Home before. You're beginning to see it with 3D printers. It's this whole idea of more and more companies becoming digital businesses. This isn't just about omni-channel commerce providing a single customer experience across multiple channels.

It's actually about moving more and more of what you deliver, the solutions and even the formerly physical products you deliver, to digital bits that can be tested, experienced, and downloaded online.

All of this is being empowered by this massive convergence of cloud, mobility, social and business networks, and big data. 

What comes next

Cone: To follow on what Tim said about the borderless enterprise, when we asked people what's in the cloud now and what's going to be substantially cloud-based in three years, three of the highest-growth areas were innovation in R&D, supply chain, and HR. All of those go straight to this idea that boundaryless digital enterprises are emerging and that the cloud will be their underpinning.

We're working with Tim right now on a big global study about the workforce. Speaking of culture and the way companies function internally: a year ago, when we started this research, HR was the least likely function of the ones we queried to be in the cloud, and now it's set for massive growth in the next couple of years.

These stories of boundarylessness and culture start to converge, all coming together via the cloud. That's the segue to say that we're really excited to see how these numbers look when we refield this survey this summer, because that progress is snowballing and accelerating beyond even what people thought the last time we asked them.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SAP.


Wednesday, July 23, 2014

How UK data solutions developer Systems Mechanics uses HP Vertica for BI, streaming and data analysis

Three years ago, Systems Mechanics Limited used relational databases to assemble and analyze some 20 different data sources in near real time. But most relational database appliances used 1980s technical approaches, and the ability to connect more data and manage more events topped out. The runway for their business expansion had simply ended.

So Systems Mechanics looked for a platform that scales well and provides real-time data analysis, too. At the volumes and price they needed, HP Vertica has since scaled without limit ... an endless runway.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about how Systems Mechanics' products deliver business intelligence (BI), streaming analytics, and data analysis, BriefingsDirect spoke with Andy Stubley, Vice President of Sales and Marketing at Systems Mechanics, based in London. The discussion, at the HP Discover conference in Barcelona, is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: You've been doing a lot with data analysis at Systems Mechanics, and monetizing that in some very compelling ways.

Stubley: Yes, indeed. Systems Mechanics is principally a consultancy and a software developer. We've been working in the telco space for the last 10-15 years. We also have a history in retail and financial services.

Stubley
The focus we've had recently, and the products we've developed into our Zen family, are based on big data, particularly in telcos as they evolve from carrying principally old analog conversations to supporting smartphone applications, where data becomes ever more important.

All that data, and all those people connected to the network, generate many more events that need to be managed. That data is both a cost to the business and an opportunity to optimize the business, so there is a cost-reduction play and a revenue-upside play as well.

Quick example

Gardner: What’s a typical way telcos use Zen, and that analysis?

Stubley: Let's take a scenario where you're looking at a network and you can't make a phone call. Two major systems are catching that information. One is a fault-management system that's telling you there is a fault on the network, and it reports that back to the telco itself.

The second is the performance-management system. That doesn't flag faults as such, but it tells you when things like thresholds are being breached, which may have an impact on performance. Either of those can affect your customer, and from the customer's perspective, you might also be having a problem with the network that isn't reported by either system.

We're finding that social media is getting a bigger play in this space. Why? Younger customers in particular, especially on consumer mobile telcos, get onto social media when they can't get a signal or can't make a phone call, and they trash the brand.

They're making noise. So the trend is to combine fault management and performance management, which are logical partners, with social media. All of a sudden, rather than having a couple of systems, you have three.

In our world, we can put 25 or 30 different data sources onto a single Zen platform. In fact, there is no theoretical limit to the number we could handle, but 20 to 30 is quite typical now. That enables us to manage all the different network elements and different types of mobile technologies: LTE, 3G, and 2G. It could be Ericsson, Nokia, Huawei, ZTE, or Alcatel-Lucent. There is an amazing range of equipment, all currently managed through separate entities. We're offering a platform to pull it all together in one unit.

The other way I tend to look at it is that we're trying to make a telco work the way a human does. Humans are probably still the best decision-making platforms in the world. As humans, we have conscious and unconscious processes running. We don't think about breathing or pumping blood around our bodies, but it's happening all the time.

We have senses pulling in massive amounts of information from the outside world. You're listening to me now, and you're probably doing a bunch of other things at the same time, tapping away on a table as well. Information is coming into the body as you see, hear, feel, touch, and taste.

All of that information comes into the body, but most of the activity is subconscious. In the world of big data, that is the Zen goal: what we're delivering in a number of places is to make as many actions as possible in a telco network environment happen in that automatic, subconscious state.

Suppose I have a problem on a network. The system relates it back to the people who need to know, without requiring human intervention. We're looking at a position where human intervention is reserved for spotting patterns in that information and deciding what can be done intellectually to make the business better.

That speaks to another point. We pair the solution with visualization, because in the world of big data, you can't understand data as numbers. The human brain isn't capable of processing enough of them, but it is very good at identifying patterns in pictures, and that's where we go with our visualization technology.

Gather and use data

We have a customer that is one of the largest telcos in EMEA. They're taking in 90,000 alarms a day from the networks of their subsidiary companies, all into one environment. And 90,000 alarms needing manual intervention is a very big number.

Using the Zen technology, we've been able to reduce that to 10,000 alarms. We've effectively taken 90 percent of the manual processing out of that environment. Now, 10,000 is still a lot of alarms to deal with, but it's a lot less frightening than 90,000, and that's a real impact in human terms.
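To make that kind of reduction concrete, one common approach is to correlate repeated alarms from the same source into a single incident. Zen's actual rules aren't described here, so the field names and the simple same-element, same-type, 15-minute-window heuristic below are assumptions; a minimal sketch:

```python
# Hypothetical sketch: collapsing a flood of raw alarms into incidents.
# Grouping by (element, alarm type) within a time window is a common
# correlation heuristic; Zen's real rules are far richer than this.
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=15)

def correlate(alarms):
    """Collapse raw alarms into incidents. Each alarm is a dict with
    'element', 'type', and 'ts' (a datetime)."""
    open_by_key = {}
    incidents = []
    for alarm in sorted(alarms, key=lambda a: a["ts"]):
        key = (alarm["element"], alarm["type"])
        inc = open_by_key.get(key)
        if inc and alarm["ts"] - inc["last"] <= WINDOW:
            inc["count"] += 1          # fold a repeat into the open incident
            inc["last"] = alarm["ts"]
        else:
            inc = {"key": key, "first": alarm["ts"],
                   "last": alarm["ts"], "count": 1}
            open_by_key[key] = inc
            incidents.append(inc)
    return incidents

t0 = datetime(2014, 7, 1, 9, 0)
raw = [{"element": "cell-17", "type": "link down",
        "ts": t0 + timedelta(minutes=m)} for m in range(10)]
print(len(raw), "->", len(correlate(raw)))   # 10 -> 1
```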

Gardner: Now that we understand what you do, let’s get into how you do it. What’s beneath the covers in your Zen system that allows you to confidently say you can take any volume of data you want?

Stubley: Fundamentally, that comes down to the architecture we built for Zen. The first element is our data-integration layer. We have a technology that we've developed over the last 10 years specifically to capture data in telco networks. It's real-time and rugged, and it can deal with any volume. That enables us to take anything from the network and push it into our real-time database, which is HP's Vertica, part of the HP HAVEn family.

Vertica allows us to record any amount of data in real time and scale out on the HP hardware platform we also use. If we need more processing power, we can add more servers to scale transparently. That enables us to take in any amount of data, which we can then process.
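As a rough sketch of what that load path can look like on the Vertica side, bulk loads typically use a COPY statement streamed from the capture layer. Systems Mechanics' own integration layer is proprietary, so the table, columns, and connection details below are invented, and the open-source vertica-python client is assumed:

```python
# Hypothetical sketch: bulk-loading captured network events into Vertica
# via the open-source vertica-python client. The table, columns, and
# connection details are invented for illustration.
import vertica_python

conn_info = {"host": "vertica.example.com", "port": 5433,
             "user": "zen", "password": "...", "database": "zen"}

def load_events(csv_stream):
    """csv_stream: a file-like object of CSV rows from the capture layer."""
    conn = vertica_python.connect(**conn_info)
    try:
        # COPY streams rows straight into Vertica's columnar store, far
        # faster than row-by-row INSERTs at telco volumes.
        conn.cursor().copy(
            "COPY network_events (ts, element, metric, value) "
            "FROM STDIN DELIMITER ','", csv_stream)
        conn.commit()
    finally:
        conn.close()
```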

We have two processing layers. Referring to our earlier discussion about conscious and subconscious activity, the conscious layer is visualizing that data, and that's done with Tableau.

We ship a number of Tableau reports and dashboards with each of our product solutions. That enables us to visualize what's happening and allows the organization, the guys running the network and the guys looking at different elements in the data, to make their own decisions and identify what they might do.

We also have a streaming-analytics engine that listens to the data as it comes into the system, before it goes to Vertica. If we spot the patterns we've identified earlier "subconsciously," we act on that data, which may mean reducing an alarm count or "actioning" something.

It may be sending someone an email. It may be creating a trouble ticket on a different system. Those all happen transparently and automatically. Simplified, the solution is four layers: data capture and integration, the real-time database, visualization, and automated analytics.
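A minimal sketch of an engine of that shape appears below: rules are matched against each event as it arrives, before storage, and matched events fire actions such as suppressing a duplicate alarm or opening a trouble ticket. The rule and action names are invented; none of this reflects Zen's actual API:

```python
# Hypothetical sketch of a pattern -> action streaming engine of the
# shape described above. Zen's real engine is far richer; every rule
# and action name here is invented for illustration.

def suppress_alarm(event):
    print(f"suppressed duplicate alarm {event['id']}")

def open_ticket(event):
    print(f"ticket opened for {event['element']}: {event['type']}")

RULES = [
    # (predicate over an incoming event, action to fire on a match)
    (lambda e: e["severity"] == "minor" and e.get("duplicate"), suppress_alarm),
    (lambda e: e["severity"] == "critical", open_ticket),
]

def on_event(event):
    """Called for each event as it arrives, before it is written to the
    database. Events that match no rule simply flow through to storage."""
    for predicate, action in RULES:
        if predicate(event):
            action(event)

on_event({"id": 42, "element": "cell-0317", "type": "link down",
          "severity": "critical"})   # -> ticket opened for cell-0317
```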

Developing high value

Gardner: And when you have the confidence to scale your underlying architecture and infrastructure, and you're able to visualize and deliver high value to a vertical industry like telco, that allows you to expand into more lines of business, in terms of products and services, and into more verticals. Where have you taken the Zen family, and where do you take it now in terms of market opportunity?

Stubley: We focus on mobile telcos. That's our heritage. We can take any data source from a telco, but we can actually take any data source from anywhere, on any platform and from any company. That ranges from binary to HTML. You name it: if you've got data, we can load it.

That means we can build our processing accordingly. We position what we call solution packs. A solution pack is a connector to the outside world, to the network, that grabs the data; an element of data modeling, so we can load the data into Vertica; and pre-built reports in Tableau that allow us to interrogate the data immediately. That's at a component level.
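Purely to illustrate that packaging idea, here is a hypothetical sketch of what a solution pack might bundle: a connector, a data model for Vertica, and canned Tableau reports. The structure and every field name are assumptions, not Zen's real format:

```python
# Hypothetical sketch of what a "solution pack" might bundle. The
# structure and every field name are invented; this is not Zen's format.
ericsson_fm_pack = {
    "connector": {                        # grabs data from the network
        "source": "ericsson-fault-manager",
        "transport": "sftp",              # could equally be a live feed
        "format": "csv",
    },
    "model": {                            # how the data lands in Vertica
        "table": "fm_alarms",
        "columns": ["ts", "element", "alarm_type", "severity", "text"],
    },
    "reports": [                          # pre-built Tableau workbooks
        "alarm_volume_by_element.twbx",
        "top_noisiest_cells.twbx",
    ],
}
print(ericsson_fm_pack["model"]["table"])  # fm_alarms
```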

Once you have a number of components, we can look horizontally across those different items and at how their behaviors interact with each other. In pure telco terms, we would be looking at different network devices and the end-to-end performance of the network, but the same would apply to a fraud scenario, or to someone who is running cable TV.
The very highest level is finding what problem you’re going to solve and then using the data to solve it.

So multi-play players are interesting, because they want to monitor what's happening with TV as well, and that fits in exactly the same category. Realistically, anybody with high-volume, real-time data can benefit from Vertica.

Another interesting play in this scenario is social gaming and online advertising. They all have similar data characteristics: very high-volume, fixed-format data that needs to be analyzed and processed automatically.

Why Vertica?

Gardner: How long have you been using Vertica, and what is it that drove you to using it vis-à-vis alternatives?

Stubley: As far as the Zen family goes, we have used other technologies in the past, other relational databases, but we've used Vertica for more than two-and-a-half years now. We were looking for a platform that could scale and give us real-time data. At the volumes we were looking at, nothing could compete with Vertica at a sensible price. You can build almost any solution with enough money, but we don't have many customers who are prepared to make that investment.

So Vertica fits the technology of the 21st century. A lot of relational database appliances are built on 1980s thinking. What's happened with processing in the last few years is that nobody shares memory anymore, and our environment requires a shared-nothing solution. Vertica was built on that basis. It scales without limit.

One of the areas we're looking at, which I mentioned earlier, is social media. Social media is a natural play for Hadoop, and Hadoop is clearly a very cost-effective platform for loading vast volumes of data in real time, but it's very slow to analyze.

So the combination of a high-volume, low-cost platform for the bulk of the data with a very high-performing, real-time analytics engine is very compelling. The challenge is moving the data between the two environments. That isn't going to go away. It's not simple, and there are a number of approaches. HP Vertica is taking some.

There is Flex Zone, and there are any number of other players in that space. The reality is that you probably reach an environment where people are loading Hadoop and Vertica in parallel. That's what we plan to do. It gives you much more resilience. For a lot of the data coming into our system, we're actually planning to put the raw data files into Hadoop, so we can reload them as necessary and improve the resilience of the overall system, too.
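A rough sketch of that dual-write pattern, under the assumption of a plain HDFS archive directory and the same hypothetical network_events table as above; paths and details are illustrative:

```python
# Hypothetical sketch: loading the same raw feed file into Hadoop (for
# cheap retention and replay) and Vertica (for real-time analytics).
# Paths, the table, and connection details are invented for illustration.
import subprocess
import vertica_python

def parallel_load(raw_file, conn_info):
    # 1. Archive the raw file in HDFS so it can be replayed later.
    subprocess.run(["hdfs", "dfs", "-put", raw_file, "/zen/raw/"], check=True)

    # 2. Load the same file into Vertica for immediate querying.
    conn = vertica_python.connect(**conn_info)
    try:
        with open(raw_file) as fh:
            conn.cursor().copy(
                "COPY network_events FROM STDIN DELIMITER ','", fh)
        conn.commit()
    finally:
        conn.close()
```

If a table is ever lost or a schema changes, the Vertica side can then be rebuilt by replaying the HDFS archive, which is the resilience being described.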

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.
