Wednesday, July 31, 2019

The budding storage relationship between HPE and Cohesity brings the best of startup innovation to global enterprise reach


The next BriefingsDirect enterprise storage partnership innovation discussion explores how the best of startup culture and innovation can be married to the global reach, maturity, and solutions breadth of a major IT provider.

Stay with us to unpack the budding relationship between an upstart in the data management space, Cohesity, and venerable global IT provider Hewlett Packard Enterprise (HPE).

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the latest in total storage efficiency strategies and HPE’s Pathfinder program we welcome Rob Salmon, President and Chief Operating Officer at Cohesity in San Jose, California, and Paul Glaser, Vice President and Head of the Pathfinder Program at HPE. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Paul, how have technology innovation, the nature of startups, and the pace of business today made a deep relationship between HPE and Cohesity the right fit for your mutual customers?

Glaser: That’s an interesting question, Dana. To start, the technology and startup ecosystems in Silicon Valley, California -- as well as in other tech centers around the globe -- fuel the fire of innovation. The ample funding that’s available to startups and the research coming out of top-tier universities such as Stanford, Carnegie Mellon, or MIT out on the East Coast fuel a lot of interesting ideas -- disruptive ideas that make their way into small startups.

The challenge for HPE as a large, global technology player is to figure out how to tap into the ecosystem of startups and the new disruptive technologies coming out of the universities, as well as serial entrepreneurs, foster and embrace that, and deliver those solutions and technologies to our customers.

Gardner: Paul, please describe the Pathfinder thesis and approach. What does it aim to do?

Insight, investment, and solutions

Glaser: Pathfinder, at the top level, is the venture capital (VC) program of HPE, and it can be subdivided into three core functions. First is insight, second is investments, and third is the solutions function. The insight component acts like a center of excellence; it keeps a finger on the pulse, if you will, of disruptive innovation in the startup community. It helps HPE as a whole interact with the startup community and the VC community, and it identifies and curates leading technology innovations that we can ultimately deliver to our customers.

The second component is investments. It’s fairly straightforward. We act like a VC firm, taking small equity stakes in some of these startup companies.

And third, solutions. For the companies that are in our portfolio, we work with them to make introductions to product and technical organizations inside of HPE, fostering dialogue from a product evolution perspective and a solution perspective. We intertwine HPE’s products and technologies with the startup technology to create one-plus-one-equals-three. And we deliver that solution to customers and solve their challenges from a digital transformation perspective.

Gardner: How many startup companies are we talking about? How many in a year have typically been included in Pathfinder?

Glaser: We are a very focused program, so we align around the strategies for HPE. Because of that close collaboration with our portfolio companies and the business units, we are limited to about eight investments or new portfolio companies on an annual basis.

Today, the four-and-a-half-year-old program has about two dozen companies in the portfolio. We expect to add another eight over the next 12 months.

Gardner: Rob, tell us about Cohesity and why it’s such a good candidate, partner, and success story when it comes to the Pathfinder program.

Salmon: Cohesity is a five-year-old company focused on data management for about 70 to 80 percent of all the data in an enterprise today. This is for large enterprises trying to figure out the next great thing to make them more operationally efficient, and to give them better access to data.

Companies like HPE are doing exactly the same thing, looking to figure out how to bring new conversations to their customers and partners. We are a software-defined platform. The company was founded by Dr. Mohit Aron, who has spent his entire 20-plus-year career working on distributed file systems. He is one of the key architects of the Google File System and co-founder of Nutanix. The hyperconverged infrastructure (HCI) movement, really, was his brainchild.

He started Cohesity five years ago because he realized there was a new, better way to manage large sets of data -- not only in the data protection space, but for file services, test/dev, and analytics. The company has been selling the product for more than two and a half years, and we’ve been partnered with Paul and the HPE Pathfinder team for more than three years. It’s been a quite successful partnership between the two companies.

Gardner: As I mentioned in my setup, Rob, speed-to-value is the name of the game for businesses today. How have HPE and Cohesity together been able to help each other get to market faster for your customers?

One plus one equals three

Salmon: The partnership is complementary. What HPE brings to Cohesity is experience and reach. We get a lot of value from working with Paul, his team, and the entire executive team at HPE to bring our product and solutions to market.

When we think about the combination between the products from HPE and Cohesity, one-plus-one-equals-three-plus. That’s what customers are seeing as well. The largest customers we have in the world running Cohesity solutions run on HPE’s platform.

HPE brings credibility to a company of our size, in all areas of the world, and with large customers. We just could not do that on our own.

Gardner: And how does working with HPE specifically get you into these markets faster?

Salmon: In fact, we just announced an original equipment manufacturer (OEM) relationship with HPE whereby they are selling our solutions. We’re very excited about it.

I can give you a great example. I met with one of the largest healthcare providers in the world a year ago. They loved hearing about the solution. The question they had was, “Rob, how are you going to handle us? How will you support us?” And they said, “You are going to let us know, I’m sure.”

They immediately introduced me to the general manager of their account at HPE. We took that support question right off the table. Everything has been done through HPE. It’s our solution, wrapped around the broad support services and hardware capabilities of HPE. That made for a total solution for our customers, because that’s ultimately what these kinds of customers are looking for.

They are not just looking for great, new innovative solutions. They are looking for how they can roll that out at scale in their environments and be assured it’s going to work all the time.

Gardner: Paul, HPE has had huge market success in storage over the past several years, being on the forefront of flash and of bringing intelligence to how storage is managed on a holistic basis. How does the rest of storage, the so-called secondary level, fit into that? Where do you see this secondary storage market’s potential?

Glaser: HPE’s internal product strategy has been around primary storage capability. You mentioned flash -- brands such as 3PAR and Nimble Storage. That’s where HPE has a lot of its own intellectual property today.

On the secondary storage side, we’ve looked to partners to round out our portfolio, and we will continue to do so going forward. Cohesity has become an important part of that partner portfolio for us.

But we think about more than just secondary storage from Cohesity. It’s really about data management. What does the data management lifecycle of the future look like? How do you get more insights on where your data is? How do you better utilize that?

Cohesity and that ecosystem will be an important part of how we think about rounding out our portfolio and addressing what is a tens of billions of dollars market opportunity for both companies.

Gardner: Rob, let’s dig into that total data management and lifecycle value. What are the drivers in the market making a holistic total approach to data necessary?

Cohesity makes data searchable, usable 

Salmon: When you look at the sheer size of the datasets that enterprises are dealing with today, there is an enormous data copy and management problem. You have islands of infrastructure set up for different secondary data and storage use cases. Oftentimes the end users don’t know where to look, and the data may be in the wrong place. After a time, the data has to be moved.

The Cohesity platform indexes the data on ingest. We therefore have Google-like search capabilities across the entire platform, regardless of the use case and how you want to use the data.

When we think about the legacy storage solutions out there for data protection, for example, all you can do is protect the data. You can’t do anything else. You can’t glean any insights from that data. Because of our indexing on ingest, we are able to provide insights into the data and metadata in ways customers and enterprises have never seen before. As we think about the opportunity, the larger the datasets run on the Cohesity platform and solution, the more insight customers can gain into their data.
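The “index on ingest” idea can be sketched in a few lines. This is a toy illustration -- the record IDs, tokenizer, and data are all invented here, and it is not Cohesity’s implementation -- but it shows why paying the indexing cost at write time makes search a cheap lookup later, instead of a scan over all stored data.

```python
from collections import defaultdict

class IngestIndex:
    """Toy sketch of indexing on ingest: records are tokenized and
    indexed at write time, so search is a dictionary lookup rather
    than a later full scan. Illustrative only."""

    def __init__(self):
        self._index = defaultdict(set)  # term -> ids of records containing it
        self._store = {}                # record id -> raw record

    def ingest(self, record_id, text):
        # Pay the indexing cost once, when the data arrives.
        self._store[record_id] = text
        for term in text.lower().split():
            self._index[term].add(record_id)

    def search(self, term):
        # Google-like lookup: cost is independent of total dataset size.
        return sorted(self._index.get(term.lower(), set()))

idx = IngestIndex()
idx.ingest("backup-001", "quarterly sales report EMEA")
idx.ingest("backup-002", "sales forecast APAC")
print(idx.search("sales"))  # -> ['backup-001', 'backup-002']
```

The design choice is the trade-off Salmon describes: a little extra work on every write buys uniform, fast search across every use case that lands on the platform.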

And it’s not just about our own applications. We recently introduced a marketplace where applications such as Splunk can sit on top of and access the data in the Cohesity platform. It’s about bringing compute, storage, networking, and the applications together where the data is, versus moving data to the compute and the applications.

Gardner: It sounds like a solution tailor-made for many of the new requirements we’re seeing at the edge. That means massive amounts of data generated from the Internet of things (IoT) and the industrial Internet of things (IIoT). What are you doing with secondary storage and data management that aligns with the many things HPE is doing at the edge?

Seamless at the edge

Salmon: When you think about both the edge and the public cloud, the beauty of a next-generation solution like Cohesity is that we are not redesigning something to take advantage of the edge or the public clouds. We can run a virtual edition of our software at the edge and in the public cloud. We have a multicloud offering today.

So, from the edge all the way to on-premises and into the public clouds, it’s a seamless view of all of your data. You have access and visibility into all of the data without moving it around.

Gardner: Paul, it sounds like there’s another level of alignment here, and it’s around HPE’s product strategies. With HPE InfoSight, OneView -- managing core-to-edge issues across multiple clouds as well as a hybrid cloud -- this all sounds quite well-positioned. Tell us more about the product strategy synergy between HPE and Cohesity.


Glaser: Dana, I think you hit it spot-on. HPE CEO Antonio Neri talks about a strategy for HPE that’s edge-centric, cloud-enabled, and data-driven. As we think about building our infrastructure capabilities -- both for on-premises data centers and extending out to the edge -- we are looking for partners that can provide that software layer, in this case the data management capability, which extends our product portfolio across that hybrid cloud experience for our customers.

As you think about a product strategy for HPE, you really step up to the macro strategy, which is, how do we provide a solution for our customers that allows us to span from the edge all the way to the core data center? We look at partners that have similar capabilities and similar visions. We work through the OEMs and other types of partnership arrangements to embed that down into the product portfolio.

Gardner: Rob, anything to offer additionally on the alignment between Cohesity and HPE, particularly when it comes to the data lifecycle management?

Salmon: The partnership started with Pathfinder, and we are absolutely thrilled with the partnership we have with HPE’s Pathfinder group. But when we did the recent OEM partnership with HPE, it was actually with HPE’s storage business unit. That’s really interesting because as you think about competing or not, we are working directly with HPE’s storage group. This is very complementary to what they are doing.

We understand our swim lane. They understand their swim lane. And yet this gives HPE a far broader portfolio in environments where they are looking at what competitors are doing. They are saying, “We now have a better solution for what we are up to in this particular area by working with Cohesity.”

We are excited not just to work with the Pathfinder group but by the opportunity we have with Antonio Neri’s entire team. We have been welcomed into the HPE family quite well over the last three years, and we are just getting started with the opportunity as we see it.

Gardner: Another area that is top-of-mind for businesses is not just the technology strategy, but the economics of IT and how it’s shifted given the cloud, Software as a Service (SaaS), and pay-on-demand models. Is there something about what HPE is doing with its GreenLake Flex Capacity approach that is attractive to Cohesity? Do you see the reception in your global market improved because of the opportunity to finance, acquire, and consume IT in a variety of different ways?

Flexibility increases startups’ strength 

Salmon: Without question! Large enterprises want to buy it the way they want to buy it, whether through perpetual licenses or a subscription model. They want to dictate how it will be used in their environments. By working with HPE and GreenLake, we are able to offer the flexible options required to win in this market today.

Gardner: Paul, any thoughts about the economics of consuming IT and how Pathfinder might be attractive to more startups because of that?

Glaser: There are two points Rob touched on that are important. One, working with HPE as a large company, it’s a journey. As a startup you are looking for that introduction or that leg up that gives you visibility across the global HPE organization. That’s what Pathfinder provides. So, you start working directly with the Pathfinder organization, but then you have the ability to spread out across HPE.

For Cohesity, it’s led to the OEM agreement with the storage business unit, and to the ability to leverage different consumption models utilizing GreenLake and some of our flexible pricing and flexible consumption offers.

The second point is that Amazon Web Services has conditioned customers to think about pay-per-use. Customers are asking for that, and they are looking for flexibility. As a startup, it is sometimes hard to figure out how to provide that capability economically. Being able to partner with HPE and Pathfinder, and to utilize GreenLake or some of our other tools, really provides a leg up in conversations with customers. It helps them trust that the solution will be there and that somebody will stand behind it over the coming years.

Gardner: Before we close out, I would like to peek in the crystal ball for the future. When you think about the alignment between Cohesity and HPE, and when we look at what we can anticipate -- an explosion of activity at the edge and rapidly growing public cloud market -- there is a gorilla in the room. It’s the new role for inference and artificial intelligence (AI), to bring more data-driven analytics to more places more rapidly.

Any thoughts about where the relationship between HPE and Cohesity will go on an AI tangent product strategy?

AI enhances data partnership 

Salmon: You touched earlier, Dana, on HPE InfoSight, and we are really excited about the opportunity to partner even closer with HPE on it. That’s an incredibly successful product in its own right. The opportunity for us to work closer and do some things together around InfoSight is exciting.

On the Cohesity side, we talk a lot about not just AI but machine learning (ML) and where we can go proactively to give customers insights into not only the data, but also the environment itself. It can be very predictive. We are working incredibly hard on that right now. And again, I think this is an area that is really just getting started in terms of what we are going to be able to do over a long period of time.

Gardner: Paul, anything to offer on the AI future?

Glaser: Rob touched on the immediate opportunity for the two companies to work together, which is around HPE InfoSight and marrying our capabilities in terms of predictability and ML around IT infrastructure and creative solutions around that.

As you extend the vision into the future, where applications become more edge-centric and compute moves toward the data at the edge, the lifecycle of that data from a data management perspective -- and where it ultimately resides -- becomes an interesting opportunity. Some of the AI capabilities can provide insight on the best place for that computation, and that data, to live. I think that will be interesting down the road.

Gardner: Rob, for other startups that might be interested in working with a big vendor like HPE through a program like Pathfinder, any advice that you can offer?

Salmon: As a startup, you know you are good at something, and it’s typically around the technology itself. You may have a founder like Mohit Aron, who is absolutely brilliant in his own right in terms of what he has already done in the industry and what we are going to continue to do. But you have got to do all the building around that brilliance and that technology and turn it into a true solution.

And again, back to this notion of a solution: the solution needs global scale, and it needs to give customers not just one experience with you, but the experience they expect from the enterprises that support them. You can learn a lot from working with large enterprises. They may not be the ones to tell you exactly how to code your product; we have that figured out with the brilliance of Mohit and the engineering team around him. But as we think about getting to scale, and scaling the operation in terms of what we are doing, leaning on someone like the Pathfinder group at HPE has helped us an awful lot.

The other great thing about working with the Pathfinder group is, as Paul touched on earlier, that they work with other portfolio companies. They are working with companies that may be in a slightly different space than we are, but that are seeing similar challenges.

How do you grow? How do you open up a market? How do you look at bringing the product to market in different ways? We talked about consumption pricing and the new consumption models. Since they are experiencing that with others, and what they have already done at HPE, we can benefit from that experience. So leveraging a large enterprise like an HPE and the Pathfinder group, for what they know and what they are good at, has been invaluable to Cohesity.

Gardner: Paul, for those organizations that might want to get involved with Pathfinder, where should they go and what would you guide them to in terms of becoming a potential fit?

Glaser: I’d just point them to hewlettpackardpathfinder.com. You can find information on the program there, contact information, portfolio companies, and that type of thing.

We also put out a set of perspectives describing some of our investment theses, so you can see our areas of interest. At a high level, we look for companies that are aligned to HPE’s core strategies, which are built around hybrid IT as well as the intelligent edge.


So we have those specific swim lanes from a strategic perspective. And second, we are looking for companies that have demonstrated success from a product perspective -- perhaps a couple of initial customer wins -- and that need help to scale that business. Those are the types of opportunities we are looking for.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

HPE’s Erik Vogel on what's driving success in hybrid cloud adoption and optimization


The next BriefingsDirect Voice of the Innovator discussion explores the latest insights into hybrid cloud success strategies.

As with the often ad hoc adoption of public cloud services by various groups across an enterprise, getting the right mix and operational coordination required of true hybrid cloud cannot be successful if it’s not well managed. While many businesses recognize there’s a hybrid cloud future, far fewer are adopting a hybrid cloud approach with due diligence, governance, and cost optimization.

Stay with us as we examine the innovation maturing around hybrid cloud models and operations and learn how proper common management of hybrid cloud can make or break the realization of its promised returns.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to explain how to safeguard successful hybrid cloud deployments and operations is Erik Vogel, Global Vice President of Hybrid IT and Cloud at Hewlett Packard Enterprise (HPE). The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: The cloud model was very attractive, people jumped into it, but like with many things, there are unintended consequences. What’s driving cloud and hybrid cloud adoption, and what’s holding people back?

Vogel: All enterprises are hybrid at this point, and whether they have accepted that realization depends on the client. But pretty much all of them are hybrid. They are all using a combination of on-premises, public cloud, and software-as-a-service (SaaS) solutions. They have brought all of that into the enterprise. There are very few enterprises we talk to that don’t have some hybrid mix already in place.

Hybrid is here, but needs rationalization

But when we ask them how they got there, most have done it in an ad hoc fashion. Most have had developers who went out to one or more hyperscale cloud providers, business units that went out and started to consume SaaS solutions, or IT organizations that built their own on-premises solutions -- whether that’s an open private cloud or a Microsoft Azure Stack environment.

They have done all of this in pockets within the organization. Now, they are seeing the challenge of how to start managing and operating this in a consistent, common fashion. There are a lot of different solutions and technologies, yet each has its own operating model, its own consoles, and its own rules to work within.

And that is where we see our clients struggling. They don’t have a holistic strategy or approach to hybrid, but rather they’ve done it in this bespoke or ad hoc fashion. Now they realize they are going to have to take a step back to think this through and decide what is the right approach to enforce common governance and gain common management and operating principles, so that they’re not running 5, 6, 8 or even 10 different operating models. Rather, they need to ask, “How do we get back to where we started?” And that is a common operating model across the entire IT estate.

Gardner: IT traditionally over the years has had waves of adoption that led to heterogeneity that created complexity. Then that had to be managed. When we deal with multicloud and hybrid cloud, how is that different from the UNIX wars, or distributed computing, and N-tier computing? Why is cloud a more difficult heterogeneity problem to solve than the previous ones?

Vogel: It’s more challenging. It’s funny -- we typically refer to what we used to see in the data center as the Noah’s Ark data center. You would walk into a data center and see two of everything: two of every vendor, two of just about everything within the data center.
And it was about 15 years ago that we started to consolidate all of that into common infrastructures and common platforms to reduce operational complexity. It was an effort to reduce total cost of ownership (TCO) within the data center and to turn that Noah’s Ark data center into common, standardized elements.

Now that pendulum is starting to swing back. It’s becoming more of a challenge because it’s now so easy to consume non-standard and heterogeneous solutions. Before, there was still a gatekeeper for everything within the data center. Somebody had to decide that a certain piece of infrastructure or component would be deployed within the data center.

Now, developers go to a cloud and consume, with just a swipe of a credit card, any of the three or four hyperscale solutions -- and literally thousands of SaaS solutions. Just look at the Salesforce.com platform and all of the different options that surround it.

All of a sudden, we lost the gatekeeper. Now we are seeing sprawl toward more heterogeneous solutions occurring even much faster than what we saw 10 or 15 years ago with the Noah’s Ark data center.

The pendulum is definitely shifting back toward consuming lots of different solutions with lots of different capabilities and services. And we are seeing it moving much faster than it did before because of that loss of a gatekeeper.

Gardner: Another difference is that we’re talking mostly about services. By consuming things as services, we’re acquiring them not as a capital expenditure with a three- to five-year renewal cycle; this is on-demand consumption, paid as you use it.

That makes it more complicated, but it also makes it a problem that can be solved more easily. Is there something about the nature of an all-services hybrid and multicloud environment on an operations budget that makes it more solvable?

Services become the norm 

Vogel: Yes, absolutely. The economics definitely play into this. I have this vision that within the next five years, we will no longer call things “as a service” because it will be the norm, the standard. We will only refer to things that are not as a service, because as an industry we are seeing a push toward everything being consumed as a service.

From an operating standpoint, the idea of consuming and paying for only what we use is very, very attractive. Again, if you look back 10 or 15 years, typically within a data center, we’d be buying for a three- or four-year lifespan. That forced us to make predictions as to what type of demand we would be placing on capital expenditures.

And what would happen? We would always overestimate. If you looked at utilization of CPU, disk, and memory, it was always 20 to 25 percent -- very low utilization, especially pre-virtualization. We would end up overbuying, paying the full load, and still paying for full maintenance and support, too.

There was very little ability to dial that up or down. The economic capability of being able to consume everything as a service is definitely changing the game, even for things you wouldn’t think of as a service, such as buying a server. Our enterprise customers are really taking notice of that because it gives them the ability to flex the expenditures as their business cycles go up and down.

Rarely do we see enterprises with constant demand for compute capacity. So, it’s very nice for them to be able to flex that up and down, adjust for the normal seasonal effects within a business, and flex that operating expense as their business fluctuates.

That is a key driver of moving everything to an as-a-service model, giving flexibility that just a few years ago we did not have.
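Vogel’s overbuying arithmetic can be made concrete with a quick back-of-the-envelope calculation. All prices and quantities below are invented for illustration; the point is only the shape of the comparison at the 20 to 25 percent utilization he cites.

```python
# Worked example of the overbuying effect: buy for peak, use ~25 percent.
# Every number here is an illustrative assumption, not real pricing.

capacity_units = 100        # units provisioned up front for a multi-year cycle
fixed_unit_cost = 10.0      # $ per unit when bought as capital expenditure
avg_utilization = 0.25      # the 20-25 percent utilization cited above
on_demand_unit_cost = 14.0  # pay-per-use is pricier per unit consumed

fixed_total = capacity_units * fixed_unit_cost
on_demand_total = capacity_units * avg_utilization * on_demand_unit_cost

print(fixed_total, on_demand_total)  # -> 1000.0 350.0
```

Even with a 40 percent per-unit premium for on-demand pricing, paying only for what is consumed comes out well ahead at low utilization -- which is why flexing spend with demand is so attractive.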

Gardner: The good news is that these are services -- and we can manage them as services. The bad news is that these services come from different providers with different economic and consumption models. There are different application programming interfaces (APIs), stock keeping unit (SKU) definitions, and management definitions unique to each cloud provider. So how do we take advantage of the fact that it’s all services, but conquer the fact that it comes from different organizations speaking, in effect, different languages?

Vogel: You’re getting to the heart of the challenge of managing a hybrid environment. Think about how applications are becoming more and more composed now; they are built from various pieces -- different services that may or may not be on-premises solutions.

One of our clients, for example, has built an application for their sales teams that provides real-time client data and analytics before a seller goes in and talks to a customer. When you look at the complexity of that application, they are using Salesforce.com, they have an on-premises customer database, and they get point-of-sale data from another SaaS provider.
They also have analytics engines they get from one of the cloud hyperscalers. And all of this comes together to drive a mobile app that presents all of this information seamlessly to their end-user seller in real-time. They become better armed and have more information when they go meet with their end customer.

When we look at how these new applications or services are built -- I don’t even call them applications, because they are really services composed from multiple applications -- they cross multiple service providers, multiple SaaS providers, and multiple hyperscalers.

And as you look at how we interface and connect with those, and how we pass data and exchange information across these different service providers, you are absolutely right: the taxonomies are different, the APIs are different, and the interface and operations challenges are different.

When that seller goes to make that call, and they bring up their iPad app and all of a sudden, there is no data or it hasn’t been refreshed in three months, who do you call? How do you start to troubleshoot that? How do you start to determine if it’s a Salesforce problem, a database problem, a third-party service provider problem? Maybe it’s my encrypted connection I had to install between Salesforce and my on-premises solution. Maybe it’s the mobile app. Maybe it’s a setting on the iPad itself.

Adding up all of that complexity is what’s building the problem. We don’t have consistent APIs, consistent taxonomies, or even the way we look at billing and the underlying components for billing. And when we break that out, it varies greatly between service providers.

This is where we understand the complexity of hybrid IT. We have all of these different service providers, all working and operating independently, yet we're trying to bring them together to provide end-customer services. Composing those different services creates one of the biggest challenges we have today within hybrid cloud environments.

Gardner: Even if we solve the challenge on the functional level -- of getting the apps and services to behave as we want -- it seems as much or more a nightmare for the chief financial officer (CFO), who has to determine whether you're getting a good deal or buying redundancy across different cloud providers. A lot of times in procurement you cut a deal on volume. But how do you do that if you don't know what you're buying from whom?

How do we pay for these aggregate cloud services in some coordinated framework with the least amount of waste?

How to pay the bills 

Vogel: That is probably one of the most difficult jobs within IT today, the finance side of it. There are a lot of challenges in putting that bill together. What does that bill really look like, and not just at an individual component level? I may be able to see what I'm paying Amazon Web Services (AWS) or what Azure Stack is costing me. But how do we aggregate that? What is the cost to provide a service? This has been a challenge for IT forever; it's always been difficult to slice costs by service.

We knew what compute costs, what network costs, and what the storage costs were. But it was always difficult to make that vertical slice across the budget. And now we have made that problem worse because we have all these different bills coming in from all of these different service providers.
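To make that "vertical slice" concrete, here is a minimal sketch of aggregating normalized billing line items from several providers by business service. The provider names, field names, and figures are invented for illustration; real billing exports differ by provider and would need normalizing first.

```python
from collections import defaultdict

# Hypothetical, already-normalized line items pulled from each provider's
# billing export. Providers, tags, and costs are illustrative assumptions.
line_items = [
    {"provider": "aws",        "service_tag": "sales-insights", "cost": 1240.50},
    {"provider": "azure",      "service_tag": "sales-insights", "cost": 310.00},
    {"provider": "salesforce", "service_tag": "sales-insights", "cost": 2100.00},
    {"provider": "on-prem",    "service_tag": "hr-portal",      "cost": 880.25},
    {"provider": "aws",        "service_tag": "hr-portal",      "cost": 95.75},
]

def cost_per_business_service(items):
    """Vertical slice: total spend per business service across all providers."""
    totals = defaultdict(float)
    for item in items:
        totals[item["service_tag"]] += item["cost"]
    return dict(totals)

print(cost_per_business_service(line_items))
# e.g. {'sales-insights': 3650.5, 'hr-portal': 976.0}
```

The hard part in practice is not the summation but the tagging: every provider's line items have to carry a consistent service tag before a slice like this is possible.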


The procurement challenge is even more acute because now we have all these different service providers. How do we know what we are really paying? Developers swipe credit cards, so IT never sees the bill or a true accounting of what's being spent across the public clouds. It comes through as a credit card expense and is never really directed to IT.

We need to get our hands around these different expenses, where we are spending money, and think differently about our procurement models for these services.

In the past, we talked about this as a brokerage but it’s a lot more than that. It’s more about strategic sourcing procurement models for cloud and hybrid cloud-related services.

It’s less about brokerage and looking for that lowest-cost provider and trying to reduce the spend. It’s more about, are we getting the service-level agreements (SLAs) we are paying for? Are we getting the services we are paying for? Are we getting the uptime we are paying for?
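As a rough illustration of checking whether you got the uptime you paid for, here is a small sketch that compares delivered availability against a contracted SLA over a billing period. The figures are hypothetical.

```python
def achieved_availability(total_minutes, downtime_minutes):
    """Fraction of the billing period the service was actually up."""
    return (total_minutes - downtime_minutes) / total_minutes

def sla_met(total_minutes, downtime_minutes, contracted_sla):
    """True if delivered availability meets or beats the contracted level."""
    return achieved_availability(total_minutes, downtime_minutes) >= contracted_sla

# A 30-day month has 43,200 minutes; a 99.9 percent SLA allows
# roughly 43.2 minutes of downtime before the provider misses it.
minutes_in_month = 30 * 24 * 60
print(sla_met(minutes_in_month, 50, 0.999))  # 50 minutes down -> False, SLA missed
```

Doing this per provider, per service, every billing period is exactly the kind of ongoing verification the third-party firms discussed below have built a practice around.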

Our IT procurement models have to change to address the problem of how we really know what we are paying for. Are we getting the strategic value out of the expenses within hybrid that we had expected?

Gardner: In business over the years, when you have a challenge, you can try to solve it yourself and employ intelligence technologies to tackle complexity. Another way is to find a third-party that knows the business better than you do, especially for small- to medium-sized businesses (SMBs).

Are we starting to see an ecosystem develop where the consumption model for cloud services is managed more centrally, and then those services are repurposed and resold to the actual consumer business?

Third-parties help hybrid manage costs 

Vogel: Yes, I am definitely starting to see that. A lot is being developed to help customers consume and buy these services more intelligently. I always joke that the cheapest thing you can buy is somebody else's experience, and that is absolutely the case when it comes to hybrid cloud services providers.

The reality is no enterprise can have expertise in all three of the hyperscalers, in all of the hundreds of SaaS providers, for all of the on-premises solutions that are out there. It just doesn’t exist. You just can’t do it all.

It really becomes important to look for people who can aggregate this capability and bring the collective experience back to you. You have to reduce overspend and make smarter purchasing decisions. You can prevent things like lock-in and reduce risk by buying via these third-party services. There is tremendous value being created by the firms that are jumping into that model and helping clients address these challenges.

The third-parties have people who have actually gone out and consumed and purchased within the hyperscalers, who have run workloads within those environments, and who can help predict what the true cost should be -- and, more importantly, maintain that optimization going forward.
It’s not just about going in and buying anymore. There is ongoing optimization that has to occur, ongoing cost optimization where we are continuously evaluating whether we are making the right decisions. And we are finding that the calculus changes over time.

So, while it might have made a lot of sense to put a workload, for example, on-premises today, based on the demand for that application and on pricing changes, it may make more sense to move that same workload off-premises tomorrow. And then in the future it may also make sense to bring it back on-premises for a variety of reasons.

You have to constantly be evaluating that. That’s where a lot of the firms playing in the space can add a lot of value now, in helping with ongoing optimization, by making sure that we are always making the smart decision. It’s a very dynamic ecosystem, and the calculus, the metrics are constantly changing. We have the ability to constantly reevaluate. That’s the beauty of cloud, it’s the ability to flex between these different providers.
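The flip-flop described above can be illustrated with a deliberately simplified cost comparison; re-running it as prices or demand change is what makes the answer move. The numbers, and the reduction of a 60-variable decision to a single cost variable, are assumptions for illustration only.

```python
def cheaper_venue(on_prem_monthly, cloud_hourly, hours_used):
    """Pick the lower-cost venue for one workload this month.

    Re-run whenever prices or demand change; the answer can flip.
    """
    cloud_monthly = cloud_hourly * hours_used
    return "on-prem" if on_prem_monthly <= cloud_monthly else "cloud"

# Steady, always-on demand favors the fixed on-prem cost ...
print(cheaper_venue(on_prem_monthly=900.0, cloud_hourly=2.0, hours_used=730))  # on-prem
# ... but if demand drops to part-time, the same workload flips to cloud.
print(cheaper_venue(on_prem_monthly=900.0, cloud_hourly=2.0, hours_used=200))  # cloud
```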

Gardner: Erik, for those organizations interested in getting a better handle on this, are there any practical approaches available now?

The right mix of data and advice 

Vogel: We have a tool, called HPE Right Mix Advisor, which is our ability to go in and assess very large application portfolios. The nice thing is, it scales up and down very nicely. It is delivered in a service model so we are able to go in and assess a set of applications against the variables I mentioned, in the weighing of the factors, and come up with a concrete list of recommendations as to what should our clients do right now.

In fact, we like to talk not about the thousand things they could do, but about the 10 or 20 things they should start on tomorrow morning -- the ones that are most impactful for their business.

The Right Mix Advisor tool helps identify those things that matter the most for the business right now, and provides a tactical plan to say, “This is what we should start on.”

And it’s not just the tool, we also bring our expertise, whether that’s from our Cloud Technology Partners (CTP) acquisition, RedPixie, or our existing HPE business where we have done this for years and years. So, it’s not just the tool, but also experts, looking at that data, helping to refine that data, and coming up with a smart list that makes sense for our clients to get started on right now.

And of course, once they have accomplished those things, we can come back and look at it again and say, “Here is your next list, the next 10 or 20 things.” And that’s really how Right Mix Advisor was designed to work.

Gardner: It seems to me there would be a huge advantage if you were able to get enough data about what’s going on at the market level, that is to aggregate how the cloud providers are selling, charging, and the consumption patterns.

If you were in a position to gather all of the data about enterprise consumption among and between the cloud providers, you would have a much better idea of how to procure properly, manage properly, and optimize. Is such a data well developing? Is there anyone in the right position to be able to gather the data and start applying machine learning (ML) technologies to develop predictions about the best course of action for a hybrid cloud or hybrid IT environment?

Vogel: Yes. In fact, we have started down that path. HPE has started to tackle this by developing an expert system, a set of logic rules that helps make those decisions. We did it by combining a couple of fairly large datasets that we have developed over the last 15 years, primarily with HPE’s history of doing a lot of application migration work. We really understand on the on-premises side where applications should reside based on how they are architected and what the requirements are, and what type of performance needs to be derived from that application.

We have combined that with other datasets from some of our recent cloud acquisitions, CTP and RedPixie, for example. That has brought us a huge wealth of information based on a tremendous number of application migrations to the public clouds. And we are able to combine those datasets and develop this expert system that allows us to make those decisions pretty quickly as to where applications should reside based on a number of factors. Right now, we look at about 60 different variables.

But what’s really important when we do that is to understand from a client’s perspective what matters. This is why I go back to that strategic sourcing discussion. It’s easy to go in and assume that every client wants to reduce cost. And while every client wants to do that -- no one would ever say no to that -- usually that’s not the most important thing. Clients are worried about performance. They also want to drive agility, and faster time to market. To them that is more important than the amount they will save from a cost-reduction perspective.

The first thing we do when we run our expert system, is we go in and weight the variables based on what’s important to that specific client, aligned to their strategy. This is where it gets challenging for any enterprise trying to make smart decisions. In order to make strategic sourcing decisions, you have to understand strategically what’s important to your business. You have to make intelligent decisions about where workloads should go across the hybrid IT options that you have. So we run an expert system to help make those decisions.
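A toy sketch of that weighted evaluation might look like the following. The real expert system weighs roughly 60 variables; the three variables, the weights, and the per-venue scores below are invented purely to show the mechanics of weighting by client priorities.

```python
def placement_score(variables, client_weights):
    """Weighted sum of normalized (0-1) variables for one candidate venue."""
    return sum(client_weights[name] * value for name, value in variables.items())

# Hypothetical client priorities: performance matters most, cost least.
client_weights = {"performance": 0.5, "agility": 0.3, "cost_savings": 0.2}

# Hypothetical normalized scores for one application on each venue.
on_prem      = {"performance": 0.9, "agility": 0.4, "cost_savings": 0.6}
public_cloud = {"performance": 0.6, "agility": 0.9, "cost_savings": 0.7}

scores = {
    "on-prem": placement_score(on_prem, client_weights),
    "public-cloud": placement_score(public_cloud, client_weights),
}
print(max(scores, key=scores.get))  # the recommended venue for this client
```

Change the weights, and the same application data can yield a different recommendation, which is the point of weighting by what matters to the specific client.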

Now, as we collect more data, this will move toward true artificial intelligence (AI). As everybody is aware, AI requires a lot of data, and because we are still in the very early stages of true hybrid cloud and hybrid IT, we don’t yet have a large enough dataset to make these decisions in a truly automated, learning-type model.

We started with an expert system to help us do that, to move down that path. But very quickly we are learning, and we are building those learnings into our models that we use to make decisions.

So, yes, there is a lot of value in people who have been there and done that, and bringing that data together in a unified fashion is exactly what we have done to help our clients. These decisions can’t take a year to figure out; you have to be able to make them quickly because it’s a very dynamic model. A lot of things are constantly changing, so you have to keep loading the models with the latest data to always make the best, smartest decision and keep optimizing the environment.

Innovation, across the enterprise 

Gardner: Not that long ago, innovation in a data center was about speeds and feeds. You would innovate on technology and pass along those fruits to your consumers. But now we have innovated on economics, management, and understanding indirect and direct procurement models. We have had to innovate around intelligence technologies and AI. We have had to innovate around making the right choices -- not just on cost but on operations benefits like speed and agility.

How has innovation changed such that it used to be a technology innovation but now cuts across so many different dynamic principles of business?

Vogel: It’s a really interesting observation. That’s exactly what’s happening. You are right, even as recently as five years ago we talked about speeds and feeds, trying to squeeze a little more out of every processor, trying to enhance the speed of the memory or the storage devices.

But now, as we have pivoted toward a services mentality, nobody asks when you buy from a hyperscaler -- Google Cloud, for example -- what central processing unit (CPU) chips they are running or what the chip speeds are. That’s not really relevant in an as-a-service world. So, the innovation then is around the service sets, the economic models, the pricing models, that’s really where innovation is being driven.

At HPE, we have moved in that direction as well. We provide our HPE GreenLake model and offer a flex-capacity approach where clients can buy capacity on-demand. And it becomes about buying compute capacity. How we provide that, what speeds and feeds we are providing becomes less and less important. It’s the innovation around the economic model that our clients are looking for.

We are only going to continue to see that type of innovation going forward, where it’s less about the underlying components. In reality, if you are buying the service, you don’t care what sort of chips and speeds and feeds are being provided on the back end as long as you are getting the service you have asked for, with the SLA, the uptime, the reliability, and the capabilities you need. All of what sits behind that becomes less and less important.

Think about how you buy electricity. You just expect 110 volts at 60 hertz coming out of the wall, and you expect it to be on all the time. You expect it to be consistent, reliable, and safely delivered to you. How it gets generated, where it gets generated -- whether it’s a wind turbine, a coal-burning plant, a nuclear plant -- that’s not important to you. If it’s produced in one state and transferred to another over the grid, or if it’s produced in your local state, that all becomes less important. What really matters is that you are getting consistent and reliable services you can count on.
And we are seeing the same thing within IT as we move to that service model. The speeds and feeds, the infrastructure, become less important. All of the innovation is now being driven around the as-a-service model and what it takes to provide that service. We innovate at the service level, whether that’s for flex capacity or management services, in a true as-a-service capability.

Gardner: What do your consumer organizations need to think about to be innovative on their side? How can they be in a better position to consume these services such as hybrid IT management-as-a-service, hybrid cloud decision making, and the right mixture of decisions-as-a-service?

What comes next when it comes to how the enterprise IT organization needs to shift?

Business cycles speed IT up 

Vogel: At a business level, within almost every market and industry, we are moving from what used to be slow-cycle business to standard-cycle business, and in a lot of cases from standard-cycle to fast-cycle business. Even businesses that were traditionally slow-cycle or standard-cycle are accelerating. The underlying technology is creating that.

So every company is a technology company. That is becoming more and more true every day. As a result, it’s driving business cycles faster and faster. So, IT, in order to support those business cycles, has to move at that same speed.

And we see enterprises moving away from a traditional IT model when their IT cannot move at the speed the business demands. We still see IT take six months, for example, to provide a platform when the business says, “I need it in 20 minutes.”

We will see a split between traditional IT and a digital innovation group within the enterprise. This group will be owned by the business unit as opposed to core IT.

So, businesses are responding to IT not being able to move fast enough, or to provide the responsiveness and level of service they need, by looking outside and consuming services externally.

As we move forward, how can clients start to move in this direction? At HPE, the services we have announced and will be rolling out over the next six to 12 months are designed to help our clients move faster. They provide operational support and management for hybrid environments, taking that burden away from IT -- especially where IT may not have the skill sets or capability -- and delivering a seamless operating experience to our IT customers. Those customers need to focus on the things that accelerate their business; that is what the business units are demanding.

To stay relevant, IT is going to have to do that, too. They are going to have to look for help and support so that they can move at the same speed and pace that businesses are demanding today. And I don’t see that slowing down. I don’t think anybody sees that slowing down; if anything, we see the pace continuing to accelerate.

When I talk about fast-cycle: services or solutions we put into the market may once have had a shelf life of two to three years, and we are seeing that compressed to six months. It’s amazing how fast competition comes in, even with innovative solutions. So, IT has to accelerate at that speed as well.

The HPE GreenLake hybrid cloud offering, for example, gives our clients the ability to operate at that speed by providing managed services capabilities across the hybrid estate. It provides a consistent platform, and then allows them to innovate on top of it. It takes away the management operation from their focus and lets them focus on what matters to the business today, which is innovation.

Gardner: For you personally, Erik, where do you get inspiration for innovation? How do you think out of the box when we can now see that that’s a necessary requirement? 

Inspired by others 

Vogel: One of the best parts about my job is the time I get to spend with our customers and to really understand what their challenges are and what they are doing. One of the things we look at are adjacent businesses.

We try to learn what is working well in retail, for example. What innovation is there, and what lessons learned can we apply elsewhere? A lot of times the industry shifts so quickly that we don’t have all of the answers. We can’t take a product-out approach any longer; we really have to start from the customer and work back. Having that broad view and looking outside is really helping us. It’s where we are getting a lot of our inspiration.


For example, we are really focused on the overall experience that our clients have with HPE, and trying to drive a very consistent, standardized, easy-to-choose type of experience with us as a company. And it’s interesting as an engineering company, with a lot of good development and engineering capabilities, that we tend to look at it from a product-out view. We build a portal that they can work within, we create better products, and we get that out in front of the customer.

But by looking outside, we are saying, “Wait a minute, what is it, for example, about Uber that everybody likes?” It’s not necessarily that the app is good. It’s about the clean car; it’s about not having to pay when you get out of the car or fumble for a credit card. It’s about seeing a map and knowing where the driver is. It’s about predictable cost -- you know in advance what the ride will cost. That overall experience is what makes Uber, Uber. It’s not just creating an app and saying, “Well, the app is the experience.”

We are learning a lot from adjacent businesses and industries and incorporating that into what we are doing. It’s part of that as-a-service mentality: we have to think about the experience our customers are asking for and how we build solutions that meet that experience requirement -- not just the technical requirement. We are very good at the technical requirement, but how do we start to meet the experience requirement?
And this has been a real eye-opener for me personally. It has been a really fun part of the job, to look at the experience we are trying to create. How do we think differently? Rather than producing products and putting them out into the market, how do we think about creating that experience first and then designing and creating the solutions that sit underneath it?


When you talk about where we get inspiration, it’s really about looking at those adjacencies. It’s understanding what’s happening in the broader as-a-service market and taking the best of what’s happening and saying, “How can we employ those types of techniques, those tricks, those lessons learned into what we are doing?” And that’s really driving a lot of our development and inspiration in terms of how we are innovating as a company within HPE.