Wednesday, July 31, 2019

HPE’s Erik Vogel on what's driving success in hybrid cloud adoption and optimization


The next BriefingsDirect Voice of the Innovator discussion explores the latest insights into hybrid cloud success strategies.

As with the often ad hoc adoption of public cloud services by various groups across an enterprise, getting the right mix and operational coordination required of true hybrid cloud cannot succeed without deliberate management. While many businesses recognize there's a hybrid cloud future, far fewer are adopting a hybrid cloud approach with due diligence, governance, and cost optimization.

Stay with us as we examine the innovation maturing around hybrid cloud models and operations and learn how proper common management of hybrid cloud can make or break the realization of its promised returns.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to explain how to safeguard successful hybrid cloud deployments and operations is Erik Vogel, Global Vice President of Hybrid IT and Cloud at Hewlett Packard Enterprise (HPE). The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: The cloud model was very attractive, people jumped into it, but like with many things, there are unintended consequences. What’s driving cloud and hybrid cloud adoption, and what’s holding people back?

Vogel: All enterprises are hybrid at this point, and whether they have accepted that realization depends on the client. But pretty much all of them are hybrid. They are all using a combination of on-premises, public cloud, and software-as-a-service (SaaS) solutions. They have brought all of that into the enterprise. There are very few enterprises we talk to that don’t have some hybrid mix already in place.

Hybrid is here, but needs rationalization

Vogel
But when we ask them how they got there, most say they did it in an ad hoc fashion. Most have had developers who went out to one or multiple hyperscale cloud providers, or the business units went out and started to consume SaaS solutions, or IT organizations built their own on-premises solutions, whether that's an open private cloud or a Microsoft Azure Stack environment.

They have done all of this in pockets within the organization. Now, they are seeing the challenge of how to start managing and operating this in a consistent, common fashion. There are a lot of different solutions and technologies, yet everyone has their own operating model, own consoles, and own rules to work within.

And that is where we see our clients struggling. They don’t have a holistic strategy or approach to hybrid, but rather they’ve done it in this bespoke or ad hoc fashion. Now they realize they are going to have to take a step back to think this through and decide what is the right approach to enforce common governance and gain common management and operating principles, so that they’re not running 5, 6, 8 or even 10 different operating models. Rather, they need to ask, “How do we get back to where we started?” And that is a common operating model across the entire IT estate.

Gardner: IT traditionally over the years has had waves of adoption that led to heterogeneity that created complexity. Then that had to be managed. When we deal with multicloud and hybrid cloud, how is that different from the UNIX wars, or distributed computing, and N-tier computing? Why is cloud a more difficult heterogeneity problem to solve than the previous ones?

Vogel: It's more challenging. It's funny, we typically referred to what we used to see in the data center as the Noah's Ark data center. You would walk into a data center and see two of everything -- two of every vendor -- for just about everything within the data center.
How to Better Manage Multicloud Sprawl
And it was about 15 years ago when we started to consolidate all of that into common infrastructures, common platforms to reduce the operational complexity. It was an effort to reduce total cost of ownership (TCO) within the data center and to reduce that Noah’s Ark data center into common, standardized elements.

Now that pendulum is starting to swing back. It’s becoming more of a challenge because it’s now so easy to consume non-standard and heterogeneous solutions. Before there was still that gatekeeper to everything within the data center. Somebody had to make a decision that a certain piece of infrastructure or component would be deployed within the data center.

Now, developers can go to a cloud and, with just a swipe of a credit card, consume any of the three or four hyperscale solutions -- and literally thousands of SaaS solutions. Just look at the Salesforce.com platform and all of the different options that surround it.

All of a sudden, we lost the gatekeeper. Now we are seeing sprawl toward more heterogeneous solutions occurring even much faster than what we saw 10 or 15 years ago with the Noah’s Ark data center.

https://community.hpe.com/t5/Shifting-to-Software-Defined/Today-s-Challenge-Remove-Complexity-from-Multi-cloud-Hybrid-IT/ba-p/7013497#.XEtGplVKiM8

The pendulum is definitely shifting back toward consuming lots of different solutions with lots of different capabilities and services. And we are seeing it moving much faster than it did before because of that loss of a gatekeeper.

Gardner: Another difference is that we're talking mostly about services. By consuming things as services, we're acquiring them not as a capital expenditure on a three- to five-year renewal cycle, but as on-demand consumption, paid as we use it.

That makes it more complicated, but it also makes it a problem that can be solved more easily. Is there something about the nature of an all-services hybrid and multicloud environment on an operations budget that makes it more solvable?

Services become the norm 

Vogel: Yes, absolutely. The economics definitely play into this. I have this vision that within the next five years, we will no longer call things “as a service” because it will be the norm, the standard. We will only refer to things that are not as a service, because as an industry we are seeing a push toward everything being consumed as a service.

From an operating standpoint, the idea of consuming and paying for only what we use is very, very attractive. Again, if you look back 10 or 15 years, typically within a data center we'd be buying for a three- or four-year lifespan. That forced us to predict demand years out and commit capital expenditures against those predictions.

And what would happen? We would always overestimate. If you looked at utilization of CPU, of disk, of memory, they were always 20 to 25 percent; very low utilization, especially pre-virtualization. We would end up overbuying, pay the full load, and still pay for full maintenance and support, too.

There was very little ability to dial that up or down. The economic capability of being able to consume everything as a service is definitely changing the game, even for things you wouldn’t think of as a service, such as buying a server. Our enterprise customers are really taking notice of that because it gives them the ability to flex the expenditures as their business cycles go up and down.

Rarely do we see enterprises with constant demand for compute capacity. So, it’s very nice for them to be able to flex that up and down, adjust the normal seasonal effects within a business, and be able to flex that operating expense as their business fluctuates.
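To put rough numbers on that flexibility, here is a minimal sketch, with entirely hypothetical prices and capacities, comparing an owned, overprovisioned server against the same capacity consumed as a service at the low utilization rates just described.

```python
# Illustrative only: contrasts a fixed, overprovisioned purchase with
# pay-per-use consumption at the ~25 percent utilization cited above.
# All prices and capacities are made-up numbers for the sketch.

CAPEX_PER_SERVER = 12_000     # purchase price over a 4-year life (hypothetical)
MAINTENANCE_PER_YEAR = 1_500  # support contract (hypothetical)
YEARS = 4
UTILIZATION = 0.25            # typical pre-virtualization figure cited above

ON_DEMAND_RATE = 0.55         # $/server-hour for equivalent capacity (hypothetical)
HOURS_PER_YEAR = 8_760

owned_cost = CAPEX_PER_SERVER + MAINTENANCE_PER_YEAR * YEARS
# With pay-per-use you pay only for the hours you actually consume.
consumed_cost = ON_DEMAND_RATE * HOURS_PER_YEAR * YEARS * UTILIZATION

print(f"Owned (full load, 25% used): ${owned_cost:,.0f}")     # $18,000
print(f"As-a-service (pay for use):  ${consumed_cost:,.0f}")  # $4,818
```

With these illustrative numbers, paying only for consumed hours costs roughly a quarter of carrying the idle asset; the point is the shape of the comparison, not the specific figures.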

That is a key driver of moving everything to an as-a-service model, giving flexibility that just a few years ago we did not have.

Gardner: The good news is that these are services -- and we can manage them as services. The bad news is these are services coming from different providers with different economic and consumption models. There are different application programming interfaces (APIs), stock keeping unit (SKU) definitions, and management definitions that are unique to their own cloud organization. So how do we take advantage of the fact that it’s all services but conquer the fact that it’s from different organizations speaking, in effect, different languages?

Vogel: You're getting to the heart of the challenge of managing a hybrid environment. Applications are becoming more and more composed now; they are built from various pieces and services that may or may not be on-premises solutions.

One of our clients, for example, has built an application for their sales teams that provides real-time client data and client analytics before a seller goes in and talks to a customer. And when you look at the complexity of that application, they are using Salesforce.com, they have an on-premises customer database, and they get point-of-sale solutions from another SaaS provider.
Why You Need Everything As a Service
They also have analytics engines they get from one of the cloud hyperscalers. And all of this comes together to drive a mobile app that presents all of this information seamlessly to their end-user seller in real-time. They become better armed and have more information when they go meet with their end customer.

When we look at these new applications or services -- I don't even call them applications because they are really services built from multiple applications -- they cross multiple service providers, multiple SaaS providers, and multiple hyperscalers.

And as you look at how we interface and connect with those, how we pass data, exchange information across these different service providers, you are absolutely right, the taxonomies are different, the APIs are different, the interfaces and operations challenges are different.

When that seller goes to make that call, and they bring up their iPad app and all of a sudden, there is no data or it hasn’t been refreshed in three months, who do you call? How do you start to troubleshoot that? How do you start to determine if it’s a Salesforce problem, a database problem, a third-party service provider problem? Maybe it’s my encrypted connection I had to install between Salesforce and my on-premises solution. Maybe it’s the mobile app. Maybe it’s a setting on the iPad itself.

https://community.hpe.com/t5/Shifting-to-Software-Defined/Is-Multi-Cloud-Sprawl-Causing-Your-Money-to-Fly-Away/ba-p/7016402#.XEtJM1VKiM-

Adding up all of that complexity is what's building the problem. We don't have consistent APIs, consistent taxonomies, or even a consistent way of looking at billing and its underlying components. When we break that out, it varies greatly between service providers.

This is where we understand the complexity of hybrid IT. We have all of these different service providers, all working and operating independently, yet we're trying to bring them together to provide end-customer services. Composing those different services creates one of the biggest challenges we have today within the hybrid cloud environment.
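As a thought experiment, a first pass at triaging the composed sales service above might simply probe each dependency in turn. This is only a sketch; every endpoint name and URL is a hypothetical stand-in, and real checks would go through each provider's own APIs and authentication.

```python
import urllib.request

# Every endpoint here is a hypothetical stand-in for one leg of the
# composed service: CRM SaaS, point-of-sale feed, hyperscaler analytics,
# and the on-premises customer database behind a gateway.
DEPENDENCIES = {
    "salesforce-crm": "https://example.my.salesforce.com/services/data/",
    "pos-saas-feed": "https://pos.example-provider.com/health",
    "analytics-engine": "https://analytics.example-hyperscaler.com/status",
    "onprem-customer-db": "https://db-gw.internal.example.com/ping",
}

def probe(url: str, timeout: float = 5.0) -> str:
    """Return a coarse up/down status for one dependency."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return "OK" if resp.status < 400 else f"HTTP {resp.status}"
    except Exception as exc:  # DNS failure, TLS error, timeout, refused...
        return f"FAIL ({exc.__class__.__name__})"

for name, url in DEPENDENCIES.items():
    print(f"{name:20s} {probe(url)}")
```

Even this crude pass narrows the "who do you call?" question to the leg of the service that is actually failing.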

Gardner: Even if we solve the challenge on the functional level -- of getting the apps and services to behave as we want -- it seems as much or more a nightmare for the chief financial officer (CFO) who has to determine whether you're getting a good deal or buying redundancy across different cloud providers. A lot of times in procurement you cut a deal on volume. But how do you do that if you don't know what you're buying from whom?

How do we pay for these aggregate cloud services in some coordinated framework with the least amount of waste?

How to pay the bills 

Vogel: That is probably one of the most difficult jobs within IT today, the finance side of it. There are a lot of challenges of putting that bill together. What does that bill really look like? And not just at an individual component level. I may be able to see what I’m paying from Amazon Web Services (AWS) or what Azure Stack is costing me. But how do we aggregate that? What is the cost to provide a service? And this has been a challenge for IT forever. It’s always been difficult to slice it by service.

We knew what compute costs, what network costs, and what the storage costs were. But it was always difficult to make that vertical slice across the budget. And now we have made that problem worse because we have all these different bills coming in from all of these different service providers.
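To illustrate what that vertical slice could look like, the sketch below assumes each provider's bill can be normalized into a shared record and then rolled up by a business-service tag. The field names are assumptions loosely modeled on typical cost exports, not any provider's actual schema.

```python
from collections import defaultdict

# One normalizer per provider translates its taxonomy into a shared record.
# Field names are assumptions loosely modeled on typical cost exports.
def normalize_aws(item):
    return {"service": item.get("resourceTags/user:Service", "untagged"),
            "cost": float(item["lineItem/UnblendedCost"])}

def normalize_azure(item):
    return {"service": item.get("tags", {}).get("Service", "untagged"),
            "cost": float(item["costInBillingCurrency"])}

def normalize_onprem(item):
    return {"service": item.get("service", "untagged"),
            "cost": float(item["charge"])}

NORMALIZERS = {"aws": normalize_aws, "azure": normalize_azure,
               "onprem": normalize_onprem}

def cost_by_service(line_items):
    """line_items: iterable of (provider, raw_line_item) pairs."""
    totals = defaultdict(float)
    for provider, item in line_items:
        record = NORMALIZERS[provider](item)
        totals[record["service"]] += record["cost"]
    return dict(totals)

# Example: three line items from three different bills, one shared answer.
print(cost_by_service([
    ("aws", {"resourceTags/user:Service": "sales-app",
             "lineItem/UnblendedCost": "412.70"}),
    ("azure", {"tags": {"Service": "sales-app"},
               "costInBillingCurrency": "198.20"}),
    ("onprem", {"service": "sales-app", "charge": "840.00"}),
]))  # -> {'sales-app': 1450.9}
```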


The procurement challenge is even more acute because now we have these different service providers. How do we know what we are really paying? Developers swipe credit cards, so there is no true accounting of what's being spent across the public clouds. It comes through as a credit card expense and never really surfaces to IT.

We need to get our hands around these different expenses, where we are spending money, and think differently about our procurement models for these services.

In the past, we talked about this as a brokerage, but it's a lot more than that. It's more about strategic sourcing and procurement models for cloud and hybrid cloud-related services.

It’s less about brokerage and looking for that lowest-cost provider and trying to reduce the spend. It’s more about, are we getting the service-level agreements (SLAs) we are paying for? Are we getting the services we are paying for? Are we getting the uptime we are paying for?

Our IT procurement models have to change to address the problem of how we really know what we are paying for. Are we getting the strategic value out of the expenses within hybrid that we had expected?

Gardner: In business over the years, when you have a challenge, you can try to solve it yourself and employ intelligence technologies to tackle complexity. Another way is to find a third party that knows the business better than you do, especially for small- to medium-sized businesses (SMBs).

Are we starting to see an ecosystem develop where the consumption model for cloud services is managed more centrally, and then those services are repurposed and resold to the actual consumer business?

Third parties help manage hybrid costs

Vogel: Yes, I am definitely starting to see that. A lot is being developed to help customers consume and buy these services and be smarter about it. I always joke that the cheapest thing you can buy is somebody else's experience, and that is absolutely the case when it comes to hybrid cloud services providers.

The reality is no enterprise can have expertise in all three of the hyperscalers, in all of the hundreds of SaaS providers, for all of the on-premises solutions that are out there. It just doesn’t exist. You just can’t do it all.

It really becomes important to look for people who can aggregate this capability and bring the collective experience back to you. You have to reduce overspend and make smarter purchasing decisions. Buying via these third-party services can prevent things like lock-in and reduce risk. There is tremendous value being created by the firms that are jumping into that model and helping clients address these challenges.

The third-parties have people who have actually gone out and consumed and purchased within the hyperscalers, who have run workloads within those environments, and who can help predict what the true cost should be -- and, more importantly, maintain that optimization going forward.
How to Remove Complexity From Multicloud and Hybrid IT
It's not just about going in and buying anymore. There is ongoing optimization that has to occur, ongoing cost optimization where we're continuously evaluating whether we are making the right decisions. And we are finding that the calculus changes over time.

So, while it might have made a lot of sense to put a workload, for example, on-premises today, based on the demand for that application and on pricing changes, it may make more sense to move that same workload off-premises tomorrow. And then in the future it may also make sense to bring it back on-premises for a variety of reasons.

You have to constantly be evaluating that. That's where a lot of the firms playing in this space can add value now, in helping with ongoing optimization, by making sure that we are always making the smart decision. It's a very dynamic ecosystem, and the calculus, the metrics, are constantly changing. We have the ability to constantly reevaluate. That's the beauty of cloud: the ability to flex between these different providers.

Gardner: Erik, for those organizations interested in getting a better handle on this, are there any practical approaches available now?

The right mix of data and advice 

Vogel: We have a tool, called HPE Right Mix Advisor, which gives us the ability to go in and assess very large application portfolios. The nice thing is, it scales up and down very nicely. It is delivered in a service model, so we are able to assess a set of applications against the variables I mentioned, weighted by the factors that matter, and come up with a concrete list of recommendations as to what our clients should do right now.

In fact, we like to talk not about the thousand things they could do, but about the 10 or 20 things they should start on tomorrow morning -- the ones that are most impactful for their business.

The Right Mix Advisor tool helps identify those things that matter the most for the business right now, and provides a tactical plan to say, “This is what we should start on.”

And it's not just the tool; we also bring our expertise, whether that's from our Cloud Technology Partners (CTP) acquisition, RedPixie, or our existing HPE business where we have done this for years and years. So, it's not just the tool, but also experts looking at that data, helping to refine that data, and coming up with a smart list that makes sense for our clients to get started on right now.

And of course, once they have accomplished those things, we can come back and look at it again and say, “Here is your next list, the next 10 or 20 things.” And that’s really how Right Mix Advisor was designed to work.

Gardner: It seems to me there would be a huge advantage if you were able to get enough data about what's going on at the market level -- that is, to aggregate how the cloud providers are selling and charging, and what the consumption patterns are.

If you were in a position to gather all of the data about enterprise consumption among and between the cloud providers, you would have a much better idea of how to procure properly, manage properly, and optimize. Is such a data well developing? Is there anyone in the right position to be able to gather the data and start applying machine learning (ML) technologies to develop predictions about the best course of action for a hybrid cloud or hybrid IT environment?

Vogel: Yes. In fact, we have started down that path. HPE has started to tackle this by developing an expert system, a set of logic rules that helps make those decisions. We did it by combining a couple of fairly large datasets that we have developed over the last 15 years, primarily with HPE’s history of doing a lot of application migration work. We really understand on the on-premises side where applications should reside based on how they are architected and what the requirements are, and what type of performance needs to be derived from that application.

We have combined that with other datasets from some of our recent cloud acquisitions, CTP and RedPixie, for example. That has brought us a huge wealth of information based on a tremendous number of application migrations to the public clouds. And we are able to combine those datasets and develop this expert system that allows us to make those decisions pretty quickly as to where applications should reside based on a number of factors. Right now, we look at about 60 different variables.

But what’s really important when we do that is to understand from a client’s perspective what matters. This is why I go back to that strategic sourcing discussion. It’s easy to go in and assume that every client wants to reduce cost. And while every client wants to do that -- no one would ever say no to that -- usually that’s not the most important thing. Clients are worried about performance. They also want to drive agility, and faster time to market. To them that is more important than the amount they will save from a cost-reduction perspective.

The first thing we do when we run our expert system is weight the variables based on what's important to that specific client, aligned to their strategy. This is where it gets challenging for any enterprise trying to make smart decisions. In order to make strategic sourcing decisions, you have to understand strategically what's important to your business. You have to make intelligent decisions about where workloads should go across the hybrid IT options that you have. So we run an expert system to help make those decisions.
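As an illustration of that weighting step -- and emphatically not the actual Right Mix Advisor logic -- a client-weighted scoring pass over placement options might look like the sketch below. The factors, weights, and scores are made-up stand-ins for the roughly 60 variables mentioned above.

```python
# Not the actual Right Mix Advisor logic -- just a generic sketch of
# client-weighted scoring across placement options. All values illustrative.

# How much this client cares about each factor (weights sum to 1.0).
# This client values agility and performance over raw cost savings.
CLIENT_WEIGHTS = {"cost": 0.2, "performance": 0.35,
                  "agility": 0.3, "compliance": 0.15}

# How well each destination scores per factor for one application (0-10).
OPTIONS = {
    "on-premises":    {"cost": 6, "performance": 9, "agility": 4, "compliance": 9},
    "public-cloud":   {"cost": 7, "performance": 7, "agility": 9, "compliance": 6},
    "hosted-private": {"cost": 5, "performance": 8, "agility": 6, "compliance": 8},
}

def placement_score(scores, weights):
    """Weighted sum of factor scores for one placement option."""
    return sum(weights[factor] * value for factor, value in scores.items())

ranked = sorted(OPTIONS.items(),
                key=lambda kv: placement_score(kv[1], CLIENT_WEIGHTS),
                reverse=True)
for destination, scores in ranked:
    print(f"{destination:14s} {placement_score(scores, CLIENT_WEIGHTS):.2f}")
# public-cloud   7.45
# on-premises    6.90
# hosted-private 6.80
```

Shifting weight from agility back to cost reorders the ranking, which is exactly the per-client tuning described here.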

Now, as we collect more data, this will move toward more artificial intelligence (AI). As everybody is aware, AI requires a lot of data, and since we are still in the very early stages of true hybrid cloud and hybrid IT, we don't yet have a massive enough dataset to make these decisions in a truly automated, learning-type model.

We started with an expert system to help us do that, to move down that path. But very quickly we are learning, and we are building those learnings into our models that we use to make decisions.

So, yes, there is a lot of value in people who have been there and done that. Being able to bring that data together in a unified fashion is exactly what we have done to help our clients. Made manually, these decisions can take a year to figure out. You have to be able to make them quickly because it's a very dynamic model. A lot of things are constantly changing. You have to keep loading the models with the latest and greatest data so you are always making the best, smartest decision, and always optimizing the environment.

Innovation, across the enterprise 

Gardner: Not that long ago, innovation in a data center was about speeds and feeds. You would innovate on technology and pass along those fruits to your consumers. But now we have innovated on economics, management, and understanding indirect and direct procurement models. We have had to innovate around intelligence technologies and AI. We have had to innovate around making the right choices -- not just on cost but on operations benefits like speed and agility.

How has innovation changed such that it used to be a technology innovation but now cuts across so many different dynamic principles of business?

Vogel: It’s a really interesting observation. That’s exactly what’s happening. You are right, even as recently as five years ago we talked about speeds and feeds, trying to squeeze a little more out of every processor, trying to enhance the speed of the memory or the storage devices.

But now, as we have pivoted toward a services mentality, nobody asks when you buy from a hyperscaler -- Google Cloud, for example -- what central processing unit (CPU) chips they are running or what the chip speeds are. That’s not really relevant in an as-a-service world. So, the innovation then is around the service sets, the economic models, the pricing models, that’s really where innovation is being driven.

At HPE, we have moved in that direction as well. We provide our HPE GreenLake model and offer a flex-capacity approach where clients can buy capacity on-demand. And it becomes about buying compute capacity. How we provide that, what speeds and feeds we are providing becomes less and less important. It’s the innovation around the economic model that our clients are looking for.

We are only going to continue to see that type of innovation going forward, where it’s less about the underlying components. In reality, if you are buying the service, you don’t care what sort of chips and speeds and feeds are being provided on the back end as long as you are getting the service you have asked for, with the SLA, the uptime, the reliability, and the capabilities you need. All of what sits behind that becomes less and less important.

Think about how you buy electricity. You just expect 110 volts at 60 hertz coming out of the wall, and you expect it to be on all the time. You expect it to be consistent, reliable, and safely delivered to you. How it gets generated, where it gets generated -- whether it’s a wind turbine, a coal-burning plant, a nuclear plant -- that’s not important to you. If it’s produced in one state and transferred to another over the grid, or if it’s produced in your local state, that all becomes less important. What really matters is that you are getting consistent and reliable services you can count on.
And we are seeing the same thing within IT as we move to that service model. The speeds and feeds, the infrastructure, become less important. All of the innovation is now being driven around the as-a-service model and what it takes to provide that service. We innovate at the service level, whether that’s for flex capacity or management services, in a true as-a-service capability.

Gardner: What do your consumer organizations need to think about to be innovative on their side? How can they be in a better position to consume these services such as hybrid IT management-as-a-service, hybrid cloud decision making, and the right mixture of decisions-as-a-service?

What comes next when it comes to how the enterprise IT organization needs to shift?

Business cycles speed IT up 

Vogel: At a business level, within almost every market or industry, we are moving from what used to be slow-cycle business to standard-cycle, and in a lot of cases from standard-cycle to fast-cycle business. Even businesses that were traditionally slow-cycle or standard-cycle are accelerating, and the underlying technology is creating that.

So every company is a technology company. That is becoming more and more true every day. As a result, it’s driving business cycles faster and faster. So, IT, in order to support those business cycles, has to move at that same speed.

And we see enterprises moving away from a traditional IT model when those enterprises’ IT cannot move at the speed the business is demanding. We will still see IT, for example, take six months to provide a platform when the business says, “I need it in 20 minutes.”

We will see a split between traditional IT and a digital innovation group within the enterprise. This group will be owned by the business unit as opposed to core IT.

So, businesses are responding to IT not being able to move fast enough, or to provide the responsiveness and level of service they need, by looking outside and consuming services externally.

As we move forward, how can clients start to move in this direction? At HPE, as we look at some of the services we have announced and will be rolling out in the next six to 12 months, they are designed to help our clients move faster. They provide operational support and management for hybrid to take that burden away from IT, especially where IT may not have the skill sets or capability to provide that seamless operating experience to its customers. Those customers need to focus on the things that accelerate their business -- that is what the business units are demanding.

To stay relevant, IT is going to have to do that, too. They are going to have to look for help and support so that they can move at the same speed and pace that businesses are demanding today. And I don’t see that slowing down. I don’t think anybody sees that slowing down; if anything, we see the pace continuing to accelerate.

When I talked about fast-cycle -- where services or solutions we put into the market may have had a market shelf life of two to three years -- we are seeing that compressed to six months. It's amazing how fast competition comes in, even when we are delivering innovative solutions. So, IT has to accelerate at that speed as well.

The HPE GreenLake hybrid cloud offering, for example, gives our clients the ability to operate at that speed by providing managed services capabilities across the hybrid estate. It provides a consistent platform, and then allows them to innovate on top of it. It takes away the management operation from their focus and lets them focus on what matters to the business today, which is innovation.

Gardner: For you personally, Erik, where do you get inspiration for innovation? How do you think out of the box when we can now see that that’s a necessary requirement? 

Inspired by others 

Vogel: One of the best parts about my job is the time I get to spend with our customers and to really understand what their challenges are and what they are doing. One of the things we look at are adjacent businesses.

We try to learn what is working well in retail, for example. What innovation is there, and what lessons learned can we apply elsewhere? A lot of times the industry shifts so quickly that we don't have all of the answers. We can't take a product-out approach any longer; we have to look from the customer back. And I think having that broad view and looking outside is really helping us. It's where we are getting a lot of our inspiration.


For example, we are really focused on the overall experience that our clients have with HPE, and trying to drive a very consistent, standardized, easy-to-choose type of experience with us as a company. And it’s interesting as an engineering company, with a lot of good development and engineering capabilities, that we tend to look at it from a product-out view. We build a portal that they can work within, we create better products, and we get that out in front of the customer.

But by looking outside, we are saying, "Wait a minute, what is it, for example, about Uber that everybody likes?" It's not necessarily that their app is good. It's about the clean car, about not having to pay when you get out of the car or fumble for a credit card. It's about seeing a map and knowing where the driver is. It's about predictable cost, where you know up front what the ride will cost. That overall experience is what makes Uber, Uber. It's not just creating an app and saying, "Well, the app is the experience."

We are learning a lot from adjacent businesses, adjacent industries, and incorporating that into what we are doing. It's part of that as-a-service mentality where we have to think about the experience our customers are asking for and how we start building solutions that meet that experience requirement -- not just the technical requirement. We are very good at the latter, but how do we start to meet that experience requirement?
How to Develop Hybrid Cloud Strategies With Confidence
And this has been a real eye-opener for me personally. It has been a really fun part of the job, to look at the experience we are trying to create. How do we think differently? Rather than producing products and putting them out into the market, how do we think about creating that experience first and then designing and creating the solutions that sit underneath it?


When you talk about where we get inspiration, it’s really about looking at those adjacencies. It’s understanding what’s happening in the broader as-a-service market and taking the best of what’s happening and saying, “How can we employ those types of techniques, those tricks, those lessons learned into what we are doing?” And that’s really driving a lot of our development and inspiration in terms of how we are innovating as a company within HPE.

Monday, July 22, 2019

How total deployment intelligence overcomes the growing complexity of multicloud management


The next BriefingsDirect Voice of the Innovator discussion focuses on the growing complexity around multicloud management and how greater accountability is needed to improve business impacts from all-too-common haphazard cloud adoption.

Stay with us to learn how new tools, processes, and methods are bringing insights and actionable analysis that help regain control over the increasing challenges from hybrid cloud and multicloud sprawl.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to explore a more pragmatic path to modern IT deployment management is Harsh Singh, Director of Product Management for Hybrid Cloud Products and Solutions at Hewlett Packard Enterprise (HPE). The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What is driving the need for multicloud at all? Why are people choosing multiple clouds and deployments?

Singh
Singh: That's a very interesting question, especially today. You have to step back and think about why people went to the cloud in the first place -- and what the drivers were -- to understand how sprawl expanded to a multicloud environment.

Initially, when people began moving to public cloud services, the idea was speed, agility, and quick access to resources. IT was a bottleneck for getting on-premises resources. People said, "Let me get the work going and let me deploy things faster."

And they were able to quickly launch applications, which increased their velocity and time-to-market. Cloud helped them get there very fast. However, we now have a choice of multicloud environments -- various public clouds, plus private cloud environments where people can do similar things on-premises. There came a time when people realized, "Oh, certain applications fit in certain places better than others."

From cloud sprawl to cloud smart

For example, if I want to run a serverless environment, I might want to run in one cloud provider versus another. But if I want to run more machine learning (ML), artificial intelligence (AI) kinds of functionality, I might want to run that somewhere else. And if I have a big data requirement, with a lot of data to crunch, I might want to run that on-premises.

So you now have more choices to make. People are thinking about where’s the best place to run their applications. And that’s where multicloud comes in. However, this doesn’t come for free, right?
How to Determine Ideal Workload Placement
As you add more cloud environments and different tools, it leads to what we call tool sprawl. You now have people tying all of these tools together trying to figure out the cost of these different environments. Are they in compliance with the various norms we have within our organization? Now it becomes very complex very fast. It becomes a management problem in terms of, “How do I manage all of these environments together?”

Gardner: It’s become too much of a good thing. There are very good reasons to do cloud, hybrid cloud, and multicloud. But there hasn’t been a rationalization about how to go about it in an organizational way that’s in the best interest of the overall business. It seems like a rethinking of how we go about deploying IT in general needs to be part of it.

Singh: Absolutely right. I see three pillars that need to be addressed in terms of looking at this complexity and managing it well: people, process, and technology. The technology exists, but unless you have the right skill sets in the people -- and the right processes in place -- it's going to be the Wild West. Everything is just going to be crazy, and in the end you falter, not achieving what you really want to achieve.

I look at people, process, and technology as the three pillars for taming this tool sprawl, and getting them right is absolutely necessary for any company as they traverse their multicloud journey.

Gardner: This is a long-term, thorny problem. And it’s probably going to get worse before it gets better.

Singh: I do see it getting worse, but I also see a lot of people beginning to address these problems. Vendors, including we at HPE, are looking at this problem. We are trying to get ahead of it before a lot of enterprises crash and burn. We have experience with our customers, and we have engaged with them to help them on this journey.

It is going to get worse and people are going to realize that they need professional help. It requires that we work with these customers very closely and take them along based on what we have experienced together.

Gardner: Are you taking the approach that the solution for hybrid cloud management and multicloud management can be done in the same way? Or are they fundamentally different?

Singh: Fundamentally, it’s the same problem set. You must deploy the applications to the right places that are right for your business -- whether it’s multicloud or hybrid cloud. Sometimes the terminology blurs. But at the end of the day, you have to manage multiple environments.


You may be connecting private clouds or off-premises clouds, and maybe several different clouds. The problem is the same -- you have multiple tools and multiple environments, the people need training, and the processes need to be in place for them to operate properly.

Gardner: What makes me optimistic about the solution is there might be a fourth leg on that stool. People, process, and technology, yes, but I think there is also economics. One of the things that really motivates a business to change is when money is being lost and the business people think there is a way to resolve that.

The economics issue -- about cost overruns and a lack of discipline around procurement -- is both part of the problem and part of the solution.

Economics elevates visibility

Singh: I am laughing right now because I have talked to so many customers about this.  A CIO from an entertainment media company, for example, recently told me she had a problem. They had a cloud-first strategy, but they didn’t look at the economics piece of it. She didn’t realize, she told me, where their virtual machines (VMs) and workloads were running.

“At the end of the month, I’m seeing hundreds of thousands of dollars in bills. I am being surprised by all of this stuff,” she said. “I don’t even know whether they are in compliance. The overhead of these costs -- I don’t know how to get a handle on it.”

So this is a real problem that customers are facing. I have heard this again and again: They don't have visibility into the environment. They don't know what's being utilized. Sometimes resources are underutilized, sometimes they are overutilized. And they don't know what they are going to end up paying at the end of the day.

A common example is, in a public cloud, people will launch a very large number of VMs because that's what they are used to doing. But they consume maybe 10 to 20 percent of that capacity. What they don't realize is that they are paying the whole bill. More visibility is going to become key to getting a handle on the economics of these things.
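As a hedged illustration of that visibility gap, the sketch below flags oversized VMs from coarse utilization data. The fleet, thresholds, and savings rule of thumb are all hypothetical; real numbers would come from the provider's monitoring APIs.

```python
# A sketch of the visibility step: flag oversized instances from coarse
# utilization data. Instance names, sizes, and costs are hypothetical.

FLEET = [
    # (vm_name, vcpus, avg_cpu_util, monthly_cost_usd)
    ("web-frontend-01", 16, 0.12, 560.0),
    ("batch-runner-02",  8, 0.78, 280.0),
    ("report-gen-03",   32, 0.09, 1120.0),
]

UTIL_THRESHOLD = 0.20  # matches the 10-20 percent pattern described above

for name, vcpus, util, cost in FLEET:
    if util < UTIL_THRESHOLD:
        # Rough rule of thumb: halving an idle VM roughly halves its bill.
        print(f"{name}: {util:.0%} busy on {vcpus} vCPUs -- "
              f"downsizing could save ~${cost / 2:,.0f}/month")
```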

Gardner: We have seen these kinds of problems before in general business procurement. Many times it's the Wild West, but then they bring it under control. Then they can negotiate better rates as they combine services and look for redundancies. But you can't do that until you know what you're using and what it costs.

So, is the first step getting an inventory of where your cloud deployments are, what the true costs are, and then start to rationalize them?

Guardrails reduce risk, increase innovation

Singh: Absolutely, right. That's where you start, and at HPE we have services to do that. The first thing is to understand where you are. Get a baseline of what is on-premises, what is off-premises, and which applications are required to run where. What's the footprint I require in these different places? What is the overall cost I'm incurring, and where do I want to be? Answering those questions is the first step to getting a mixed environment you can control -- and getting away from the Wild West.

Put in the compliance guardrails so that IT can get ahead of the problems we are seeing today.

Gardner: As a counterpoint, I don’t think that IT wants to be perceived as the big bad killjoy that comes to the data scientists and says, “You can’t get those clusters to support the data environment that you want.” So how do you balance that need for governance, security, and cost control with not stifling innovation and allowing creative freedom?
How to Transform The Traditional Datacenter
Singh: That’s a very good question. When we started building out our managed cloud solutions, a key criterion was to provide the guardrails yet not stifle innovation for the line of business managers and developers. The way you do that is that you don’t become the man in the middle. The idea is you allow the line of businesses and developers to access the resources they need. However, you put guardrails around which resources they can access, how much they can access, and you provide visibility into the budgets. You still let them access the direct APIs of the different multicloud environments.

You don't say, "Hey, you have to put in a request to us to do these things." You have to be more behind-the-scenes, hidden from view. At the same time, you need to provide those budgets and those controls. Then they can perform their tasks at the speed they want and access the resources they need -- but within the guardrails, compliance, and business requirements that IT has.
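One way to picture guardrails that stay out of the developer's way is a lightweight pre-check that runs before a request proceeds directly to the provider's own APIs. This is only a sketch; the policy values and the request shape are assumptions.

```python
# A sketch of guardrails that stay out of the developer's way: requests go
# straight to the provider APIs, but a lightweight policy check runs first.
# Policy values and the request shape are hypothetical.

POLICY = {
    "allowed_regions": {"us-east-1", "eu-west-1"},
    "max_vcpus_per_request": 16,
    "monthly_budget_usd": 25_000,
}

def within_guardrails(request: dict, spent_this_month: float) -> tuple[bool, str]:
    """Approve or reject a resource request against IT's policy."""
    if request["region"] not in POLICY["allowed_regions"]:
        return False, f"region {request['region']} not approved"
    if request["vcpus"] > POLICY["max_vcpus_per_request"]:
        return False, "instance size exceeds quota"
    if spent_this_month + request["est_monthly_cost"] > POLICY["monthly_budget_usd"]:
        return False, "would exceed monthly budget"
    return True, "approved -- proceed directly against the provider API"

ok, reason = within_guardrails(
    {"region": "us-east-1", "vcpus": 8, "est_monthly_cost": 900.0},
    spent_this_month=21_500.0,
)
print(ok, reason)  # True approved -- proceed directly against the provider API
```

The developer never files a ticket; the check either passes silently or explains which guardrail it hit.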

Gardner: Now that HPE has been on the vanguard of creating the tools and methods to get the necessary insights, make the measurements, recognize the need for balance between control and innovation -- have you noticed changes in organizational patterns? Are there now centers of cloud excellence or cloud-management bureaus? Does there need to be a counterpart to the tools, of management structure changes as well?

Automate, yet hold hands, too

Singh: This gets to the process and people parts that you want to address. How do you align your organizations, and what are the things that you need to do there? Some of our customers are beginning to make those changes, but organizations are difficult to change on this journey. Some of them are early; some of them are at a much later stage. A lot of customers, frankly, are still in the early phases of multicloud and hybrid cloud. We are working with them to make sure they understand the changes they'll need to make in order to function properly in this new world.


Gardner: Unfortunately, these new requirements come at a time when cloud management skills -- of understanding data and ops, IT and ops, and cloud and ops -- are hard to find and harder to keep. So one of the things I’m seeing is the adoption of automation around guidance, strategy, and analysis. The systems start to do more for you. Tell me how automation is coming to bear on some of these problems, and perhaps mitigate the skill shortage issues.

Singh: The tools can only do so much. So you automate. You make sure the infrastructure is automated. You make sure your access to public cloud -- or any other cloud environment -- is automated.

That can mitigate some of the problems, but I still see a need for hand-holding from time to time in terms of the process and people. That will still be required. Automation will help tie in storage, network, and compute, and you can put all of that together. This [composability] reduces the need and dependency on some of the process and people. Automation mitigates the physical labor and the need for someone to take days to do it. However, you need that expertise to understand what needs to be done. And this is where HPE is helping.

You might have heard about our HPE GreenLake managed cloud services offerings. We are moving toward an as-a-service model for a lot of our software and tooling. We are using the automation to help customers fill the expertise gap. We can offer more of a managed service by using automation tools underneath it to make our tasks easier. At the end of the day, the customer only sees an outcome or an experience -- versus worrying about the details of how these things work.

Gardner: Let’s get back to the problem of multicloud management. Why can't you just use the tools that the cloud providers themselves provide? Maybe you might have deployments across multiple clouds, but why can’t you use the tools from one to manage more? Why do we need a neutral third-party position for this?

Singh: Take a hypothetical case: I have deployments in Amazon Web Services (AWS) and I have deployments in Google Cloud Platform (GCP). And to make things more complicated, I have some workloads on premises as well. How would I go about tying these things together?

Now, if I go to AWS, they are very, very opinionated on AWS services. They have no interest in looking at bills coming out of GCP or Microsoft Azure. They are focused on their services and what they are delivering. The reality, however, is that customers are using these different environments for different things.

The multiple public cloud providers don’t have an interest in managing other clouds or to look at other environments. So third parties come in to tie everything together, and no one customer is locked into one environment.


If they go to AWS, for example, they can only look at billing, services, and performance metrics of that one service. And they do a very good job. Each one of these cloud guys does a very good job of exposing their own services and providing you visibility into their own services. But they don’t tie it across multiple environments. And especially if you throw the on-premises piece into the mix, it’s very difficult to look at and compare costs across these multiple environments.

Gardner: When we talk about on-premises, we are not just talking about the difference between your data center and a cloud provider's data center. We are also talking about the difference between a traditional IT environment and the IT management tools that came out of that. How has HPE crossed the chasm between traditional IT management automation and composability benefits and the higher-level multicloud management?

Tying worlds together

Singh: It’s a struggle to tie these worlds together from my experience, and I have been doing this for some time. I have seen customers spend months and sometimes years, putting together a solution from various vendors, tying them together, and deploying something on premises and also trying to tie that to an off-premises environment.

At HPE, we fundamentally changed how on-premises and off-premises environments are managed by introducing our own software-as-a-service (SaaS) management environment, which customers do not have to manage. That SaaS environment, a portal, connects on-premises environments. Since we have a native, programmable, API-driven infrastructure, we were able to connect that. And being able to drive it from the cloud itself made it very easy to hook up to other cloud providers like AWS, Azure, and GCP. This capability ties the two worlds together. As you build out the tools, the key is understanding automation on the infrastructure piece, and how you can connect and manage this from a centralized portal that ties all these things together with a click.

Through this common portal, people can onboard their multicloud environments, get visibility into their costs, get visibility into compliance -- look at whether they are HIPAA compliant or not, PCI compliant or not -- and get access to resources that allow them to begin to manage these environments.
How to Better Manage Hybrid and Multicloud Economics
For example, onboarding into any public cloud is very, very complex. Setting up a private cloud is very complex. But today, with the software that we are building, and some of our customers are using, we can set up a private cloud environment for people within hours. All you have to do is connect with our tools like HPE OneView and other things that we have built for the infrastructure and automation pieces. You then tie that together to a public cloud-facing tenant portal and onboard that with a few clicks. We can connect with their public cloud accounts and give them visibility into their complete environment.

And then we can bring in cost analytics. We have consumption analytics as part of our HPE GreenLake offering, which allows us to look at cost for on-premises as well as off-premises resources. You can get a dashboard that shows you what you are consuming and where.

Gardner: That level of management and the capability to be distributed across all these different deployment models strikes me as a gift that could keep on giving. Once you have accomplished this and get control over your costs, you are next able to rationalize what cloud providers to use for which types of workloads. It strikes me that you can then also use that same management and insight to start to actually move things around based on a dynamic or even algorithmic basis. You can get cost optimization on the fly. You can react to market forces and dynamics in terms of demand on your servers or on your virtual machines anywhere.

Are you going to be able to accelerate the capability for people to move their fungible workloads across different clouds, both hybrid and multicloud?

Optimizing for the future

Singh: Yes, absolutely right. There is more complexity in terms of moving workloads here and there, because there are data-proximity requirements and various other requirements. But the optimization piece is absolutely something we can do on the fly, especially if you start throwing AI into the mix.


You will be learning over time what needs to be deployed where, and where your data gravity might be, and where you need applications closer to the data. Sometimes it’s here, sometimes it’s there. You might have edge environments that you might want to manage from this common portal, too. All that can be brought together.

And then with those insights, you can make optimization decisions: “Hey, this application is best deployed in this location for these reasons.” You can even automate that. You can make that policy-driven.

Think about it this way -- you are a person who wants to deploy something. You request a resource, and that gets deployed for you based on the algorithm that has already decided where the optimal place to put it is. All of that works behind the scenes without you having to really think about it. That’s the world we are headed to.
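A minimal sketch of that behind-the-scenes routing, assuming made-up rule names and workload attributes, might look like this:

```python
# A sketch of policy-driven placement: a deploy request is routed by rules
# the organization has already agreed on, so the requester never has to
# think about it. Rules, targets, and attributes are all hypothetical.

def place(workload: dict) -> str:
    """Pick a target environment from workload attributes."""
    if workload.get("data_locality") == "on-prem":  # data gravity wins first
        return "on-premises"
    if workload.get("needs_gpu_ml"):
        return "hyperscaler-a"   # assumed strongest ML services
    if workload.get("serverless"):
        return "hyperscaler-b"   # assumed preferred FaaS platform
    return "lowest-cost-pool"    # default: cheapest compliant environment

request = {"name": "nightly-etl", "data_locality": "on-prem", "serverless": False}
print(place(request))  # -> on-premises, chosen behind the scenes
```

In practice these rules would be fed by the cost and utilization insights discussed above, and revisited as the economics shift.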

Gardner: We have talked about some really interesting subjects at a high level, even some thought leadership involved. But are there any concrete examples that illustrate how companies are already starting to do this? What kinds of benefits do they get?

Singh: I won’t name the company, but there was a business in the UK that was able to deploy VMs within minutes on their on-premises environment, as well as gain cost benefits out of their AWS deployments.

We were able to go in, connect to their VMware environment, in this case, and allow them to deploy VMs. We were up and running in two hours. Then they could optimize for their developers to deploy VMs and request resources in that environment. They saved 40 percent in operational efficiency. So now they were mostly cost optimized, their IT team was less pressured to go and launch VMs for their developers, and they gained direct self-service access through which they could go and deploy VMs and other resources on-premises.

At the same time, IT had the visibility into what was being deployed in the public cloud environments. They could then optimize those environments for the size of the VMs and assets they were running there and gain some cost advantages there as well.
How to Solve Cost and Utilization Challenges of Hybrid Cloud
Gardner: For organizations that recognize they have a sprawl problem when it comes to cloud, that their costs are not being optimized, but that they are still needing to go about this at sort of a crawl, walk, run level -- what should they be doing to put themselves in an advantageous position to be able to take advantage of these tools?

Are there any precursor activities that companies should be thinking about to get control over their clouds, and then be able to better leverage these tools when the time comes?

Watch your clouds

Singh: Start with visibility. You need an inventory of what you are doing. And then you need to ask the question, “Why?” What benefit are you getting from these different environments? Ask that question, and then begin to optimize. I am sure there are very good reasons for using multicloud environments, and many customers do. I have seen many customers use it, and for the right reasons.

However, there are other people who have struggled because there was no governance and guardrails around this. There were no processes in place. They truly got into a sprawled environment, and they didn’t know what they didn’t know.

So first and foremost, get an idea of what you want to do and where you are today -- get a baseline. And then, understand the impact and what are the levers to the cost. What are the drivers to the efficiencies? Make sure you understand the people and process -- more than the technology, because the technology does exist, but you need to make sure that your people and process are aligned.

And then lastly, call me. My line is open. I am happy to talk with any customer who wants to talk.
How to Achieve Composability Across Your Datacenter
Gardner: On that note of the personal approach, people who are passionate in an organization around things like efficiency and cost control are looking for innovation. Where do you see the innovation taking place for cloud management? Is it the IT Ops people, the finance people, maybe procurement? Where is the innovative thinking around cloud sprawl manifesting itself?

Singh: All three are good places for innovation. I see IT Ops at the center of the innovation. They are the ones who will be effecting change.


Finance and procurement could benefit from these changes, and they could be drivers of the requirements. They are going to be saying, "I need to do this differently because it doesn't work for me." And the innovation also comes from developers and line-of-business managers who have been doing this for a while and who understand what they really need.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in: