Friday, August 17, 2018

New strategies emerge to stem the costly downside of complex cloud choices

The next BriefingsDirect hybrid IT management strategies interview explores how jerry-rigged approaches to cloud adoption at many organizations have spawned complexity amid spiraling -- and even unknown -- costs.

We’ll hear now from an IT industry analyst about what causes unwieldy cloud use, and how new tools, processes, and methods are bringing insights and actionable analysis to regain control over hybrid IT sprawl.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us explore new breeds of hybrid and multicloud management solutions is Rhett Dillingham, Vice President and Senior Analyst at Moor Insights and Strategy. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What makes hybrid and multicloud adoption so complex?

Dillingham: Regardless of how an enterprise has invested in public and private cloud use over the last decade, a lot of them ended up in a similar situation. They have a footprint on at least one or multiple public clouds. This is in addition to their private infrastructure, to whatever degree that infrastructure has been cloud-enabled and made available to their developers through cloud APIs.

They have this footprint then across the hybrid infrastructure and multiple public clouds. Therefore, they need to decide how they are going to orchestrate on those various infrastructures -- and how they are going to manage in terms of control costs, security, and compliance. They are operating cloud-by-cloud, versus operating as a consolidated group of infrastructures that use common tooling. This is the real wrestling point for a lot of them, regardless of how they got here.

Gardner: Where are we in this as an evolution? Are things going to get worse before they get better in terms of these levels of complexity and heterogeneity?

Dillingham: We’re now at the point where this is so commonly recognized that we are well into the late majority of adopters of public cloud. The vast majority of the market is in this situation, and from an enterprise market perspective, it’s going to get worse.

We are also at the inflection point of requiring orchestration tooling, particularly with the advent of containers. Container orchestration is getting more mature in a way that is ready for broad adoption and trust by enterprises, so they can make bets on that technology and the platforms based on them.

Control issues 

On the control side, we’re still in the process of sorting out the tooling. You have a number of vendors innovating in the space, and there have been a number of startup efforts. Now, we’re seeing more of the historical infrastructure providers invest in the software capabilities and turning those into services -- whether it’s Hewlett Packard Enterprise (HPE), VMware, or Cisco, they are all making serious investments into the control aspect of hybrid IT. That’s because their value is private cloud but extends to public cloud with the same need for control.

Gardner: You mentioned containers, and they provide a common denominator approach so that you can apply them across different clouds, with less arduous and specific work than deploying without containerization. The attractiveness of containers comes because the private cloud people aren’t going to help you deal with your public cloud deployment issues. And the public clouds aren’t necessarily going to help you deal with other public clouds or private clouds. Is that why containers are so popular?
Dillingham: If you go back to the fundamental basis of adoption of cloud and the value proposition, it was first and foremost about agility -- more so than cost efficiency. Containers are a way of extending that value, and getting much deeper into speed of development, time to market, and for innovation and experimentation.

Containerization is an improvement geared around that agility value that furthers cloud adoption. It is not a stark difference from virtual machines (VMs), in the sense of how the vendors support and view it.

So, I think a different angle on that would be that the use of VMs in public cloud was step one, containers was a significant step two that comes with an improved path to the agility and speed value. The value the vendor ecosystem is bringing with the platforms -- and how that works in a portable way across hybrid infrastructures and multi-cloud -- is more easily delivered with containers.

There’s going to be an enterprise world where orchestration runs specific to cloud infrastructure, public versus private, but different on various public clouds. And then there is going to be more commonality with containers by virtue of the Kubernetes project and Cloud Native Computing Foundation (CNCF) portfolio.

That’s going to deliver for new applications -- and those lifted and shifted into containers -- much more seamless use across these hybrid infrastructures, at least from the control perspective.

Gardner: We seem to be at a point where the number of cloud options has outstripped the ability to manage them. In a sense, the cart is in front of the horse; the horse being hybrid cloud management. But we are beginning to see more such management come to the fore. What does this mean in terms of previous approaches to management?

In other words, a lot of organizations already have management for solving a variety of systems heterogeneity issues. How should the new forms of management for cloud have a relationship with these older management tools for legacy IT?

Dillingham: That is a big question for enterprises. How much can they extend their existing toolsets to public cloud?

A lot of the vendors from the private [infrastructure] sector invested in delivering new management capabilities, but that isn’t where many started. I think the rush to adoption of public cloud -- and the focus on agility over cost-efficiency -- has driven a predominance of the culture of, “We are going to provide visibility and report and guide, but we are not going to control because of the business value of that agility.”

And the tools have grown up as a delivery on that visibility, versus the control of the typical enterprise private infrastructure approach, which is set up for a disruptive orientation to the software and not continuity. That is an advantage to vendors in those different spheres. I see that continuing.

Gardner: You mentioned both agility and cost as motivators for going to hybrid cloud, but do we get to the point where the complexity and heterogeneity spawn a lack of insight and control? Do we get to the point where we are no longer increasing agility? And that means we are probably not getting our best costs either.

Are we at a point where the complexity is subverting our agility and our ability to have predictable total costs?

Growing up in the cloud 

Dillingham: We are still a long way from maturity in effective use of cloud infrastructure. We are still at a point where just understanding what is optimal is pretty difficult across the various purchase and consumption options of public cloud by provider, and in comparing that to an accurate cost model for private infrastructure. So, the tooling needs to be in place to support this.

There has been a lot of discussion recently about HPE OneSphere from Hewlett Packard Enterprise, where they have invested in delivering some of this comparability and the analytics to enable better decision-making. I see a lot of innovation in that space -- and that’s just the tooling.

There is also the management of the services, where the cloud managed service provider market is continuing to develop beyond just a brokering orientation. There is more value now in optimizing an enterprise’s footprint across various cloud infrastructures on the basis of optimal agility. And also creating value from services that can differentiate among different infrastructures – be it Amazon Web Services (AWS) versus Azure, and Google, and so forth – and provide the cost comparisons.

Gardner: Given that it’s important to show automation and ongoing IT productivity, are these new management tools including new levels of analytics, maybe even predictive insights, into how workloads and data can best become fungible -- and moved across different clouds -- based on the right performance and/or cost metrics?

Is that part of the attractiveness to a multi- and cross-cloud management capability? Does hybrid cloud management become a slippery slope toward impressive analytics and/or performance-oriented automation?

Dillingham: We’ve had investment in the tooling from the cloud providers, the software providers, and the infrastructure providers. Yet the insights have come more from the professional services’ realm than they have from the tooling realm. That’s provided a feedback loop that can now be applied across hybrid- and multi-cloud in a way that hasn’t come from the public cloud provider tools themselves.
So, where I see the most innovation is from the providers that are trying to address multi-cloud environments and best feed innovation from their customer engagements from professional services. I like the opportunity HPE has to benefit from their acquisitions of Cloud Technology Partners and RedPixie, and then feeding those insights back into [product development]. I’ve seen a lot of examples about the work they’re doing in HPE OneSphere in moving those insights into action for customers through analytics.

Gardner: I was also thinking about the Nimble acquisition, and with InfoSight, and the opportunity for that intellectual property to come to bear on this, too.

Dillingham: Yes, which is really harvesting the value of the control and insights of the private infrastructure and the software-defined orientation of private infrastructure in comparison to the public cloud options.

Gardner: Tell us about Rhett Dillingham. You haven’t been an IT industry analyst forever. Please tell us a bit about your background.

Dillingham: I’ve been a longtime product management leader. I started in hardware, at AMD, and moved into software. Before the cloud days, I was at Microsoft. Next I was building out the early capabilities at AWS, such as Elastic Compute Cloud (EC2) and Elastic Block Store (EBS). Then I went into a portfolio of services at Rackspace, building those out at the platform level and the overall Rackspace public cloud. As the value of OpenStack matured into private use, I worked with a number of enterprises on private OpenStack cloud deployments.

As an analyst, I support product management-oriented, consultative, and go-to-market positioning of our clients.

Gardner: Let’s dwell on the product management side for a bit. Given that the market is still immature, given what you know customers are seeking for a hybrid IT end-state, what should vendors such as HPE be doing in order to put together the right set of functions, processes, and simplicity -- and ultimately, analytics and automation -- to solve the mess among cloud adoption patterns and sprawl?

Clean up the cloud mess 

Dillingham: We talked about automation and orchestration, talked about control of cost, security, and compliance. I think that there is a tooling and services spectrum to be delivered on those. The third element that needs to be brought into the process is the control structure of each enterprise, of what their strategy is across the different infrastructures.

Where are they optimizing on cost based on what they can do in private infrastructure? Where are they setting up decision processes? What incremental services should be adopted? What incremental clouds should be adopted, such as what an Oracle and an IBM are positioning their cloud offerings to be for adoption beyond what’s already been adopted by a client in AWS, Google, and Azure?

I think there’s a synergy to be had across those needs. This spans from the software and services tooling, into the services and managed services, and in some cases when the enterprise is looking for an operational partner.

Gardner: One of the things that I struggle with, Rhett, is not just the process, the technology and the opportunity, but the people. Who in a typical enterprise IT organization should be tasked with such hybrid IT oversight and management? It involves more than just IT.

To me, it’s economics, it’s procurement, it’s contracts. It involves a bit more than red light, green light … on speed. Tell me about who or how organizations need to change to get the right people in charge of these new tools.

Who’s in charge?

Dillingham: More than the individuals, I think this is about the recognition of the need for partnerships between the business units, the development organizations, and the operational IT organization’s arm of the enterprise.

The focus on agility for business value had a lot of the cloud adoption led by the business units and the application development organizations. As the focus on maturity mixes in the control across security and compliance, those are traditional realms of the IT operational organization.

Now there’s the need for a decision structure around sourcing, where valuing incremental capabilities from more clouds and cloud providers involves tradeoffs and complexity. As you were mentioning, it means weighing the incremental value of an additional provider or an incremental service against portability across them.

What I am seeing in the most mature setups are partnerships across the orientations of those organizations. That includes the acknowledgment and reconciliation of those tradeoffs in long-term portability of applications across infrastructures – against the value of adoption of proprietary capabilities, such as deeper cognitive machine learning (ML) automation and Internet of Things (IoT) capabilities, which are some of the drivers of the more specific public cloud platform uses.

Gardner: So with adopting cloud, you need to think about the organizational implications and refactor how your business operates. This is not just bolting on a cloud capability. You have to rethink how you are doing business across the board in order to take full advantage.

Dillingham: There is wide recognition of that theme. It gets into the nuts and bolts as you adopt a platform and determine exactly how the operations function and roles will be defined. That means deciding who is going to handle what, such as how much you are going to empower developers to do things themselves, with the accountability and tradeoffs that come with those roles. But organizations often over-rotate toward that operational detail while undervaluing the more senior-level decision-making about what their cloud strategy actually is.
I hear a lot of cloud strategies that are as simple as, “Yes, we are allowing and empowering adoption of cloud by our development teams,” without the second-level recognition of the need to have a strategy for what the guidelines are for that adoption – not in the sense of just controlling costs, but in the sense of: How do you view the value of long-term portability? How do you value strategic sourcing and the ability to negotiate across these providers long-term with evidence and demonstrable portability of your application portfolio?

Gardner: In order to make those proper calls on where you want to go with cloud and to what degree, across which provider, organizations like HPE are coming up with new tools.

So we have heard about HPE OneSphere. We are now seeing HPE’s GreenLake Hybrid Cloud, which delivers HPE OneSphere management as a service. Is that the way to go? Should we think of cloud management oversight and optimization as a set of services, rather than a product or a tool? It seems to me that a set of services, with an ecosystem behind them, is pretty powerful.

A three-layer cloud 

Dillingham: I think there are three layers to that. One is the tool, whether that is consumed as software or as a service.

Second is the professional consultative services around that, to the degree that you as an enterprise need help getting up to speed in how your organization needs to adjust to benefit from the tools and the capabilities the tools are wrangling.

And then third is a decision on whether you need an operational partner from a managed service provider perspective, and that's where HPE is stepping up and saying we will handle all three of these. We will deliver your tools in various consumption models on through to a software-as-a-service (SaaS) delivery model, for example, with HPE OneSphere. And we will operate the services for you beyond that SaaS control portal into your infrastructure management, across a hybrid footprint, with the HPE GreenLake Hybrid Cloud offering. It is very compelling.

Gardner: With so many moving parts, it seems that we need certain things to converge, which is always tricky. So to use the analogy of properly intercepting a hockey puck, the skater is the vendor trying to provide these services, the hockey puck is the end-user organization that has complexity problems, and the ice is a wide-open market. We would like to have them all come together productively at some point in the future.

We have talked about the vendors; we understand the market pretty well. But what should the end-user organizations be starting to do and think in order for them to be prepared to take advantage of these tools? What should be happening inside your development, your DevOps, and that larger overview of process and organization in order to say, “Okay, we’re going to take advantage of that hockey player when they are ready, so that we can really come together and be proficient as a cloud-first organization?”

Commit to an action plan

Dillingham: You need to have a plan in place for each element we have talked about. There needs to be a plan for how you are maturing your toolset in cloud-native development … how you are supporting that on the development side from a continuous integration (CI) and continuous delivery (CD) perspective; how you are reconciling that with the operational toolset and the culture of operating in a DevOps model, with whatever degree of iterative development you want to enable.

Is the tooling in place from an orchestration, development-capability, and operations perspective, whether that involves containers or not? That gets into container orchestration and the cloud management platforms. Then there is the control aspect: What tooling are you going to apply there, how are you going to consume it, and how much do you want delivered as a consultative offer? How much do you want those options managed for you by an operational partner? And how are you going to set up your decision-making structure internally?

Every element of that is where you need to be maturing your capabilities. A lot of the starting baseline for the consultative value of a professional services partner is walking you through the decision-making that is common to every organization on each of those fronts, and then enabling a deep discussion of where you want to be in 3, 5, or 10 years, and deciding proactively.

More importantly than anything, what is the goal? There is a lot of oversimplification of what the goal is – such as adoption of cloud and picking of best-of-breed tools -- without a vision yet for where you want the organization to be and how much it benefits from the agility and speed value, and the cost efficiency opportunity.

Gardner: It’s clear that those organizations that can take that holistic view, that have the long-term picture in mind, and can actually execute on it, have a significant advantage in whatever market they are in. Is that fair?
Dillingham: It is. And one thing that I think we tend to gloss over -- but does exist -- is a dynamic where some of the decision-makers are not necessarily incentivized to think and consider these options on a long-term basis.

The folks who are in role, often for one to three years before moving to a different role or a different enterprise, are going to consider these options differently than someone who has been in role for 5 or 10 years and intends to be there through this full cycle and outcome. I see those decisions made differently, and I think sometimes the executives watching this transpire are missing that dynamic and allowing some decisions to be made that are more short-term oriented than long-term.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Wednesday, August 15, 2018

Huge waste in public cloud spend sets stage for next wave of total cloud governance solutions, says 451's Fellows

IT architects and operators face an increasingly complex mix of identifying and automating for both the best performance and the best price points across their cloud options.

The modern IT services procurement task is made more difficult by the vast choices public cloud providers offer -- literally, hundreds of thousands of service options.

New tools to help optimize cloud economics are arriving, but in the meantime, unchecked waste is rampant across the total spend for cloud computing, research shows.

The next BriefingsDirect Voice of the Analyst hybrid IT and multicloud management discussion explores the causes of unwieldy cloud use and how new tools, processes, and methods are bringing insights and actionable analysis to gain control over hybrid IT sprawl.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

Here to help explain the latest breed of cloud governance solutions is William Fellows, Founder and Research Vice President at 451 Research. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: How much waste is really out there when it comes to enterprises buying and using public cloud services?

Fellows: Well, a lot -- and it’s growing daily. Specifically this is because buyers are now spending thousands, tens of thousands, and even in some cases, millions of dollars a month on their cloud services. So, the amount of waste goes up as the bill goes up.

As anyone who works in the field can tell you, by using some cost optimization and resource optimization tools you can save the average organization about 30 percent of the cost on their monthly bill.

If your monthly bill is $100, that’s one amount; if your monthly bill is a million dollars, that’s quite another. That’s the scale of waste, in percentage terms, being seen out there.
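To put that rule of thumb in concrete terms, here is a minimal sketch in Python. The bill amounts and the flat 30 percent rate are illustrative assumptions drawn from the discussion, not data from the research:

```python
# Rough savings estimate: optimization tools recover roughly 30 percent
# of the monthly cloud bill (figure cited in the interview above).
OPTIMIZATION_RATE = 0.30

def monthly_savings(bill: float, rate: float = OPTIMIZATION_RATE) -> float:
    """Estimated monthly spend recoverable through cost/resource optimization."""
    return bill * rate

# Hypothetical monthly bills at different scales of cloud adoption.
for bill in (100, 10_000, 1_000_000):
    print(f"${bill:>9,} bill -> ~${monthly_savings(bill):>9,.0f} recoverable per month")
```

The point of the arithmetic is simply that the same percentage hurts far more as the bill grows, which is why the problem is surfacing now.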

What we are really talking about here is process -- how it comes to be that there is such waste of cloud resources. Much of it can be reined in fairly easily through better management.

Gardner: What are the top reasons for this lack of efficiency and optimization? Are these just growing pains, that people adopted cloud so rapidly that they lost control over it? Or is there more to it?

Fellows: There are a couple of reasons. At a high level, there is massive organizational dysfunction around cloud and IT. This is driven primarily because cloud, as we know, is usually purchased in a decentralized way at large organizations. That means there is often a variety of different groups and departments using cloud. There is no single, central, and logical way of controlling cost.

Secondly, there is the sheer number of available services, and the resulting complexity of trying to deal with all of the different nuances around instance and image sizes, keeping tabs on who is doing what, and so on. That also underpins this resource wastage.

There isn’t one single reason. And, quite frankly, these things are moving forward so quickly that some users want to get on to the next service advance before they are used to using what they already have.

For organizations fearful of runaway costs, this amounts to a drunken sailor effect, where an individual group within an organization just starts using cloud services without regard to any kind of cost-management or economic insight.
In those cases, cloud costs can spiral dramatically. That, of course, is the fear for the chief information officer (CIO), especially as they are trying to build a business case for accelerating the conversion to cloud at an organization.

Yet the actual mechanisms by which organizations are able to better control and eliminate waste are fairly simple. Even Amazon Web Services (AWS) has a mantra on this: Simply turn things off when they are no longer needed. Make sure you are using the right size of instance, for example, for what you are trying to achieve, and make sure that you work with tools that can turn things off as well as turn things on. In other words, employ services that are flexible.
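That "turn it off, right-size it" mantra can be sketched as a simple policy check over utilization data. Everything below is hypothetical -- the instance records, the thresholds, and the policy names are invented for illustration, and a real implementation would pull utilization from a monitoring service rather than hard-coding it:

```python
from dataclasses import dataclass

@dataclass
class Instance:
    """Hypothetical utilization record for one cloud instance."""
    name: str
    avg_cpu_pct: float   # average CPU over the lookback window
    hours_idle: int      # consecutive hours below an idle threshold

# Assumed policy thresholds (tune per organization).
IDLE_HOURS = 72          # stop after three idle days
RIGHTSIZE_CPU = 20.0     # downsize below 20% average CPU

def recommend(inst: Instance) -> str:
    """Apply the two simple rules from the discussion: stop idle
    instances, and right-size chronically underutilized ones."""
    if inst.hours_idle >= IDLE_HOURS:
        return "stop"
    if inst.avg_cpu_pct < RIGHTSIZE_CPU:
        return "downsize"
    return "keep"

fleet = [
    Instance("web-1", 55.0, 0),
    Instance("batch-7", 4.0, 120),
    Instance("dev-3", 12.0, 10),
]
for inst in fleet:
    print(inst.name, "->", recommend(inst))
```

Even a policy this crude captures most of the "fairly simple mechanisms" Fellows describes; the hard part in practice is running it across every cloud account an organization owns.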

Gardner: We are also seeing more organizations using multiple clouds in multiple ways. So even if AWS, for example, gives you more insight and clarity into your spend with them, and allows you to know better when to turn things off -- that doesn’t carry across the hybrid environment people are facing. The complexity is ramping up at the same time as spiraling costs.

If there were 30 percent waste occurring in other aspects of the enterprise, the chief financial officer (CFO) would probably get involved. The chief procurement officer (CPO) would be called in to do some centralized purchasing here, right?

Why don’t we see the business side of these enterprises come in and take over when it comes to fixing this cloud use waste problem?

It’s costly not to track cloud costs

Fellows: You are right. In defense of the hyperscale cloud providers, they are now doing a much better job of providing tools for doing cost reporting on their services. But of course, they are only interested in really managing the cost on their own services and not on third-party services. As we transition to a hybrid world, and multicloud, those approaches are deficient.

There has recently been a consolidation around the cloud cost reporting and monitoring technologies, leading to the next wave of more forensic resource optimization services, to gain the ability to do this over multiple cloud services.

Coming back to why this isn’t managed centrally, it’s because much of the use and purchasing is so decentralized. There is no single version of the economic truth, if you like, that’s being used to plan, manage, and budget.

For most organizations, they have one foot in the new world and still a foot in the old world. They are working in old procurement models, in the old ways of accounting, budgeting, and cost reporting, which are unlikely to work in a cloud context.

That’s why we are seeing the rise of new approaches. Collectively these things were called cloud management services or cloud management platforms, but the language the industry is using now is cloud governance. And that implies that it’s not only the optimization of resources, infrastructure, and absent workloads -- but it’s also governance in terms of the economics and the cost. And it’s governance when it comes to security and compliance as well.

Again, this is needed because enterprises want a verifiable return on investment (ROI), they do want to control these costs. Economics is important, but it’s not the only factor. It’s only one dimension of the problem they face in this conversion to cloud.

Gardner: It seems to me that this problem needs to be solved if the waste continues to grow, and if decentralization proves to be a disadvantage over time. It behooves the cloud providers, the enterprises, and certainly the IT organizations to get control over this. The economics is, as you say, a big part -- not the only part -- but certainly worth focusing on.

Tell me why you have created at 451 Research a Digital Economics Unit and the 451 Cloud Price Index. Do you hope to accelerate movement toward a solution to this waste problem?

Carry a cost-efficient basket

Fellows: Yes, thanks for bringing that into the interview. I created the Digital Economics Unit at 451 about five years ago. We produce a range of pricing indicators that help end-users and vendors understand the cost of doing things in different kinds of hosted environments. The first set of indicators are around cloud. So, the Cloud Price Index acts like a Consumer Price Index, which measures the cost of a basket of consumer goods and services over time.

The Cloud Price Index measures the cost of a basket of cloud goods and services over time to determine where prices are going. Of course, five years ago we were just at the beginning of the enormous interest in the relative costs of doing things in AWS versus Azure versus Google, or other places, as firms added services.

We’ve assembled a basket of cloud goods and services and priced that in the market. It provides a real average price per basket of goods. We do that by public cloud, and we do it by private cloud. We do it by commercial code, such as Microsoft and others, as well as via open source offerings such as OpenStack. And we do it across global regions.

That has been used by enterprises to understand whether they are getting a good deal from their suppliers, or whether they are paying over the market rates. For vendors, obviously, this helps them with their pricing and packaging strategies.

In the early days, we saw a big downward shift in cloud pricing as the vendors introduced new basic infrastructure services. Recently this has tapered off; cloud prices are still falling, but more slowly.
I just checked and the basket of goods that we use has fallen this year by about 4 percent in the US. You can expect Europe and Asia-Pac to still pay, for example, a premium of 10 and 25 percent more respectively for the same cloud services in those regions.
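Those regional premiums translate directly into budget arithmetic. A quick sketch -- the US basket price is a made-up figure, while the 10 and 25 percent premiums come from the numbers above:

```python
# Hypothetical monthly price for the basket of cloud goods in the US.
US_BASKET_PRICE = 1_000.0

# Regional premiums over the US price, per the figures cited above.
PREMIUMS = {"US": 0.00, "Europe": 0.10, "Asia-Pacific": 0.25}

def regional_price(premium: float, base: float = US_BASKET_PRICE) -> float:
    """Basket price in a region given its premium over the US."""
    return base * (1 + premium)

for region, premium in PREMIUMS.items():
    print(f"{region:>12}: ${regional_price(premium):,.0f}/month")
```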

We also provide insight into about a dozen services in those baskets of cloud goods -- not only compute but storage, networking, SQL and NoSQL databases, bandwidth, and other services.

Now, if you were to choose the provider that offers the cheapest version of each of those services -- and you did that across the full basket of goods -- you would actually save 75 percent against the market cost of that basket. It shows that there is an awful lot of headroom in the market in terms of pricing.

Gardner: Let me make sure I understand what that 75 percent represents. That means if you had clarity, and you were able to shop with full optimization on price, you could reduce your cloud bill by 75 percent. Is that right?

Fellows: Correct, yes. If you were to choose the cheapest provider of each one of those services, you would save yourself 75 percent of the cost over the average market price.
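The mechanics behind that comparison are straightforward: price each service in the basket with each provider, sum the per-service minimums, and compare against the market-average basket. The providers and prices below are invented purely for illustration (the actual savings percentage depends entirely on real price dispersion, which is far wider than in this toy example):

```python
# Hypothetical per-service monthly prices by provider (not real list prices).
prices = {
    "compute":  {"A": 120.0, "B": 90.0,  "C": 150.0},
    "storage":  {"A": 40.0,  "B": 70.0,  "C": 30.0},
    "database": {"A": 200.0, "B": 160.0, "C": 210.0},
}

# Market-average basket: mean price of each service across providers, summed.
avg_basket = sum(sum(p.values()) / len(p) for p in prices.values())

# Cherry-picked basket: cheapest provider for each individual service.
cheapest_basket = sum(min(p.values()) for p in prices.values())

savings_pct = 100 * (1 - cheapest_basket / avg_basket)

print(f"market-average basket: ${avg_basket:,.2f}")
print(f"cheapest-per-service:  ${cheapest_basket:,.2f}")
print(f"savings vs. average:   {savings_pct:.0f}%")
```

As Fellows notes next, nobody actually shops this way -- managing a dozen services split across three providers is its own complexity tax -- but the gap quantifies the pricing headroom in the market.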

Gardner: Well, that’s massive. That’s just massive.

Opportunity abounds in cloud space

Fellows: Yes, but by the same token, no one is doing that because it’s way too complex and there is nothing in the market available that allows someone to do that, let alone manage that kind of complexity. The key is that it shows there is a great deal of opportunity and room for innovation in this space.

We feel at 451 Research that the price of basic cloud compute services may fall further. I think it's unlikely to reach zero, but what's much more important now is tracking the cost of using basic cloud services across all of the vendors, because they are now adding higher-value services on top of the basic infrastructure.

The game is now beyond infrastructure. That's why we have added 16 managed services to the Cloud Price Index. With it you can see what you could expect to pay in the market for those different services, by region. This is the new battleground and the new opportunity for service providers.

Gardner: Clearly 451 Research has identified a big opportunity for cloud spend improvement. But what's preventing IT people from doing more on costs? Why is it so difficult to get a handle on the number of cloud services? And what needs to happen next for companies to be able to execute once they have gained more visibility?

Fellows: You are right. One of the things we like to do with the Cloud Price Index is to ask folks, “Just how many different things do you think you can buy from the hyperscale vendors now?” The answer as of last week was more than 500,000 -- there are more than 500,000 SKUs available from AWS, Azure, and Google right now.

How can any human keep up with understanding what combination of these things might be most useful within their organization?

The second wave

You need more than a degree in cloud economics to figure that out. And that's why I talked earlier about a second wave of cloud cost management tools now coming into view. Specifically, these are built around resource optimization, and they deliver a forensic view. This is more than just looking at your monthly bill; it is looking in real time at how the services are performing and then recommending actions on that basis to optimize their use from an economic point of view.

Some of these are already beginning to employ more automation based on machine learning (ML), so the tools themselves can learn what's going on and make decisions based on what they learn.
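To make the idea of a "forensic view that recommends actions" concrete, here is a minimal sketch of the kind of rightsizing rule such a tool applies. This is not any vendor's actual product; the instance names, threshold, and savings estimate are all hypothetical, and real tools use far richer telemetry than average CPU:

```python
# Illustrative sketch of a cloud cost-optimization "rightsizing" check:
# flag resources whose observed utilization is far below what is paid for.
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    vcpus: int
    avg_cpu_pct: float   # observed average CPU utilization over the period
    monthly_cost: float

def rightsizing_recommendations(instances, threshold_pct=20.0):
    """Recommend halving any multi-vCPU instance running under the threshold."""
    recs = []
    for inst in instances:
        if inst.avg_cpu_pct < threshold_pct and inst.vcpus > 1:
            recs.append({
                "instance": inst.name,
                "action": f"downsize from {inst.vcpus} to {inst.vcpus // 2} vCPUs",
                "est_monthly_saving": round(inst.monthly_cost / 2, 2),
            })
    return recs

fleet = [
    Instance("web-1", vcpus=8, avg_cpu_pct=12.0, monthly_cost=240.0),
    Instance("db-1",  vcpus=4, avg_cpu_pct=65.0, monthly_cost=180.0),
]
# Only the underutilized instance is flagged; the busy one is left alone.
print(rightsizing_recommendations(fleet))
```

The ML-driven tools Fellows mentions effectively replace the fixed threshold here with learned baselines per workload, and can apply the recommendation automatically rather than just reporting it.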

There is a whole raft of vendors we are covering within our research here. I fully expect that, like the initial wave of cloud-cost-reporting tools, which were largely acquired, these newest tools will go the same way. This is because the IT vendors are trying to build out end-to-end cloud governance portfolios, and they are going to need this kind of introspection and optimization as part of their offerings.

Gardner: As we have seen in IT in the past, oftentimes we have new problems, but they have a lot in common with similar waves of problems and solutions from years before. For example, there used to be a lot of difficulty knowing what you had inside of your own internal data centers. IT vendors came to the rescue with IT management tools, agent-based, agentless, crawling across the network, finding all the devices, recognizing certain platforms, and then creating a map, if you will.

So we have been through this before, William. We have seen how IT management has created the means technically to support centralization, management, and governance over complexity and sprawl. Are the same vendors who were behind IT management traditionally now extending their capabilities to the cloud? And who might be some of the top players that are able to do that?

Return of the incumbents

Fellows: You make a very relevant point because although it has taken them some time, the incumbents, the systems management vendors, are rearchitecting, reengineering. And either by organic, in-house development, by partnership, or by acquisition, they are extending and remodeling their environments for the cloud opportunity.

Many of them have now assembled real and meaningful portfolios, whether that’s Cisco, BMC, CA, HPE, or IBM, and so on. Most of these folks now have a good set of tools for doing this, but it has taken them a long time.

Some of these firms could do nothing for a number of years and still come out on top of this market. One question is whether there is room for long-term, profitable, growing, independent firms in this area. That remains to be seen.

The most likely candidates are not necessarily the independent software vendors (ISVs). We might think of RightScale as one of the longest-serving folks in the market. But, instead, I believe this will be solved by the managed service providers (MSPs).

These are the folks providing ways for enterprises to achieve a meaningful conversion to cloud and to multiple cloud services. In order to be able to do that, of course, they need to manage all those resources in a logical way.

There is a new breed of MSPs coming to the market that are essentially born in the cloud, or cloud-native, in their approach -- rather than the incumbent vendors, who have bolted this [new set of capabilities] onto their environments.

One of the exceptions is HPE because, by selling most of its legacy software business to Micro Focus, it has actually come from a cloud-native starting place for the tooling to do this. HPE has taken a somewhat differentiated approach from the other folks in the market, who have largely been assembling things through acquisition.
The other folks in the market are the traditional systems integrators. It’s in their DNA to be working with multiple services. That may be Accenture, Capgemini, and DXC, or any of these folks. But, quite frankly, those organizations are only interested in working with the Global 1000 or 2000 companies. And as we know, the conversion to cloud is happening across all industries. There is a tremendous opportunity for folks to work with all kinds of companies as they are moving to the cloud.

Gardner: Again, going back historically in IT, we have recognized that having multiple management points solves only part of the problem. Organizations quickly tend to want to consolidate their management and have a single view, in this case, of not just the data center or private cloud, but all public clouds, so hybrid and multicloud.

It seems to me that having a single point across all of the hybrid IT continuum is going to be an essential attribute. Is that something you are seeing in the market as well?

More is better

Fellows: Yes, it is, although I don't think any one company or one approach has a leadership position yet. That makes this point in time more interesting, but somewhat risky, for end users. That is why our counsel to enterprises is to work with vendors who can offer a full and rich set of services.

The more capabilities you have available, the better you will be able to undertake and navigate this journey to the cloud -- and then support the digital transformation on top.

Working with vendors that have loosely-coupled approaches allows you to take advantage of a core set of native services -- but then also use your own tools or third-party services via application programming interfaces (APIs). It may be a platform approach or it may be a software-as-a-service (SaaS) approach.

At this point, I don’t think any of the IT vendor firms have sufficiently joined up these approaches to be able to operate across the hybrid IT environment. But it seems to me that HPE is doing a good job here in terms of bringing, or joining, these things together.

On one side of the HPE portfolio is the mature, well-understood HPE OneView environment, which is now being repurposed to provide a software-defined way of provisioning infrastructure. The other piece is the HPE OneSphere environment, which provides API-driven management for applications, services, workloads, and the whole workspace and developer piece as well.

So, one is coming top-down and the other one bottom-up. Once those things become integrated, they will offer a pretty rich way for organizations to manage their hybrid IT environments.

Now, if you are also using HPE's Synergy composable infrastructure, then you are going to get an exponential benefit from using those other tools. Also, the Cloud Cruiser cost-reporting capability is now embedded into HPE OneSphere. And HPE has a leading position in the new hardware consumption model -- pay-per-use hardware services -- via its HPE GreenLake Hybrid Cloud offering.

So, it seems to me that there is enough here to appeal to many interests within an organization, but crucially it will allow IT to retain control at the same time.

Now, HPE is not unique. It seems to me that all of the vendors are working to head in this general direction. But the HPE offering looks like it's coming together pretty well.

Gardner: So, a great deal of maturity is still to come. Nonetheless, the cloud-governance opportunity appears big enough to drive a truck through. If you can bring together an ecosystem and a platform approach that appeals to those MSPs and systems integrators, works well in the large Global 2000, and also has a direct role for small and medium businesses -- that's a very big market opportunity.

I think businesses and IT operators should begin learning more about this market, because there is so much to gain when you do it well. As you say, the competition is going to push the vendors forward, so a huge opportunity is brewing out there.

William, what should IT organizations be doing now to get ready for what the vendors and ecosystems bring out around cloud management and optimization? What should you be doing now to get in a position where you can take advantage of what the marketplace is going to provide?

Get your cloud house in order

Fellows: First and foremost, organizations now need to be moving toward a position of cloud-readiness. And what I mean is understanding to what extent applications and workloads are suitable for moving to the cloud. Next comes undertaking the architecting, refactoring, and modernization. That will allow them to move into the cloud without the complexity, cost, and disruption of the first-generation lift-and-shift approaches.

In other words, get your own house in order, so to speak. Prepare for the move to the cloud. It will become apparent that some applications and workloads are suitable for some kind of services deployment, maybe a public cloud. Other types of apps and workloads are going to be more suited to other kinds of environments, maybe a hosted private environment.

You are also going to have applications that you want to take advantage of at the edge, for the Internet of Things (IoT), and so on. You are going to want a different set of services for that as well.

The challenge is going to be working with providers that can help you with all of that. One thing we do know is that most organizations are accessing cloud services via partners. In fact, in AWS’s case, 90 percent of Fortune 100 companies that are its customers are accessing its services via a partner.

And this comes back to the role and the rise of the MSP who can deliver value-add by enabling an organization to work and use different kinds of cloud services to meet different needs -- and to manage those as a logical resource.

That’s the way I think organizations need to approach this whole cloud piece. Although we have been doing this for a while now -- AWS has had cloud services for 11 years -- the majority of the opportunity is still ahead of us. Up until now, it has really still only been the early adopters who have converted to cloud. That’s why there is such a land grab underway at present to be able to capture the majority of the opportunity.
Gardner: I’m sure we could go on for another 30 minutes on just one more aspect of this: the skills part. It appears to me there will be a huge need for skills in managing cloud adoption, across economics and procurement best practices as well as the technical side. So perhaps a whole new class of people is needed within companies -- people with backgrounds in economics, procurement, and IT optimization and management methods, who also deeply understand the cloud ecosystem.

Develop your skills

Fellows: You are right. 451’s Voice of the Enterprise data shows that the key barrier to accelerating adoption is not technology -- but a skills shortage. Indeed, that’s across operations, architecture, and security.

Again, I think this is another opportunity for the MSPs, to help upskill a customer’s own organization in these areas. That will be a driver for success, because, of course, when we talk about being in the cloud, we are not talking so much about the technology -- we are talking about the operating model. That really is the key here.

That operating model is consumption-based, services-driven, and run with a retail model's discipline. It's more than moving from CAPEX to OPEX; it's more than going from hardwired to agile -- it's all of those things, and that really means the transformation of enterprises and organizations. It's the most difficult and challenging part of what's going on here.

IT suppliers that can assist end customers with that rotation to the new operating model are the ones likely to be more successful.