Wednesday, February 6, 2019

Why enterprises struggle with adopting public cloud as a culture


The next BriefingsDirect digital strategies interview examines why many businesses struggle with cloud computing adoption, and how they could improve by attaining a culture directed at cloud consumption and total productivity.

Due to inertia, a lack of skills, and even outright hostility, some enterprises are stumbling in their march to cloud use due to behavior and perception -- and not the actual technology hurdles.

We will now hear from an observer of cloud adoption patterns on why a cultural solution to adoption may be more important than any other aspect of digital business transformation.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to help us explore why cloud inertia can derail business advancement is Edwin Yuen, Senior Analyst for Cloud Services and Orchestration, Data Protection, and DevOps at Enterprise Strategy Group (ESG). [Note: Since this podcast was recorded on Nov. 15, 2018, Yuen has become principal product marketing manager at Amazon Web Services.] The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Edwin, why are enterprises finding themselves unready for public cloud adoption culturally?

Yuen: Culturally the issue with public cloud adoption is whether IT is prepared to bring in public cloud. I bring up the IT issue because public cloud usage is actually really high within business organizations.

At ESG, we have found that cloud use is pretty significant -- well over 80 percent of organizations are using some sort of public cloud service. It’s very high for both infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS).

But the key here is, “What is the role of IT?” We see a lot of business end-users and others essentially doing shadow IT -- going around IT if they feel their needs are not met. That actually increases the friction between IT and the business.

It also leads to people going into the public cloud before they are ready, before there’s been a proper evaluation – from a technical, cost, or even a convenience point of view. That can potentially derail things.

But there is an opportunity for IT to reset the boundaries and meet the needs of the end users by thoughtfully getting into the public cloud.

Gardner: We may be at the point of having too much of a good thing. Even if people are doing great things with cloud computing inside of an organization, unless it’s strategically oriented to process, fulfillment, and a business outcome, then the benefits can be lost.

Plan before you go to public cloud

Yuen: When line of business (LOB) or other groups are not working with core IT in going to the public cloud, they get some advantages from it -- but they are not getting the full advantage. It’s like trading in a seven- or eight-year-old smartphone but only upgrading to the fifth- or sixth-best phone on the market. It’s a significant upgrade, but it’s not the optimal one. They’re not getting the full benefits.

The question is, “Are you getting the most out of it, and is that thoughtfulness there?” You want to maximize the capabilities and advantages you get from public cloud and minimize the inconvenience and cost. Planning is absolutely critical for that -- and it involves core IT.

So how do you bring about a cultural shift that says, “Yes, we are going into the public cloud. We are not trying to stop you. We are not being overly negative. But what we are trying to do is optimize it for everybody across the board, so that we as a company can get the most out of it because there are so many benefits -- not just incremental benefits that you get from immediately jumping in.”

Gardner: IT needs to take a different role, of putting in guardrails in terms of cloud services consumption, compliance, and security. It seems to me that procurement is also a mature function in most enterprises. They may also want to step in.

When you have people buying goods individually on a credit card, you don’t necessarily take advantage of volume purchasing, or you don’t recognize that you can buy things in bulk and distribute them and get better deals or terms. Yet procurement groups are very good at that.

Is there an opportunity to better conduct cloud consumption like with procuring any other business service, with all the checks, balances, and best practices?

Cut cloud costs, buy in bulk

Yuen: Absolutely, and that’s an excellent point. People often leave procurement, auditing, or acquisitions -- whatever department handles purchasing -- out of the cloud discussion. Organizationally, that function becomes critically important, especially from the end-user point of view.

From the organizational point of view, you can lose economies of scale. A lot of the cloud providers will offer those economies of scale via an enterprise agreement, which lets the organization’s purchasing power be brought to bear.

Yet if individuals go out and leave procurement behind, it’s like shopping for groceries without ever checking for sales or coupons. Buying in volume, centralized across the entire organization, is just smarter; it obviously means better costs for the lines of business. Cloud is really a consumption-based model, so planning needs to be there.

We’ve talked to a lot of organizations. As they jump into cloud, they expect cost savings, but sometimes they get an increase in cost because once you have that consumption model available, people just go ahead and consume.

And what that generates is variability in consumption costs -- variability in the cost of cloud. A lot of companies very quickly realize that they don’t have variable budgets -- they have fixed budgets. So they need to think about how they use cloud and what the consumption costs will be for an entire year. You can’t just go about your work with some flexibility and then find that you are out of budget in the third or fourth quarter of the fiscal year.

You can’t budget on open-ended consumption. It requires a balance across the organization, where you have flexibility enough to be active -- and go into the cloud. But you also need to understand what the costs are throughout the entire lifecycle, especially if you have fixed budgets.
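
To make that budgeting problem concrete, here is a minimal sketch of projecting variable cloud consumption against a fixed annual budget. Every figure below is invented purely for illustration; a real version would pull the numbers from a provider’s billing export.

```python
# Hypothetical monthly cloud spend to date (USD); in practice this would
# come from a provider's billing export, not hard-coded numbers.
monthly_spend = [42_000, 45_500, 51_000, 58_200, 63_900, 71_400]  # Jan-Jun

ANNUAL_BUDGET = 900_000  # the fixed budget set before the fiscal year

spent = sum(monthly_spend)
months_left = 12 - len(monthly_spend)

# Naive run-rate projection: assume the latest month simply repeats.
run_rate_projection = spent + monthly_spend[-1] * months_left

# Growth-aware projection: assume recent month-over-month growth continues.
growth = monthly_spend[-1] / monthly_spend[-2]
projected, month_cost = spent, monthly_spend[-1]
for _ in range(months_left):
    month_cost *= growth
    projected += month_cost

print(f"Spent to date:        ${spent:,.0f}")
print(f"Flat run-rate year:   ${run_rate_projection:,.0f}")
print(f"Growth-adjusted year: ${projected:,.0f}")
if projected > ANNUAL_BUDGET > run_rate_projection:
    print("Flat projection looks safe, but consumption growth busts the budget.")
```

With these assumed numbers the flat run rate stays under budget while the growth-adjusted projection exceeds it -- exactly the trap Yuen describes when a consumption model is available and people just go ahead and consume.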

Gardner: If you get to the fourth quarter and you run out of funds, you can’t exactly turn off the mission-critical applications either. You have to find a way to pay for that, and that can wreak havoc, particularly in a public company.

In the public sector, in particular, they are very much geared to a CAPEX budget. In cities, states, and the federal government, they have to bond large purchases, and do that in advance. So, there is dissonance culturally in terms of the economics around cloud and major buying patterns.

Yuen: We absolutely see that. There was an assumption by many that you would simply want to go to an OPEX model and leave the CAPEX model behind. Realistically, what you’re doing is leaving the CAPEX model behind from a consumption point of view -- but you’re not leaving it behind from a budgeting and a planning point of view.

The economic reality is that it just doesn’t work that way. People need to be more flexible, and that’s exactly what the providers have been adapting to. But the providers also have to let you consume in a way that locks down costs. And that only happens when the organization works together on its total requirements, as opposed to individuals simply going out and using the cloud.

The key for organizational change is to drive a culture where you have flexibility and agility but work within the culture to know what you want to do ahead of time. Then the organization can do the proper planning to be fiscally responsible, and fiscally execute on the operational plan.

Gardner: Going to a cloud model really does force behavioral changes at the holistic business level. IT needs to think more like procurement. Procurement needs to get more technical and savvier about how to acquire and use cloud computing services. This gets complex. There are literally thousands, if not tens of thousands, of SKUs, different types of services, you could acquire from any of the major public cloud providers.

Then, of course, the LOB people need to be thinking differently about how they use and consume services. They need to think about whether they should coordinate with developers for customization or not. It’s rather complex.

So let’s identify where the cultural divide is. Is it between IT of the old caliber and the new version of IT? Is it a divide between the line of business people and IT? Between development and operations? All the above? How serious is this cultural divide?

Holistic communication plans 

Yuen: It really is all of the above, and in varying areas. What we are seeing is that the traditional roles within an organization have really been monolithic roles. End-users were consumers, the central IT was a provider, and finances were handled by acquisitions and the administration. Now, what we are seeing, is that everybody needs to work together, and to have a much more holistic plan. There needs to be a new level of communication between those groups, and more of a give-and-take.

It’s similar to running a restaurant. In the past, we had a diner -- the end user -- who said: “I want this food.” The chef says, “I am going to cook this food.” The management says, “This food costs this much.” They never really talked to each other.

They would do some back-and-forth dialog, but there wasn’t a holistic understanding of the actual need. And, to be perfectly honest, not everybody was totally satisfied. The diners were not totally satisfied with the meal because it wasn’t made the way they wanted, and they weren’t going to pay for something they didn’t actually want. Finance fixed the menu prices but would have liked to charge a little bit more. The chef really wanted to cook a little differently, or have the ability to shift things around.

The key for improved cloud adoption is opening the lines of communication, bridging the divides, and gaining new levels of understanding. As in the restaurant analogy, the chef says, “Well, I can add these ingredients, but it will change the flavor and it might increase the cost.” And then the finance people say, “Well, if we make better food, then more people will eat it.” Or, “If we lower prices, we will get more economies of scale.” Or, “If we raise prices, we will reduce the volume of diners.” It’s all about that balance -- and it’s an open discussion among those three parts of the organization.

This is the digital transformation we are seeing across the board. It’s about IT being more flexible, listening to the needs of the end users, and being willing to be agile in providing services. In exchange, the end users come to IT first, so IT understands where cloud use is going and can be responsive. IT knows better what the users want. It becomes not just that they want solutions faster, but by how much. They can negotiate based on actual requirements.

And then they all work with operations and other teams and say, “Hey, can we get those resources? Should we put them on-premises or off-premises? Should we purchase it? Should we use CAPEX, or should we use OPEX?” It becomes about changing the organization’s communication across the board and having the ability to look at it from more than just one point of view. And, honestly, most organizations really need help in that.

It’s not just scheduling a meeting and sitting at a table. Most organizations are looking for solutions and software to bridge the gap -- to provide a translation layer where management software can bring things together and say, “Here are the costs related to the capacity that we need.” Then everyone sits together and says, “Okay, if we need more capacity, the cost turns into this and the capacity turns into that.” You can do the analysis. You can determine if it’s better in the cloud or better on-premises. But it’s about more than just bringing people together and communicating. It has to provide them the information they need in order to have a shared discussion and gain common ground to work together.
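
A sketch of the kind of cloud-versus-on-premises analysis such tooling supports might look like the following. Every rate, capacity, and cost figure here is an assumption invented for illustration, not real pricing from any provider or vendor.

```python
# Hypothetical question: run a steady 24x7 fleet of VMs for three years
# in the cloud or on-premises? Every figure below is an assumption.
VM_COUNT = 50
HOURS_PER_MONTH = 730
MONTHS = 36

CLOUD_RATE_PER_VM_HOUR = 0.40   # assumed on-demand rate, USD
ONPREM_CAPEX = 250_000          # assumed hardware purchase, USD
ONPREM_OPEX_PER_MONTH = 5_000   # assumed power, space, and admin, USD

cloud_total = VM_COUNT * HOURS_PER_MONTH * MONTHS * CLOUD_RATE_PER_VM_HOUR
onprem_total = ONPREM_CAPEX + ONPREM_OPEX_PER_MONTH * MONTHS

print(f"Cloud, 3 years:   ${cloud_total:,.0f}")
print(f"On-prem, 3 years: ${onprem_total:,.0f}")
print("Cheaper option:", "cloud" if cloud_total < onprem_total else "on-premises")
```

Under these made-up assumptions the steady, always-on workload comes out cheaper on-premises; a bursty workload, priced the same way, would tip the other direction. The point is that the comparison only happens when cost and capacity data sit in the same place.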

Gardner: Edwin, tell us about yourself and ESG.

Pathway to the cloud 

Yuen: Enterprise Strategy Group is a research and analyst firm. We do a lot of work with both vendors and customers, covering a wide range of topics, and we do both custom and syndicated research. That research backs a lot of the findings we bring to discussions about where the market is going.

I cover cloud orchestration and services, data protection, and DevOps, which is really the whole spectrum of how people manage resources and how to get the most out of the cloud -- the great optimization of all of that.

As background, I have worked at Hewlett Packard Enterprise (HPE), Microsoft, and at several startups. I have seen this whole process come together for the growth of the cloud, and I have seen the different changes -- when we had virtualization, when we had virtual desktops -- and how IT and end-users have had to change.

This is a really exciting time as public cloud becomes more than just an idea. It’s like when we first had the Internet. We are not just talking about cloud, we are talking about what we are doing in the cloud and how the cloud helps us. That’s a sign of the maturity of the market, but also a sign of what we need to change in order to get the best out of it.

Gardner: Your title is even an indicator that you have to rethink things -- not just in slices or categories or buckets -- but in the holistic sense. Your long, but very apropos, job title really shows what we have been talking about that companies need to be thinking differently.

So that gets me to the issue of skills. The typical IT person -- and I don’t want to generalize or stereotype too much -- seems content to take a requirement set, beaver along in their cubicle, maybe not be too extroverted in skills or temperament, and really get the job done. It is detail-oriented, highly focused work.

But in order to accomplish what companies need to do now -- to cross-pollinate, break down boundaries, think outside of the box -- that requires different skills, not just technical but business; not just business but extroverted or organizationally aggressive in terms of opening up channels with other groups inside the company, even helping people get out of their comfort zone.

So what do you think is the next step when it comes to finding the needed talent and skills to create this new digitally transformed business environment?

Curious IT benefits business

Yuen: In order to find that skill set, you need to expand your boundaries in two ways.

One is the ability to take your natural interest in learning and expand it. I think a lot of us, especially in the IT industry, have been pushed to specialize in certain things and get certifications, and you need to get as deep as possible. We have closed our eyes to having to learn about other technologies or other items.

Most technical people, in general, are actually fairly inquisitive. We have the latest iPhone or Android. We are generally interested. We want to know the market because we want to make the best decisions for ourselves.

We need to apply that generally within our business lives and in our jobs in terms of going beyond IT. We need to understand the different technologies out there. We don’t have to be masters of them, we just need to understand them. If we need to do specialization, we go ahead. But we need to take our natural curiosity -- especially in our private lives -- and expand that into our work lives and get interested in other areas.

The second area is accepting that you don’t have to be the expert in everything. I think that’s another skill a lot of people in business should have. We don’t want to speak up or learn if we fear we can’t be the best, or that we might get corrected if we are wrong.

But we really need to go ahead and learn those new areas that we are not expert in. We may never be experts, but we want to get that secondary perspective. We want to understand where finance is coming from in terms of budgetary realities. We need to learn about how they do the budget, what the budget is, and what influences the costs.

If we want to understand the end users’ needs, we need to learn more about what their requirements are, how an application affects them, and how it affects their daily lives. So that when we go to the table and they say, “I need this,” you have that base understanding and know their role.

Having that broader knowledge allows you to learn a lot more as you expand from that base. You will discover areas that you might become interested in, or that your company needs. That’s where you go ahead, double down, and take your existing learning capabilities really, really deep.

A good example: say I run a traditional IT infrastructure. Maybe I have learned virtual machines, but now I am faced with cloud virtual machines, containers, Kubernetes, and serverless. You may not be sure which direction to go and, with analysis paralysis, you may not do anything.

What you should do is learn about each of those, how it relates, and what your skills are. If one of those technologies booms suddenly, or it becomes an important point, then you can very quickly make a pivot and learn it -- as opposed to just isolating yourself.

So, the ability to learn and expand your skills creates opportunities for everybody.

Gardner: Well, we are not operating in a complete vacuum. The older infrastructure vendors are looking to compete and remain viable in the new cloud era. They are trying to bring out solutions that automate. And so are the cloud vendors.

What are you seeing from cloud providers and the vendors as they try to ameliorate these issues? Will new tools, capabilities, and automation help gain that holistic, strategic focus on the people and the process?

Cloud coordinators needed 

Yuen: The providers and vendors are delivering the tools and interfaces to do what we call automation and orchestration. Sometimes those two terms get mixed together, but I generally see them as separate. Automation is taking an existing task, or a series of tasks or a process, and making it a single, one-button-click type of thing. The best way I would describe it is almost like an Excel macro: you have steps 1, 2, 3, and 4, and I am going to do 1, 2, 3, and 4 as a script with a single button.

But orchestration is taking those processes and coordinating them. What if I need to have decision points in coordination? What if I need to decide when to run this and when not to run that? The cloud providers are delivering the APIs, entry points, and the data feedback so you have the process information. You can only automate based on the information coming in. We are not blindly saying we are going to do one, two and three or A, B and C; we are going to react based on the situation.
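
Yuen’s automation-versus-orchestration distinction can be sketched in a few lines. The step names, metrics, and thresholds below are invented for illustration; they are not any vendor’s API.

```python
# Automation: a fixed sequence of steps run as one unit -- the
# "Excel macro" Yuen describes. The steps are illustrative stubs.
def automated_deploy():
    """Run steps 1-4 with a single call; no decisions are made."""
    return ["provision VM", "install app", "configure network", "register monitoring"]

# Orchestration: coordinates automated tasks, branching on live data
# fed back from the environment (the APIs and data feedback Yuen mentions).
def orchestrated_deploy(cpu_load, error_rate):
    """Decide whether to run at all, and whether to scale, from telemetry."""
    if error_rate > 0.05:          # decision point: hold off when unhealthy
        return ["hold: error rate too high"]
    actions = automated_deploy()   # run the automated sequence
    if cpu_load > 0.8:             # decision point: scale out under load
        actions.append("scale out")
        actions += automated_deploy()
    return actions

print(orchestrated_deploy(cpu_load=0.9, error_rate=0.01))
```

The automation runs the same four steps every time; the orchestration decides, from incoming data, whether and how many times to run them -- which is why the provider’s APIs and data feedback are prerequisites.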

So we really must rely on the cloud providers to deliver that level of information and the APIs to execute on what we want to do.

And, meanwhile, the vendors are creating the ability to bring all of those tools together as an information center, or what we traditionally have called a monitoring tool. But it’s really cloud management where we see across all of the different clouds. We can see all of the workloads and requirements. Then we can build out the automation and orchestration around that.

Some people are concerned that if they build a lot of automation and orchestration, they will automate themselves out of a job. But realistically, what we have seen with cloud and with orchestration is that IT is getting more complex, not less. Different environments, different terminologies, different ways to automate, and the complexity of integrating more than just IT’s own steps have created a whole new area for IT professionals to get into. Instead of deciding what button to press and doing the task, they will automate the tasks. Then they are left to focus on determining the proper orchestration, coordinating among all the other areas.

So as the management has gone up a level, the skills and the capabilities for the operators are also going to go up.

Gardner: It seems to me that this is a unique time in the long history of IT. We can now apply those management principles and tools not just to multicloud or public cloud, but across private cloud, legacy, bare-metal, virtualization, managed service providers, and SaaS applications. Do you share my optimism that if you can, in effect, adjust to cloud heterogeneity that you can encompass all of IT heterogeneity and get comprehensive, data-driven insights and management for your entire IT apparatus regardless of where it resides, how it operates, and how it's even paid for?

Seeing through the clouds

Yuen: Absolutely! I mean that’s where we are going to end up. It’s an inverse of the mindset that we currently have in IT, which is we maintain a specific type of infrastructure, we optimize and modify it, and then the end result is it’s going to impact the application in a positive way, we hope.

What we are doing now is we are inverting that thinking. We are managing applications and the applications help deliver the proper experience. That’s what we are monitoring the most, and it doesn’t matter what the underlying infrastructure or the systems are. It’s not that we don’t care, we just don’t care necessarily what the systems are.

Once we care about the application, then we look at the underlying infrastructure, and then we optimize that. And that infrastructure could be in the public cloud, across multiple providers, it could be in a private cloud, or a traditional backend and large mainframe systems.

It’s not that we don’t care about those backend systems. In fact, we care just as much as we did before -- it’s just that we no longer need that one-to-one alignment. Our orientation is application-based rather than system-based. Underneath, there could be anything -- and the vendors of systems management software are extending out to cover it all.

So it doesn’t matter if it’s a VMware system, or a bare metal system, or a public cloud. We are just managing the end-result relative to how those systems operate. We are going to let the tools go ahead and make sure they execute.

Our ability to understand and monitor is going to be critical. It’s going to allow us to extend out and manage across all the different environments effectively. But most importantly, it all affects the application at the top. So you become a purveyor of better services to the end-users and to finance.

Gardner: While you’re getting a better view application-by-application, you’re also getting a better opportunity to do data analysis across all of these different deployments. You can find ways of corralling that data and its metadata and move the workloads into the proper storage environment that best suits your task at the moment under the best economics of the moment.

Not only is there an application workload benefit, but you can argue that there is an opportunity to finally get a comprehensive view of all of the IT data and then manage that data into the right view -- whether it’s a system of record benefit, application support benefit or advanced analytics, and even out to the edge.

Do you share my view that the applications revolution you are describing also is impacting how data is going to be managed, corralled, and utilized?

Data-driven decision making

Yuen: It is, and that data viewpoint is applicable in many ways. It’s one of the reasons why data protection and analysis of that data becomes incredibly important. From the positive side, we are going to get a wealth of data that we need in order to do the optimizations.

If I want to know the best location for my apps, I need all the information to understand that. Now that we are getting that data in, it can be passed to machine learning (ML) or artificial intelligence (AI) systems that can make decisions for us going forward. Once we train the models, they can be self-learning, self-healing, and self-operating. That’s going to relieve a lot of work from us.

Data also impacts the end-users. People are taking in data, and they understand that they can use it for secondary uses. It can be used for development; it can be used for sales. I can make copies of that data so I don’t have to touch the production data all the time. There is so much insight I can provide to the end users. In fact, the explosion of data is a leading cause of increased IT complexity.

We want to maximize the information that we get out of all that data, to maximize the information the end-users are getting out of it, and also leverage our tools to minimize the negative impact it has for management.

Gardner: What should enterprises be doing differently in order to recognize the opportunity, but not fall to the wayside in terms of these culture and adoption issues?

Come together, right now, over cloud

Yuen: The number one thing is to start talking and developing a measured, sustainable approach to going into the cloud. Come together and have that discussion, and don’t be afraid to have it -- whether you’re ready for cloud or you’ve already gone in and need to rein it back in. No matter what you need to do, you want that centralized approach, because it is not going to be a one-time thing. You don’t make a cloud plan and then not revisit it for 20 years -- you live it. It’s an ongoing, living, breathing thing, and you’re always going to be making adjustments.

But bring the team together, develop a plan, build an approach to cloud that you’re going to be living with. Consider how you want to make decisions and bring that together with how you want to interact with each other. That plan is going to help build the communication plan and build the organization to help make that cultural shift.

Companies honestly need to do an assessment of what they have. It’s surprising that a lot of companies just don’t know how much cloud they are using. They don’t know where it’s going. And even if it’s not in the cloud yet, they don’t know what they need.

A lot of the work is understanding what you have. Once you build out the plan of what you want to do, you essentially get your house in order, understand what you have, then you know where you want to go, where you are, and then you can begin that journey.

The biggest problem we have right now is companies that try to do both at the same time. They move forward without planning it out. They may move forward without understanding what they already have, and that leads to inefficiencies, cultural conflicts, and the organizational skills-gap issues that we talked about.

So again, lay out a plan and understand what you have, those are the first two steps. Then look for solutions to help you understand and capture the information about the resources you already have and how you are using them. By pulling those things together, you can really go forward and get the best use out of cloud.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Monday, January 28, 2019

Who, if anyone, is in charge of multi-cloud business optimization?

The next BriefingsDirect composable cloud strategies interview explores how changes in business organization and culture demand a new approach to leadership over such functions as hybrid and multi-cloud procurement and optimization.

We’ll now hear from an IT industry analyst about the forces reshaping the consumption of hybrid cloud services, and why an updated procurement model must be accompanied by an updated organizational approach -- perhaps even a new office or category of officer in the business.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Here to help explore who -- or what -- should be in charge of spurring effective change in how companies acquire, use, and refine their new breeds of IT is John Abbott, Vice President of Infrastructure and Co-Founder of The 451 Group. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What has changed about the way that IT is being consumed in companies? Is there some gulf between how IT was acquired and the way it is being acquired now?

Abbott: I think there is, and it’s because of the rate of technology change. The whole cloud model has taken over from traditional IT in a way that we probably didn’t foresee just 10 years ago. So, CAPEX to OPEX, operational agility, complexity, and costs have all been big factors.

But now, it’s not just cloud, it's multi-cloud as well. People are beginning to say, “We can’t rely on one cloud if we are responsible citizens and want to keep our IT up and running.” There may be other reasons for going to multi-cloud as well, such as cost and suitability for particular applications. So that’s added further complexity to the cloud model.

Also, on-premises deployments continue to remain a critical function. You can’t just get rid of your existing infrastructure investments that you have made over many, many years. So, all of that has upended everything. The cloud model is basically simple, but it's getting more complex to implement as we speak.

Gardner: Not surprisingly, costs have run away from organizations that haven’t been able to be on top of a complex mixture of IT infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and software-as-a-service (SaaS). So, this is becoming an economic imperative. It seems to me that if you don't control this, your runaway costs will start to control you.

Abbott: Yes. You need to look at the cloud models of consumption, because that really is the way of the future. Cloud models can significantly reduce cost, but only if you control them. Instance sizes, time slices, time increments, and things like that all have a huge effect on the total cost of cloud services.

Also, if you have multiple people in an organization ordering particular services from their credit cards, that gets out of control as well. So you have to gain control over your spending on cloud. And with services complexity -- I think Amazon Web Services (AWS) alone has hundreds of price points -- things are really hard to keep track of.
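
As a toy illustration of how billing increments alone move the bill, consider one job priced under three rounding rules. The rate and billing rules below are invented for illustration, not any provider’s actual pricing.

```python
import math

def job_cost(runtime_minutes, rate_per_hour, billing_increment_minutes):
    """Cost of one job when usage is rounded up to the billing increment."""
    billed = math.ceil(runtime_minutes / billing_increment_minutes) * billing_increment_minutes
    return billed / 60 * rate_per_hour

# A 61-minute job at an assumed $1.00/hour rate, under three billing rules:
for increment in (1, 15, 60):   # per-minute, 15-minute, and hourly billing
    print(f"{increment:>2}-minute increments: ${job_cost(61, 1.00, increment):.2f}")
```

The same 61-minute job costs roughly $1.02 with per-minute billing but $2.00 with hourly billing, since the extra minute rounds up to a second full hour -- one small example of why hundreds of price points are hard to track by hand.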

Gardner: When we are thinking about who -- or what -- has the chops to know enough about the technology, understand the economic implications, be in a position to forecast cost, budget appropriately, and work with the powers that be who are in charge of enterprise financial functions -- that's not your typical IT director or administrator.

IT Admin role evolves in cloud 

Abbott: No. The new generation of generalist IT administrators -- the people who grew up with virtualization -- don't necessarily look at the specifics of a storage platform, or compute platform, or a networking service. They look at it on a much higher level, and those virtualization admins are the ones I see as probably being the key to all of this.

But they need tools that can help them gain command of this. They need, effectively, a single pane of glass -- or at least a single control point -- for these multiple services, both on-premises and in the cloud.

Also, as the data centers become more distributed, going toward the edge, that adds even further complexity. The admins will need new tools to do all of that, even if they don't need to know the specifics of every platform.


Gardner: I have been interested and intrigued by what Hewlett Packard Enterprise (HPE) has been doing with such products as HPE OneSphere, which, to your point, provides more tools, visibility, automation, and composability around infrastructure, cloud, and multi-cloud.

But then, I wonder, who actually will best exploit these tools? Who is the target consumer, either as an individual or a group, in a large enterprise? Or is this person or group yet to be determined?

Abbott: I think they are evolving. There are skill shortages, obviously, for managing specialist equipment, and organizations can’t replace some of those older admin types. So, they are building up a new level of expertise that is more generalist. It’s those newer people coming up, who are used to the mobile world, who are used to consumer products a bit more, that we will see taking over.

We are going toward everything-as-a-service and cloud consumption models. People have greater expectations on what they can get out of a system as well.

Also, you want the right resources to be applied to your application: the best, most cost-effective resources. It might be in the cloud; it might be a particular cloud service from AWS, Microsoft Azure, or Google Cloud Platform; or it might be a specific in-house platform that you have. No one is likely to have all of that specific knowledge in the future, so it needs to be automated.
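The kind of automated matching Abbott describes -- picking the most cost-effective resource that still meets an application's needs -- can be sketched as a simple selection over a catalog. The provider names, specs, and prices here are hypothetical placeholders:

```python
# Pick the cheapest resource that satisfies an application's requirements.
# Catalog entries (names, specs, prices) are hypothetical placeholders.

CATALOG = [
    {"name": "cloud-a-gpu", "vcpus": 8,  "gpu": True,  "price_per_hour": 2.50},
    {"name": "cloud-b-std", "vcpus": 4,  "gpu": False, "price_per_hour": 0.20},
    {"name": "on-prem-big", "vcpus": 16, "gpu": False, "price_per_hour": 0.35},
]

def best_fit(min_vcpus, needs_gpu):
    """Cheapest catalog entry meeting the requirements, or None if nothing fits."""
    candidates = [r for r in CATALOG
                  if r["vcpus"] >= min_vcpus and (r["gpu"] or not needs_gpu)]
    return min(candidates, key=lambda r: r["price_per_hour"], default=None)

print(best_fit(4, needs_gpu=False)["name"])  # cheapest option with 4+ vCPUs
print(best_fit(2, needs_gpu=True)["name"])   # only the GPU node qualifies
```

A real placement engine would weigh far more dimensions (data locality, compliance, egress cost), but the selection logic is the same shape.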

We are looking at the developers and the systems architects to pull that together with the help of new automation tools, management consoles, and control planes, such as HPE OneSphere and HPE OneView. That will pull it together so that the admin people don’t need to worry so much. A lot of it will be automated.

Gardner: Are we getting to a point where we will look for an outsourced approach to overall cloud operations, the new IT procurement function? Would a systems integrator, or even a vendor in a neutral position, be able to assert themselves on best making these decisions? What do you think comes next when it comes to companies that can't quite pull this off by themselves?

People and AI partnership prowess

Abbott: The role of partners is very important. A lot of the vertically oriented systems integrators and value-added resellers, as we used to call them, with specific application expertise are probably the people in the best position.

We saw recently at HPE Discover the announced acquisition of BlueData, which allows you to configure a particular pool within your infrastructure for things like big data and analytics applications. And that’s sort of application-led.

The experts in data analysis and in artificial intelligence (AI), the data scientists coming up, are the people that will drive this. And they need partners with expertise in vertical sectors to help them pull it together.

Gardner: In the past when there has been a skills vacuum, not only have we seen a systems integration or a professional services role step up, we have also seen technology try to rise to the occasion and solve complexity.

Where do you think the concept of AIOps, or using AI and machine learning (ML) to help better identify IT inefficiencies, will fit in? Will it help make predictions or recommendations as to how you run your IT?
Abbott: There is a huge potential there. I don’t think we have actually seen that really play out yet. But IT tools are in a great position to gather a huge amount of data -- from sensors, usage data, logs, and everything like that -- pull it together, see what the patterns are, and then recommend and optimize from that in the future.

I have seen some startups doing system tuning, for example. Experts who optimize the performance of a server usually have a particular area of expertise, and they can't really go beyond that because it's huge in itself. There are around 100 “knobs” on a server that you can tweak to up the speed. I think you can only do that in an automated fashion now. And we have seen some startups use AI modeling, for instance, to pull those things together. That will certainly be very important in the future.
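The automated tuning Abbott mentions can be sketched as a random search over knob settings. The knob names and the synthetic performance function below are invented for illustration; a real tuner would measure an actual benchmark instead:

```python
# Random-search tuning over server "knobs", against a synthetic score function.
# Knob names and the performance model are invented for illustration only.
import random

KNOBS = {"io_queue_depth": range(1, 65), "cpu_governor_step": range(1, 11)}

def performance(settings):
    """Synthetic benchmark: peaks at queue depth 32 and governor step 7."""
    return (-abs(settings["io_queue_depth"] - 32)
            - abs(settings["cpu_governor_step"] - 7))

def tune(trials=500, seed=42):
    """Try random knob combinations and keep the best-scoring one."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        candidate = {knob: rng.choice(values) for knob, values in KNOBS.items()}
        score = performance(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

print(tune())  # should land at or very near the synthetic optimum
```

Real tuners (and the AI-modeling startups mentioned above) use far smarter search than this, such as Bayesian optimization, but the loop of propose, measure, and keep the best is the core idea.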


Gardner: It seems to me a case of the cobbler’s children having no shoes. The IT department doesn’t seem to be on the forefront of using big data to solve their problems.

Abbott: I know. It's really surprising because they are the people best able to do that. But we are seeing some AI coming together. Again, at the recent HPE Discover conference, HPE InfoSight made news as a tool that’s starting to do that analysis more. It came from the Nimble acquisition and began as a storage-specific product. Now it’s broadening out, and it seems they are going to be using it quite a lot in the future.

Gardner: Perhaps we have been looking for a new officer or office of leadership to solve multi-cloud IT complexity, but maybe it's going to be a case of the machines running the machines.

Faith in future automation 

Abbott: A lot of automation will be happening in the future, but that takes trust. We have seen AI waves [of interest] over the years, of course, but the new wave of AI still has a trust issue. It takes a bit of faith for users to hand over control.

But as we have talked about, with multi-cloud, the edge, and things like microservices and containers -- where you split up applications into smaller parts -- all of that adds to the complexity and requires a higher level of automation that we haven’t really quite got to yet but are going toward.

Gardner: What recommendations can we conjure for enterprises today to start them on the right path? I’m thinking about the economics of IT consumption, perhaps getting more of a level playing field or a common denominator in terms of how one acquires an operating basis using different finance models. We have heard about the use of these plans by HPE, HPE GreenLake Flex Capacity, for example.

What steps would you recommend that organizations take to at least get them on the path toward finding a better way to procure, run, and optimize their IT?

Abbott: I actually recently wrote a research paper for HPE on the eight essentials of edge-to-cloud and hybrid IT management. The first thing we recommended was a proactive cloud strategy. Think out your cloud strategy, of where to put your workloads and how to distribute them around to different clouds, if that’s what you think is necessary.

Then modernize your existing technology. Try and use automation tools on that traditional stuff and simplify it with hyperconverged and/or composable infrastructure so that you have more flexibility about your resources.

Make the internal stuff more like a cloud. Take out some of that complexity. It has to be quick to implement. You can’t spend six months doing this, or something like that.
Some of these tools we are seeing, like HPE OneView and HPE OneSphere, for example, are a better bet than some of the traditional huge management frameworks that we used to struggle with.

Make sure it's future-proof. You have to be able to use operating system and virtualization advances [like containers] that we are used to now, as well as public cloud and open APIs. This helps accelerate things that are coming into the systems infrastructure space.

Then strive for everything-as-a-service, so use cloud consumption models. You want analytics, as we said earlier, to help understand what's going on and where you can best distribute workloads -- from the cloud to the edge or on-premises, because it's a hybrid world and that’s what we really need.

And then make sure you can control your spending and utilization of those services, because otherwise they will get out of control and you won't save any money at all. Lastly, be ready to extend your control beyond the data center to the edge as things get more distributed. A lot of the computing will increasingly happen close to the edge.
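Abbott's point about controlling spending and utilization can be sketched as a simple budget check over per-service usage records. The service names and dollar figures are illustrative, not real billing data:

```python
# Flag services whose month-to-date spend exceeds a budget threshold.
# Service names and dollar figures are illustrative, not real billing data.

BUDGETS = {"compute": 5000.0, "storage": 1200.0, "analytics": 800.0}

usage_records = [
    ("compute", 1800.0), ("compute", 2400.0),
    ("storage", 300.0), ("analytics", 950.0),
]

def overspend_report(records, budgets, warn_at=0.8):
    """Sum spend per service; report services near or over their budget."""
    totals = {}
    for service, amount in records:
        totals[service] = totals.get(service, 0.0) + amount
    report = {}
    for service, spent in totals.items():
        budget = budgets[service]
        if spent > budget:
            report[service] = "over budget"
        elif spent > warn_at * budget:
            report[service] = "warning"
    return report

print(overspend_report(usage_records, BUDGETS))
```

In practice the records would come from a provider's billing export or cost API rather than a hard-coded list, but the governance loop of aggregate, compare, and alert is the same.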

Computing close to the edge

Abbott: Yes. That has to be something you start working on now. If you have software-defined infrastructure, that's going to be easier to distribute than if you are still wedded to particular systems, as the old, traditional model was.

Gardner: We have talked about what companies should do. What about what they shouldn't do? Do you just turn off the spigot and say no more cloud services until you get control?

It seems to me that that would stifle innovation, and developers would be particularly angry or put off by that. Is there a way of finding a balance between creative innovation that uses cloud services, but within the confines of an economic and governance model that provides oversight, cost controls, and security and risk controls?

Abbott: The best way is to use some of these new tools as bridging tools. So, with hybrid management tools, you can keep your existing mission-critical applications running and make sure that they aren't disrupted. Then, gradually you can move over the bits that make sense onto the newer models of cloud and distributed edge.
You don’t do it in one big bang. You don’t lift-and-shift everything from one to the other, or, as some people have done, reverse back out of the cloud when it has not worked out. It's about keeping both worlds going in a controlled way. You must make sure you measure what you are doing, and you know what the consequences are, so it doesn't get out of control.


Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


You may also be interested in:

Wednesday, January 23, 2019

A discussion with IT analyst Martin Hingley on the culmination of 30 years of IT management maturity

The next BriefingsDirect hybrid IT strategies interview explores how new maturity in the management and composition of multiple facets of IT -- from cloud to bare-metal, from serverless to legacy systems -- amounts to the culmination of 30 years of IT evolution.

We’ll hear now from an IT industry analyst about why -- for perhaps the first time -- we’re able to gain an uber-view over all of IT operations. And we’ll explore how increased automation over complexity such as hybrid and multicloud deployments sets the stage for artificial intelligence (AI) in IT operations, or AIOps.

It may mean finally mastering IT heterogeneity and giving businesses the means to truly manage how they govern and sustain all of their digital business assets.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Here to help us define the new state of total IT management is Martin Hingley, President and Market Analyst at ITCandor Limited, based in Oxford, UK. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Looking back at IT operations, it seems that we have added a lot of disparate and hard-to-manage systems – separately and in combination -- over the past 30 years. Now, with infrastructure delivered as services and via hybrid deployment models, we might need to actually conquer the IT heterogeneity complexity beast – or at least master it, if not completely slay it.

Do you agree that we’re entering a new era in the evolution of IT operations and approaching the need to solve management comprehensively, over all of IT?

Hingley: I have been an IT industry analyst for 35 years, and it’s always been the same. Each generation of systems comes in and takes over from the last, which has always left operators with the problem of trying to manage the new with the old.

Hingley
A big shift was the client/server model in the late 1980s and early 1990s, with the influx of PC servers and the wonderful joy of having all these new systems. The problem was that you couldn’t manage them under the same regime. And we have seen a continuous development of that problem over time.

It’s also a different problem depending on the size of organization. Small- to medium-sized (SMB) companies can at least get by with bundled systems that work fine and use Microsoft operating systems. But the larger organizations generate a huge mixture of resources.

Cloud hasn’t helped. Cloud is very different from your internal IT stuff -- the way you program it, the way you develop applications. It has a wonderful cost proposition; at least initially. It has a scalability proposition. But now, of course, these companies have to deal with all of this [heterogeneity].

Now, it would be wonderful if we get to a place where we can look at all of these resources. A starting point is to think about things as a service catalog, at the center of your corporate apps. And people are beginning to adopt that way of thinking, even if it doesn’t yet sit in everybody’s brain.

So, you start to be able to compose all of this stuff. I like what Hewlett Packard Enterprise (HPE) is doing [with composable infrastructure]. … We are now getting to the point where you can do it, if you are clever. Some people will, but it’s a difficult, complex subject.

Gardner: The idea of everything-as-a-service gives you the opportunity to bring in new tools. Because organizations are trying to transform themselves digitally -- and the cloud has forced them to think about operations and development in tandem -- they must identify the most efficient mix of cloud and on-premises deployments.

They also have to adjust to a lack of skills by automating and trying to boil out the complexity. So, as you say, it’s difficult.

But if 25 percent of companies master this, doesn’t that put them in a position of being dominant? Don’t they gain an advantage over the people who don’t?

Hingley: Yes, but my warning from history is this. With mainframes, we thought we had it all sorted out. We didn’t. We soon had client/server, and then mini-computers with those UNIX systems, all with their own virtualizations and all that wonderful stuff. You could isolate the data in one partition from application data from a different application. We had all of that, and then along comes the x86 server.
How to Remove Complexity
From Multi-cloud
And Hybrid IT

It’s an architectural issue rather than a technology issue. Now we have cloud, which is very different from the on-premises stuff. My warning is let’s not try and lock things down with technology. Let’s think about it as architecture. If we can do that, maybe we can accommodate neuromorphic and photonic and quantum computing within this regime in the future. Remember, the people who really thought they had it worked out in previous generations found out that they really hadn’t. Things moved on.

Gardner: And these technology and architectural transitions have occurred more frequently and accelerated in impact, right?

Beyond the cloud, IT is life

Hingley: I have been thinking about this quite a lot. It’s a weird thing to say, but I don’t think “cloud” is a good name anymore. I mean, if you are a software company, you’d be an idiot if you didn’t make your products available as a service.

Every company in the world uses the cloud at some level. Basically, there is no longer a choice about whether we use the cloud. All those companies that thought they didn’t, when people actually looked, found they were using the cloud a lot in different departments across the organization. So it’s a challenge, yet things constantly change.

If you look 20 years in the future, every single physical device we use will have some level of compute built into it. I don’t think people like you and I are going to be paid lots of money for talking about IT as if it were a separate issue.

It is the world economy, it just is; so, it becomes about how well you manage everything together.

As this evolves, there will be genuinely new things … to manage this. It is possible to manage your resources in a coherent way, and to sit over the top of the heterogeneous resources and to manage them.

Gardner: A tandem trend to composability is that more-and-more data becomes available. At the edge, smart homes, smart cities, and also smarter data centers. So, we’re talking about data from every device in the data center through the network to the end devices, and back again. We can even determine how the users consume the services better and better.

We have a plethora of IT ops data that we’re only starting to mine for improving how IT manages itself. And as we gain a better trail of all of that data, we can apply machine learning (ML) capabilities, to see the trends, optimize, and become more intelligent about automation. Perhaps we let the machines run the machines. At least that’s the vision.

Do you think that this data capability has pushed us to a new point of manageability? 

Data’s exploding, now what? 

Hingley: A jetliner flying across the Atlantic creates 5TB of data; each one. And how many fly across the Atlantic every day? Basically you need techniques to pick out the valuable bits of data, and you can’t do it with people. You have to use AI and ML.

The other side is, of course, that data can be dangerous. We see with the European Union (EU) passing the General Data Protection Regulation (GDPR), saying it’s a citizens’ right within the EU to have privacy protected and data associated with them protected. So, we have all sorts of interesting things going on.

The data is exploding. People aren’t filtering it properly. And then we have potential things like autonomous cars, which are going to create massive amounts of data. Think about the security implications, somebody hacking into your system while you are doing 70 miles an hour on a motorway.

I always use the parable of the seeds. Remember that some seeds fall on fallow ground, some fall in the middle of the field. For me, data is like that. You need to work out which bits of it you need to use, you need to filter it in order to get some reasonable stuff out of it, and then you need to make sure that whatever you are doing is legal. I mean, it’s got to be fun.
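Hingley's parable of the seeds -- keeping only the data worth acting on -- can be sketched with a simple statistical filter. The sensor readings below are invented sample data; real systems would apply far more sophisticated ML models:

```python
# Keep only readings that deviate sharply from the mean (the "valuable bits").
# The readings below are invented sample data.
import statistics

def interesting(readings, z_threshold=2.0):
    """Return readings more than z_threshold standard deviations from the mean."""
    mean = statistics.mean(readings)
    stdev = statistics.pstdev(readings)  # population standard deviation
    if stdev == 0:
        return []  # nothing deviates in a flat stream
    return [x for x in readings if abs(x - mean) / stdev > z_threshold]

engine_temps = [71, 70, 72, 69, 71, 70, 118, 70, 71, 72]  # one anomalous spike
print(interesting(engine_temps))
```

Even this crude filter discards the routine 99 percent of a telemetry stream and keeps the spike an operator actually needs to see, which is the point of the parable.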
Gardner: If businesses are tasked with this massive and growing data management problem, it seems to me they ought to get their IT house in order. That means across a vast heterogeneity of systems, deployments, and data types. That should happen in order to master the data equation for your lines of business applications and services.

How important is it then for AIOps -- applying AI principles to the operations of your data centers – to emerge sooner rather than later?

You can handle the truth 

Hingley: You have to do it. If you look at GDPR or Sarbanes-Oxley before that, the challenge is that you need a single version of the truth. Lots of IT organizations don’t have a single version of the truth.

If they are subpoenaed to supply every email that has the words “Monte Carlo” in it, they couldn’t do it. There are probably 25 copies of all the emails, and there’s no way of organizing them. So data governance is hugely important; it’s not a nice-to-have, it’s essential. And new regulations are coming -- it’s not just the EU; GDPR-style rules are being adopted in lots of countries.
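Hingley's subpoena example comes down to two steps: deduplicating the many copies of each email, then searching what remains. A minimal sketch over invented messages:

```python
# Deduplicate email copies by content hash, then search for a phrase.
# The messages below are invented sample data.
import hashlib

emails = [
    "Q3 results attached.",
    "Trip report: Monte Carlo conference, day one.",
    "Trip report: Monte Carlo conference, day one.",  # duplicate copy
    "Lunch on Friday?",
]

def search_unique(messages, phrase):
    """Return each distinct message containing the phrase, skipping duplicates."""
    seen, hits = set(), []
    for body in messages:
        digest = hashlib.sha256(body.encode()).hexdigest()
        if digest in seen:
            continue  # skip the 25th copy of the same mail
        seen.add(digest)
        if phrase.lower() in body.lower():
            hits.append(body)
    return hits

print(search_unique(emails, "Monte Carlo"))  # one hit, duplicates collapsed
```

Real e-discovery tooling also normalizes headers, attachments, and near-duplicates, but hashing for exact copies is the standard first pass.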


It’s essential to get your own house in order. And there’s so much data in your organization that you are going to have to use AI and ML to be able to manage it. And it has to go into IT Ops. I don’t think it’s a choice, I don’t think many people are there yet. I think it’s nonetheless a must do.

Gardner: We’ve heard recently from HPE about the concept of a Composable Cloud, and that includes elevating software-defined networking (SDN) to a manageability benefit. This helps create a common approach to the deployment of cloud, multi-cloud, and hybrid-cloud.

Is this the right direction to go? Should companies be thinking about a common denominator to help sort through the complexity and build a single, comprehensive approach to management of this vast heterogeneity?

Hingley: I like what HPE is doing, in particular the mixing of the different resources. You also have the HPE GreenLake model underneath, so you can pay for only what you use. By the way, I have been an analyst for 35 years; if the industry had actually shifted every time it started talking about the need to move from CAPEX to OPEX, we would be at 200 percent OPEX by now.

In the bad times, we move toward OPEX. In the good times, we secretly creep back toward CAPEX because it has financial advantages. You have to be able to mix all of these together, as HPE is doing.

Moreover, in terms of the architecture, the network fabric approach, the software-defined approach, and the API connections are essential to move forward. You have to get beyond point products. I hope that HPE -- and maybe a couple of other vendors -- will propose something that’s very useful and that helps people sort out this new world.