Friday, January 3, 2020

As hybrid IT complexity ramps up, operators look to data-driven automation tools

https://community.hpe.com/t5/Shifting-to-Software-Defined/IT-complexity-is-growing-What-can-be-done/ba-p/7038746#.Xdf_k9VKiM8

The next edition of the BriefingsDirect Voice of the Innovator podcast series examines the role and impact of automation on IT management strategies.

Growing complexity from the many moving parts in today's IT deployments is forcing managers to seek new productivity tools. Moving away from manual processes to bring higher levels of automation to data center infrastructure has long been a priority for IT operators, but new tools and methods now make composability and automation better options than ever.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.


Here to help us learn more about the advancing role and impact from IT automation is Frances Guida, Manager of HPE OneView Automation and Ecosystem Product Management at Hewlett Packard Enterprise (HPE). The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the top drivers, Frances, for businesses seeking higher levels of automation and simplicity in their IT infrastructure?

Guida: It relates to what’s happening at a business level. It’s a truism that business today is moving faster than it ever has before. That puts pressure on all parts of a business environment -- and that includes IT. And so IT needs to deliver things more quickly than they used to. They can’t just use the old techniques; they need to move to much more automated approaches. And that means they need to take work out of their operational environments.

Gardner: What’s driving the complexity that makes such automation beneficial?

IT means business 

Guida: It again starts from the business. IT used to be a support function, to support business processes. So, it could go along on its own time scale. There wasn’t much that the business could or would do about it.

In 2020, technology is now part of the fabric of most of the products, services, and experiences that businesses offer. So when technology is part of an offering, all of a sudden technology is how a business is differentiated. As part of how a business is differentiated, business leaders are not going to take, “Oh, we will get to it in 18 months,” as an answer. If that’s the answer they get from the IT department, they are going to go look for other ways of getting things done.

And with the advances of public cloud technology, there are other ways of getting things done that don’t come from an internal IT department. So IT organizations need to be able to keep up with the pace of business change, because businesses aren’t going to accept their historical time scale.

Gardner: Does accelerating IT via automation require an ecosystem of partners, or is there one tool that rules them all?

Guida: This is not a one-size-fits-all world. I talk to customers in our HPE Executive Briefing Centers regularly. The first thing I ask them is, “Tell me about the toolsets you have in your environment.” I often ask them about what kinds of automation toolsets they have. Do you have Terraform or Ansible or Chef or Puppet or vRealize Orchestrator or something else? It’s not uncommon for the answer to be, “Yes.” They have all of them.

So even within a customer’s environment, they don’t have a single tool. We need to work with all the toolsets that the customers have in their IT environments.

Gardner: It almost sounds like you are trying to automate the automation. Is that fair?

Guida: We definitely are trying to take some of the hard work that has historically gone into automation and make it much simpler.

Gardner: IT operations complexity is probably only going to increase, because we are now talking about pushing compute operations -- and even micro data centers -- out to the edge in places like factories, vehicles, and medical environments, for example. Should we brace ourselves now for a continuing ramp-up of complexity and diversity when it comes to IT operations?

Guida: Oh, absolutely. You can’t have a single technology that’s going to answer everything. Is the end user going to interface through a short message service (SMS) or are they going to use a smartphone? Are they going to be on a browser? Is it an endpoint that interacts with a system that’s completely independent of any user base technology? All of this means that IT has to be multifaceted.

Even if we look at data center technologies, for the last 15 years virtualization has been pretty much the standard way that IT deploys new systems. Now, increasingly, organizations are looking at a set of applications that don’t run in virtual machines (VMs), but rather are container-based. That brings a whole other set of complexity they have to think about in their environments.

Complexity is like entropy; it just keeps growing. When we started thinking about bringing a lot more flexibility to on-premises data center environments, we looked holistically at the problem. I don’t think the problem can only be addressed through better automation; in fact, it has to be addressed at a deeper level.

And so with our composable infrastructure strategies, we thought architecturally about how we could bring the same kind of flexibility you have in a public cloud environment to on-premises data centers. We realized we needed a way to liberate IT beyond the boundaries of physical infrastructure by being able to group that physical infrastructure into pools of resources that could be much more fluid and where the physical aspects could be changed.

Now, there is some hardware infrastructure technology in that, but a lot of that magic is done through software, using software to configure things that used to be done in a physical manner.

https://community.hpe.com/t5/Shifting-to-Software-Defined/How-to-leverage-the-greatest-minds-in-the-world-in-your-own-data/ba-p/7031252#.XdgAk9VKiM8
So we defined a layer of software-defined intelligence that captures all of the things you need to know about configuring physical hardware -- whether it's firmware levels, BIOS settings, or connections. We define and calculate all of that in software.

And automation is the icing on that cake. Once you have your infrastructure that can be defined in software, you can program it. That’s where the automation comes in, being able to use everyday automation tools that organizations are already using to automate other parts of their IT environment and apply that to the physical infrastructure without a whole bunch of unnatural acts that were previously required if you wanted to automate physical infrastructure.
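As a rough illustration of what programming the physical infrastructure can look like, here is a minimal Python sketch that logs into a management appliance and applies a server profile template to one physical server over REST. The appliance address, credentials, template name, endpoint paths, and field names are illustrative assumptions modeled on publicly documented HPE OneView REST API conventions, not verified product code.

```python
# Sketch only: apply a software-defined server profile to physical hardware
# through a OneView-style REST API. Host, credentials, endpoints, and field
# names below are assumptions for illustration.
import requests

ONEVIEW = "https://oneview.example.com"   # hypothetical appliance address
BASE_HEADERS = {"X-API-Version": "1200", "Content-Type": "application/json"}


def login(user: str, password: str) -> dict:
    """Authenticate and return headers that carry the session token."""
    resp = requests.post(f"{ONEVIEW}/rest/login-sessions",
                         json={"userName": user, "password": password},
                         headers=BASE_HEADERS, verify=False)
    resp.raise_for_status()
    return {**BASE_HEADERS, "Auth": resp.json()["sessionID"]}


def apply_profile(auth: dict, template_name: str, hardware_uri: str) -> dict:
    """Create a server profile from a named template and bind it to one server."""
    templates = requests.get(f"{ONEVIEW}/rest/server-profile-templates",
                             params={"filter": f"name='{template_name}'"},
                             headers=auth, verify=False).json()["members"]
    profile = {
        "name": f"{template_name}-node01",
        "serverProfileTemplateUri": templates[0]["uri"],
        "serverHardwareUri": hardware_uri,  # the physical server to configure
    }
    resp = requests.post(f"{ONEVIEW}/rest/server-profiles",
                         json=profile, headers=auth, verify=False)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    session = login("automation-user", "example-password")
    apply_profile(session, "container-host-template",
                  "/rest/server-hardware/example-uuid")
```

A call like this could just as easily be wrapped in an Ansible module, a Terraform provider, or a PowerShell cmdlet, which is why existing toolsets can drive the physical layer once it is expressed in software.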

Gardner: Are we talking about a fundamental shift in how infrastructure should be conceived or thought of here?

Consolidate complexity via automation 

Guida: There has been a saying in the IT industry for a while about moving from pets to cattle; now we even talk about herds. You can brute-force that transition by trying to automate against all of the low-level application programming interfaces (APIs) in physical infrastructure today. Most infrastructure today is programmable, with rare exceptions.

But then you as the organization are doing the automation, and you must internalize that and make your automation account for all of the logic. For example, if you then make a change in the storage configuration, what does that mean for the way the network needs to be configured? What does that mean for firmware settings? You would have to maintain all of that in your own automation logic.

There are some organizations in the world that have the scale of automation engineering to be able to do that. But the vast majority of enterprises don't have that capability. And so what we do with composable infrastructure, HPE OneView, and our partner ecosystem is encapsulate all of that in our software-defined intelligence. So all you have to do is take that configuration file and apply it to a set of physical hardware. It brings things that used to be extremely complex down to what a standard IT organization is capable of doing today.


Gardner: And not only is that automation going to appeal to the enterprise IT organizations, it’s also going to appeal to the ecosystem of partners. They now have the means to use the composable infrastructure to create new value-added services.

How does HPE’s composability benefit both the end-user organizations and the development of the partner ecosystem?

Guida: When I began the composable ecosystem program, we actually had two or three partners. This was about four years ago. We have now grown to more than 30 different integrations in place today, with many more partners that we are talking to. And those range from the big, everyday names like VMware and Microsoft to smaller companies that may be present in only a particular geography.

https://www.hpe.com/us/en/home.html
But what gets them excited is that, all of a sudden, they are able to bring better value to their customers. They are able to deliver, for example, an integrated monitoring system. Or maybe they are already doing application monitoring, and all of a sudden they can add infrastructure monitoring. Or they may already be doing facilities management, managing the power and cooling, and all of a sudden they get a whole bunch of data that used to be hard to put in one place. Now they can get a whole bunch of data on the thermals, of what’s really going on at the infrastructure level. It’s definitely very exciting for them.

Gardner: What jumps out at you as a good example of taking advantage of what composable infrastructure can do?

Guida: The most frequent conversations I have with customers today begin with basic automation. They have many tools in their environment; I mentioned many of them earlier: Ansible, Terraform, Chef, Puppet, or even just PowerShell or Python; or in the VMware environment, vRealize Orchestrator.

They have these tools, and they really appreciate what we have been able to do by publishing these integrations on GitHub, for example: having a community, and having direct support back to our engineers who are doing this work. They are able to add that into their tools environment pretty straightforwardly.

And we at HPE have also done some of the work ourselves in the open source tools projects. Pretty much every automation tool that’s out there in mainstream use by IT -- we can handle it. That’s where a lot of the conversations we have with customers begin.

If they don’t begin there, they start back in basic IT operations. One of the ways people take advantage of the automation in HPE OneView -- but they don’t realize they are taking advantage of automation -- is in how OneView helps them integrate their physical infrastructure into a VMware vCenter or a Microsoft System Center environment.

Visualize everything, automatically 

For example, in a VMware vCenter environment, an administrator can use our plug-in and it automatically sucks in all of the data from their physical infrastructure that’s relevant to their VMware environment. They can see things in their vCenter environment that they otherwise couldn’t see.

They can see everything from a VM that’s sitting on the VM host that’s connected through the host bus adapters (HBAs) out to the storage array. There is the logical volume. And they can very easily visualize the entire logical as well as physical environment. That’s automation, but you are not necessarily perceiving it as automation. You are perceiving it as simply making an IT operations environment a lot easier to use.
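The vCenter plug-in itself is proprietary, but as a rough sketch of the kind of host-to-HBA inventory it correlates automatically, the open source pyVmomi library can walk the same objects on the vSphere side. The vCenter address and credentials below are placeholders.

```python
# Rough sketch: list ESXi hosts and their host bus adapters from vCenter with
# the open source pyVmomi SDK -- the kind of logical-to-physical mapping the
# plug-in described above surfaces automatically. Address and credentials are
# placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab use only; skips certificate checks
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="example-password",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name)
        for hba in host.config.storageDevice.hostBusAdapter:
            # Each HBA is one hop on the path from a VM out to the storage array.
            print(f"  HBA {hba.device}: {hba.model} (status: {hba.status})")
finally:
    Disconnect(si)
```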

For that level of IT operations integration, VMware and Microsoft environments are the poster children. But other tools -- like Micro Focus, some of the capacity planning tools, and event management tools such as ServiceNow -- are another big use case category.

The automation benefits -- instead of just going down into IT operations -- can also go up to enable more cloud management. Another way IT organizations take advantage of the HPE automation ecosystem is when they say, "Okay, it's great that you can automate a piece of physical infrastructure, but what I really need to do -- and what I really care about -- is automating a service. I want to be able to provision my SQL database server that's in the cloud."

That not only affects infrastructure pieces, it touches a bunch of application pieces, too. Organizations want it all done through a self-service portal. So we have a number of partners who enable that.

Morpheus comes to mind. We have quite a lot of engagements today with customers who are looking at Morpheus as a cloud management platform and taking advantage of how they can not only provision the logical aspects of their cloud, but also the physical ones through all of the integrations that we have done.

Gardner: How do HPE and the partner ecosystem automate the automation, given the complexity that comes with the newer hybrid deployment models? Is that what HPE OneView is designed to help do these days?

Automatic, systematic, cost-saving habit 

Guida: I want to talk about a customer that is an online retailer. The retail world is obviously highly dynamic, and technology is at the very forefront of the product they deliver; in fact, technology is the product that they deliver.

They have a very creative marketing department that is always looking for new ways to connect to their customers. That marketing department has access to a set of application developers who are developing new widgets, new ways of connecting with customers. Some of those developers like to develop in VMs, which is more old school; some of the developers are more new school and they prefer container-based environments.

The challenge the IT department has is that from one week to the next they don’t fully know how much of their capacity needs to be dedicated to a VM versus a container environment. It all depends on which promotions or programs the business decides it wants to run at any time.

So the IT organization needed a way to quickly switch an individual VM host server to be reconfigured as a bare-metal container host. They didn't want to pay a VM tax on their container host. They identified that if they were going to do that manually, there were dozens and dozens of steps -- I think they had 36 or 37 -- that they needed to perform. And they could not figure out a way to automate each one of those 37 steps individually.

When we brought them an HPE Synergy infrastructure -- managed by OneView, automated with Ansible -- they instantly saw how that was going to help solve their problems. They were going to be able to change their environment from one personality to another personality in a completely automated fashion. And now they are able to do that changeover in just 30 minutes. Instead of needing dozens of manual steps, they have zero manual steps; everything is fully automated.

And that enables them to respond to the business requirements. The business needs to be able to run whatever programs and promotions it is that they want to run -- and they can’t be constrained by IT. Maybe that gives a picture of how valuable this is to our customers.
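To make the shape of that changeover concrete, here is a skeleton of such a switchover flow in Python. Every step is a stub that only prints what it would do; the step names and their ordering are assumptions about how a personality switch like this could be sequenced, not the retailer's or HPE's actual playbooks.

```python
# Skeleton only: re-purpose one compute node from a VM host to a bare-metal
# container host. Each step is a stub; in a real environment each would call
# the OneView and Ansible integrations described above.
import time


def evacuate_vm_host(node: str) -> None:
    print(f"[1/4] Place {node} in maintenance mode and migrate its VMs away")


def unassign_current_profile(node: str) -> None:
    print(f"[2/4] Unassign the 'VM host' server profile from {node}")


def apply_container_profile(node: str) -> None:
    print(f"[3/4] Apply the 'bare-metal container host' profile "
          f"(firmware, BIOS, network and SAN connections) to {node}")


def join_container_cluster(node: str) -> None:
    print(f"[4/4] Install the container runtime on {node} and join the cluster")


def switch_personality(node: str) -> None:
    """Run the whole changeover unattended, replacing the ~37 manual steps."""
    start = time.time()
    for step in (evacuate_vm_host, unassign_current_profile,
                 apply_container_profile, join_container_cluster):
        step(node)
    print(f"Changeover complete in {time.time() - start:.1f}s "
          "(about 30 minutes in the environment described above)")


if __name__ == "__main__":
    switch_personality("synergy-frame1-bay3")
```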

Gardner: Yes, it speaks to the business outcomes, which are agility and speed, and at the same time the IT economics are impacted there as well.

Speaking of IT economics and IT automation, we have been talking in terms of process and technology. But businesses are also seeking to simplify and automate the economics of how they acquire and spend on IT, perhaps more on a pay-per-use basis.

Is there alignment between what you are doing in automation and what HPE is doing with HPE GreenLake? Do the economics and automation reinforce one another?

Guida: Oh, absolutely. We bring physical infrastructure flexibility, and HPE GreenLake brings financial flexibility. Those go hand in hand. In fact, the example that I was just speaking about, the online retailer, they are very, very busy during the Christmas shopping season. They are also busy for Valentine’s Day, Mother’s Day, and back-to-school shopping. But they also have times where they are much less busy.

They have HPE GreenLake integrated into their environment, so in addition to having physical flexibility, they are financially aligned through a flexible capacity program, paying for technology in the way that their business model works. So these things go hand in hand.

https://www.hpe.com/us/en/services/flexible-capacity.html?chatsrc=ot-en&jumpid=ps_muqbvc5xh2_aid-510455007&gclid=EAIaIQobChMIgbTwgZr-5QIViLzACh0c8AkNEAAYASAAEgLi_fD_BwE&gclsrc=aw.ds

As I said earlier, I talk to a lot of HPE customers because I am based in the San Francisco Bay Area, where we have our corporate headquarters, and I am in our Executive Briefing Center two to three times a week. There are almost no conversations I am part of that don't eventually lead to the financial aspects, as well as the technical aspects, of how all the technology works.

Gardner: Because we have opened IT automation up to the programmatic level, a new breed of innovation can be further brought to bear. Once people get their hands on these tools and start to automate, what have you seen on the innovation side? What have people started doing with this that you maybe didn’t even think they would do when you designed the products?

Single infrastructure signals innovation 

Guida: Well, I don't know that we didn't think about this, but one of the things we have been able to do is make something that the IT industry has been talking about for a while actually work in an on-premises IT environment.

There are lots of organizations that have IT capacity that is only used some of the time. A classic example is an engineering organization that provides a virtual desktop infrastructure (VDI) capability for engineers. These engineers need a bunch of analytics applications -- maybe it’s genomic engineering, seismic engineering, or fluid dynamics in the automotive industry. They have multiple needs. Typically they have been running those on different sets of physical infrastructures.

With our automation, we can enable them to collapse that all into one set of infrastructure, which means they can be much more financially efficient. Because they are more financially efficient on the IT side, they are able to then devote more of their dollars to driving innovation -- finding new ways of discovering oil and gas under the ground, new ways of making automobiles much more efficient, or uncovering new secrets within our DNA. By spending less on their IT infrastructure, they are able to spend more on what their core business innovation should be.

Gardner: Frances, I have seen other vendors approach automation with a tradeoff. They say, “Well, if you only use our cloud, it’s automated. If you only use our hypervisor, it’s automated. If you only use our database, it’s automated.”

But HPE has taken a different tack. You have treated heterogeneity as the norm, and the complexity that results from that heterogeneity as what automation needs to focus on. How far ahead is HPE on composability and automation? How differentiated are you from others who have put a tradeoff in place when it comes to solving automation?

Guida: We have had composable infrastructure on the market for three-plus years now. Our HPE Synergy platform, for example, now has a more than $1 billion run rate for HPE. We have 3,600 customers and counting around the world. It’s been a tremendously successful business for us.

I find it interesting that we don’t see a lot of activity out there, of people trying to mimic or imitate what we have done. So I expect composability and automation will remain fundamentally differentiating for us from many of our traditional on-premises infrastructure competitors.

It positions us very well to provide an alternative for organizations that like the flexibility of cloud services but prefer to have them in their on-premises environments. It's been tremendously differentiating for us. I am not seeing anyone else coming on strong in any way.

Gardner: Let’s take a look to the future. Increasingly, not only are companies looking to become data-driven, but IT organizations are also seeking to become data-driven. As we gather more data and inference, we start to be predictive in optimizing IT operations.

I am, of course, speaking of AIOps. What does that bring to the equation around automation and composability? How will AIOps change this in the coming couple of years?

Automation innovation in sight with AIOps 

Guida: That's a real opportunity for further innovation in the industry. We are at the very early stages of learning how to take advantage, in a systematic way, of all of the insights we can derive from knowing what is actually happening within our IT environments, and of mining those insights. Once we have mined those insights, it creates the possibility for us to take automation to another level.

We have been throwing around terms like self-healing for a couple of decades, but a lot of organizations are not yet ready for something like self-healing infrastructure. There is a lot of complexity within our environments. And when you put that into a broader heterogeneous data center environment, there is even more complexity. So there is some trepidation.

Over time, for sure, the industry will get there. We will be forced to, because the same capabilities are going to be available in other execution venues like the public cloud. The whole notion of what we have done with the automation of composable infrastructure is absolutely a great foundation for us as we take our customers on these next journeys around automation.


Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.
