Wednesday, July 25, 2018

How HPE and Docker together accelerate and automate hybrid cloud adoption

The next BriefingsDirect hybrid cloud strategies discussion examines how the use of containers has moved from developer infatuation to mainstream enterprise adoption.

As part of the wave of interest in containerization technology, Docker, Inc. has emerged as a leader in the field and has greased the skids for management and ease of use.

Meanwhile, Hewlett Packard Enterprise (HPE) has embraced containers as a way to move beyond legacy virtualization and to provide both developers and IT operators more choice and efficiency as they seek new hybrid cloud deployment scenarios.

Like the proverbial chocolate and peanut butter coming together -- or as I like to say, with Docker and HPE, fish and chips -- the two make a highly productive alliance and cloud ecosystem tag team.

Listen to the podcast. Find it on iTunes. Get the mobile app.  Read a full transcript or download a copy.

Here to describe exactly how the Docker and HPE alliance accelerates modern and agile hybrid architectures, we are joined by two executives, Betty Junod, Senior Director of Product and Partner Marketing at Docker, and Jeff Carlat, Senior Director of Global Alliances at HPE. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Jeff, how do containers -- and how does Docker specifically -- help data center architects achieve their goals?

Carlat: When you look at where technology has gone, through the virtualization of applications, we are moving into a whole new era where we need much more agility in applications -- and in IT operations.

We believe that our modern infrastructure and our partnership with Docker -- specifically around containers and container orchestration -- provide businesses of all sizes much lower acquisition costs for deploying infrastructure, and lower ongoing operating costs. And, of course, the game from a business standpoint is all about driving profitability and shareholder value.

Second, there is huge value when it comes to Docker and containers around extending the life of legacy applications. Modernizing traditional apps and being able to extend their life and bring them forward to a new modern architecture -- that drives greater efficiencies and lower risk.

Gardner: Betty, how do you see the alignment between HPE’s long-term vision for hybrid computing and edge-to-core computing and what Docker and containerization can do? How do these align?

Align your apps

Junod: It’s actually a wonderful alignment, because what we look at from a Docker perspective is specifically the application layer -- bringing choice, agility, and security at the application layer in a way that can be married with what HPE is doing at the infrastructure layer across the hybrid cloud.

Our customers are saying, “We want to go to cloud, but we know the world is hybrid. We are going to be hybrid. So how do we do that in a way that doesn’t blow up all of our compliance if we make a change? Is this all for new apps? Or what do I do with all the stuff that I have accrued over the decades that’s eating into all of my budget?”

When it comes to transformation, it is not just an infrastructure story. It's not just an applications story. It's how do I use those two together in a way that's highly efficient and also very agile for managing the stuff I already have today. Can I make that cheaper, better, stronger -- and how do I enable the developers to build all the new services for the future that will better engage with my customers?

Gardner: How does DevOps, in particular, align? There is a lot of developer allegiance to the Docker value proposition. But IT operators are also very much interested in what HPE is bringing to market, such as better management, better efficiency, and automation.

How are your two companies an accelerant to DevOps?

The future is Agile 

Junod: DevOps is interesting in that it's a word that's been used a lot, along with Agile development. It all stems from the desire for companies to be faster, right? They want to be faster in everything -- faster in delivering new services, faster in time-to-market, as well as faster in responses so they can deliver the best service-level agreements (SLAs) to the customer. It’s very much about how application teams and infrastructure teams work together.

What's great is that Docker brings the ability for developers and operations teams to have a common language, to be able to do their own thing on their timelines without messing up the other side of the house. No more of that Waterfall. Developers can keep developing, shipping, and not break something that the infrastructure teams have set up, and vice versa.

Carlat: Let’s be clear, the world is moving to Agile. I mean, companies are delivering continuous releases and ongoing builds. Those companies that can adopt and embrace that are going to get a leg up on their competition and provide better service levels. So the DevOps community and what we are doing is a perfect match. What Docker and HPE are delivering is ideal for both the Dev and the Ops environments.

Gardner: When you have the fungibility to move workloads around, the operators benefit, because they finally gain more choice about what keeps the trains running on time -- regardless of who is inside those trains, so to speak.

Let's look at some of the hurdles. What prevents organizations from adopting these hybrid cloud and containerization benefits? What else needs to happen?

Make hybrid happen 

Junod: One of the biggest things we hear from our customers is, “Where should I go when it comes to cloud, and how?” They want to make sure that what they do is future-proof. They want to spend their time beholden to their application and customer needs -- and not to a specific cloud A or cloud B.
Learn more about the Docker Enterprise Container Platform.
Because with the new regulations regarding data privacy and data sovereignty, if you are a multinational company, your data sets are going to have to live in a bunch of different places. People want the ability to have things hybrid. But that presents an application and an infrastructure operational challenge.

What's great in our partnership is that we are saying we are going to provide you the safest way to do hybrid; the fastest way to get there. With the Docker layer on top of that, no matter what cloud you pick to marry with your HPE on-premises infrastructure, it’s seamless portability -- and you can have the same operational governance.

Carlat: We also see that enterprises, as they move to gain efficiencies, are on a journey. And the journey around containerization and containers on our modern infrastructure can be daunting at times.

One of the barriers to adoption is complexity -- not knowing where to start. This is where we are partnering deeply, essentially around services capabilities, to bring in our consultative capabilities with Pointnext, do assessments, and help customers establish that journey -- getting them through the maturity of testing and development, and progressing into full production-level environments.

Gardner: Is Cloud Technology Partners, a recent HPE acquisition, also a big plus given that they have been of, by, and for cloud -- and very heavily into containers?

Carlat: Yes. That snaps in naturally with the choice in our hybrid strategy. It's a great bridge, if you will, between what applications you may want on-premises and also using Cloud Technology Partners for leveraging an agnostic set of public cloud providers.

Gardner: Betty, when we think about adoption, sometimes too much of a good thing too soon can provide challenges. Is there anything about people adopting containers too rapidly without doing the groundwork -- the blocking and tackling, around management and orchestration, and even automation -- that becomes a negative? And how does HPE factor into that?

Too much transformation, too soon 

Junod: We have learned over these last few years, across 500 different customers, what does and doesn't work. There is a consistent pattern. The companies that say they want to do DevOps, and cloud, and microservices -- putting all the buzzwords in -- and want to do it all right now for transformation, those organizations tend to fail. That’s because it's too much change at once, like you mentioned.

What we have worked out by collaborating tightly with our partners as well as our customers is that we say, “Pick one, and maybe not the most complicated application you have. Because you might be deploying on a new infrastructure. You are using a new container model. You are going to need to evolve some of your processes internally.”

And if you are going to do hybrid, when is it hybrid? Is it during the development and test in the cloud, and then to on-premises for production? Or is it cloud bursting for scale up? Or is it for failover replication? If you don't have some of that sorted out before you go, well, then you are just stuck with too much stuff, too much of a good thing.
The companies that say they want to do DevOps, cloud, microservices, and do it all right now -- those organizations tend to fail.

What we have partnered with HPE on -- and especially HPE Pointnext from a services standpoint -- is very much an advisory role, to say let's look at your landscape of applications that you have today and let's assess them. Let’s put them in buckets for you and we can pick one or two to start with. Then, let’s outline what’s going to happen with those. How does this inform your new platform choices?

And then once we get some of those kinks worked out and try some of the operational processes that evolve, then after that it’s almost like a factory. They can just start funneling more in.

Gardner: Jeff, a lot of what HPE has been doing is around management and monitoring, governance, and being mindful of security and compliance issues. Things like HPE Synergy and HPE OneView have been in the market for a long time, along with newer products like HPE OneSphere. How are they factoring into allowing containers to be what they should be -- without getting out of control?

Hand in glove

Carlat: We have seen containerization evolve. And the modern architectures such as HPE Synergy and OneView are designed and built for bare metal deployment, containers, or virtualization. It's all designed -- you say it's like fish and chips; in my analogy, it's like a hand in glove -- to allow customers choice, agility, and flexibility.

Our modern infrastructure is not purely designed for containers. We see a lot of virtualization, and Docker runs great in a virtualized environment as well. So it’s not one or the other. So again, it's like a hand in glove.

Gardner: By the way, I know that the Docker whale isn’t technically a fish, but I like to use it anyway.

Let's talk about the rapid adoption now around hyperconverged infrastructure (HCI). How is HCI helping move hybrid cloud forward, particularly for you on the Docker side? Are you seeing it as an accelerant?
Learn how to extend Docker containers across your entire enterprise.
Junod: What you are seeing with hyperconverged -- especially if you relate it to what's going on with the adoption of containers -- is that it's all about agility. They want speed, and they want to be able to spin things out fast, whether it's compute resources or application resources. I think it's a nice marriage of where the entire industry wants to go and what companies are looking for to deliver services faster to their customers.

Carlat: Specifically, hyperconverged represents one of the fastest-growing segments in the market for us. And the folks who are adopting hyperconverged clearly want the choice, agility, and simplicity -- and rapid deployment -- of their applications.

Where we are partnering with Docker is taking HPE SimpliVity, our hyperconverged infrastructure, and building out solutions for test or development, using scripting to be able to deploy it all in a complete environment in 30 minutes or less.

Yes, we are perfectly aligned, and we see hyperconverged as a great area for dropping in infrastructure and testing and development, as well as for midsize IT environments.
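
To make the scripted-deployment idea concrete, here is a minimal sketch of standing up a small dev/test environment programmatically with the Docker SDK for Python. It is illustrative only -- the image names, network name, and port mapping are hypothetical placeholders, not HPE or SimpliVity tooling.

```python
# A minimal sketch, assuming the Docker SDK for Python is installed
# (pip install docker) and a local Docker Engine is running.
import docker

client = docker.from_env()  # connect to the local Docker Engine

# Create an isolated bridge network for the test environment.
client.networks.create("devtest-net", driver="bridge")

# Stand up a database container and an app container on that network.
db = client.containers.run(
    "postgres:10",                      # hypothetical database image
    detach=True, name="devtest-db", network="devtest-net",
    environment={"POSTGRES_PASSWORD": "example"},
)
app = client.containers.run(
    "nginx:latest",                     # hypothetical app front end
    detach=True, name="devtest-app", network="devtest-net",
    ports={"80/tcp": 8080},             # host port 8080 -> container 80
)

print("Environment up:", [c.name for c in client.containers.list()])
```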

Gardner: Recently DockerCon wrapped up. Betty, what was some of the big news there, and how has that had an impact on going to market with a partner like HPE?

Choice, Agility, Security 

Junod: At DockerCon we reemphasized our core pillars: choice, agility, and security, because it's choice in what you want to build. You should as an organization be able to build the best applications with the best components that you feel are right for your application -- and then be able to run that anywhere, in whatever scenario.

Agility is really around speed for delivering new applications, as well as speed for operations teams. Back to DevOps, those two sides have to exist together and in partnership. One can't be fast and the other slow. We want to enable both to be fast together.

And lastly, security. It's really about driving security throughout the lifecycle, from development to production. We want to make sure that we have security built into the entire stack that's supporting the application.
Organizations should be able to build the best applications with the best components and run them anywhere, in any scenario.

We just advanced the platform along those lines. Docker Enterprise Edition 2.0 really started a couple of months ago, so 2.0 is out. As part of that, we announced some technology-preview capabilities. We introduced the integration of Kubernetes, a very popular container orchestration engine, into our core Enterprise Edition platform, and we added the ability to do all of that with Windows as well.

So back to choice; it's a Linux and Windows world. You should be able to use any orchestration you like as part of that.
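
As a hedged illustration of what orchestration choice looks like in practice, the sketch below runs a container image as a replicated Swarm service via the Docker SDK for Python. In Docker EE 2.0, scheduling the same image on Kubernetes instead would typically be done through the UCP console or kubectl rather than this SDK; the image and service names here are hypothetical.

```python
# A minimal sketch, assuming the Docker SDK for Python and a node
# that is (or can become) a Swarm manager.
import docker

client = docker.from_env()

# Swarm mode must be active before services can be created.
try:
    client.swarm.init()
except docker.errors.APIError:
    pass  # already part of a swarm

# Run the image as a replicated, orchestrator-managed service.
service = client.services.create(
    "nginx:latest",                       # hypothetical service image
    name="web",
    mode=docker.types.ServiceMode("replicated", replicas=3),
    endpoint_spec=docker.types.EndpointSpec(ports={8080: 80}),
)
print("Service created:", service.name)
```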

No more kicking the tires 

Carlat: One thing I really noticed at DockerCon was not necessarily just what Docker did, but the significance of major enterprises -- Fortune 500, Fortune 100 enterprises -- that are truly pivoting to the use of containers, and Docker specifically, on HPE.

No longer are they kicking the tires and evaluating. We are seeing full-scale production rollouts in major enterprises. The time is right for customers to modernize, embrace, and adopt containers and container orchestration, and drop that onto a modern infrastructure or architecture. They can then gain the benefits of the efficiency, agility, and security that we have talked about. That is paramount.
Learn more about the Docker Enterprise Container Platform.
Gardner: Along those lines, do you have examples that show how the combination of what HPE brings to the table and what Docker brings to the table combine in a way that satisfies significant requirements and needs in the market?

Junod: I can highlight two customers: Bosch, a major manufacturer in Europe, and DaVita, a healthcare company.

What’s interesting is that Bosch began with a lot of organic use of Docker by their developers, spread all over the place. But they said, “Hang on a second, because developers are working with corporate intellectual property (IP), we need to find a way to centralize that, so it better scales for them -- and it’s also secure for us.”

This is one of the first accounts that Docker and HPE worked on together to bring them an integrated solution. They implemented a new development pipeline. Central IT at Bosch is doing the governance, management, and the security around the images and content. But each application development team, no matter where they are around the world, is able to spin up their own separate clusters and then be able to do the development and continuous integration on their own, and then publish the software to a centralized pipeline.
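
A minimal sketch of what one step of such a centralized pipeline might look like, using the Docker SDK for Python: a team builds its image locally, tags it for a central registry, and pushes it there for governance. The registry address, credentials, and image name are hypothetical placeholders, not Bosch's actual setup.

```python
# A minimal sketch, assuming the Docker SDK for Python.
import docker

client = docker.from_env()

# Authenticate against the centrally governed registry.
client.login(username="team-a-ci", password="********",
             registry="registry.example.com")

# Build the team's image from the local source tree and tag it
# for the central registry.
image, build_logs = client.images.build(
    path=".", tag="registry.example.com/team-a/orders-svc:1.4.2")

# Publish to the central registry, where IT can apply scanning,
# signing, and promotion policies.
for line in client.images.push("registry.example.com/team-a/orders-svc",
                               tag="1.4.2", stream=True, decode=True):
    print(line)
```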

Containers at the intelligent edge 

Carlat: There are use cases across the board, in all industry verticals: healthcare, manufacturing. We are seeing strong interest in adoption outside of the data center, and we call that the intelligent edge.

We see that containers, and containers-as-a-service, are joining more compute, data, and analytics at the edge. As we move forward, the same level of choice, agility, and security there is paramount. We see containers as a perfect complement, if you will, at the edge.

Gardner: Right; bringing down the necessary runtime for those edge apps -- but not any more than the necessary runtime. Let’s unpack that a little bit. What is it about container and edge devices, like an HPE Edgeline server, for example, that makes so much sense?

Junod: There is a broad spectrum on the edge. You will have things like remote offices and retail locations. You will also see things like the Industrial Internet of Things (IIoT). There you have very small devices for data ingest that feed into a distributed server, which then ultimately feeds into the core, or the cloud, to do large-scale data analytics. Together this provides real-time insights, and this is an area we have been partnering and working with some of our customers on right now.

Security is actually paramount because -- if you start thinking about the data ingest devices -- we are not talking about, “Oh, hey, I have 100 small offices.” We are talking about millions and millions of very small devices out there that need to run a workload. They have minimal compute resources and they are going to run one or two workloads to collect data. If not sufficiently secured, they can be risk areas for attack.

So, what's really important from a Docker perspective is the security; integrated security that goes from the core -- all the way to the edge. Our ability, from a software layer, to provide trusted transport and digital signatures and the locking down of the runtime along the way means that these tiny sensor devices have one container on them. And it's been encrypted and locked with keys that can’t be attacked.
Learn how to extend Docker containers across your entire enterprise.
That’s very important, because now if someone did attack, they could also start getting access into the network. So security is even more paramount as you get closer to the edge.
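
The signing and verification side of what Junod describes is handled by Docker Content Trust (Notary) in the Docker engine and CLI. The runtime lock-down side can be sketched with the Docker SDK for Python, as below; the image name and resource limits are hypothetical placeholders sized for a small edge device.

```python
# A minimal sketch, assuming the Docker SDK for Python.
import docker

client = docker.from_env()

container = client.containers.run(
    "registry.example.com/edge/sensor-ingest:1.0",  # hypothetical image
    detach=True,
    read_only=True,                   # immutable root filesystem
    cap_drop=["ALL"],                 # drop every Linux capability
    security_opt=["no-new-privileges"],
    mem_limit="64m",                  # hard memory cap
    pids_limit=32,                    # bound process count
    restart_policy={"Name": "on-failure", "MaximumRetryCount": 3},
)
print("Edge workload running:", container.name)
```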

Gardner: Any other forward-looking implications for your alliance? What should we be thinking about in terms of analyzing that data and bringing machine learning (ML) to the edge? Is there something that between your two companies will help facilitate that?

Carlat: The world of containers and agile cloud-native applications is not going away. When I think about the future, enterprises need to pivot. Yet change is hard for all enterprises, and they need help.

They are likely going to turn to trusted partners. HPE and Docker are perfectly aligned; we have been bellwethers in the industry, and we will be there to help on that journey.

Gardner: Yes, this seems like a long-term relationship. 

Listen to the podcast. Find it on iTunes. Get the mobile app.  Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Friday, June 15, 2018

Legacy IT evolves: How cloud choices like Microsoft Azure can conquer the VMware Tax

The next BriefingsDirect panel discussion explores cloud adoption strategies that can simplify IT operations, provide cloud deployment choice -- and that make the most total economic sense.

Many data center operators face a crossroads now as they consider the strategic implications of new demands on their IT infrastructure and the new choices that they have when it comes to a cloud continuum of deployment options. These hybrid choices span not only cloud hosts and providers, but also platform technologies such as containers, intelligent network fabrics, serverless computing, and, yes, even good old bare metal.

For thousands of companies, the evaluation of their cloud choices also impacts how they can help conquer the “VMware tax” by moving beyond a traditional server virtualization legacy.

The complexity of choice goes further because long-term decisions about technology must also include implications for long-term recurring costs -- as well as business continuity. As IT architects and operators seek to best map a future from a VMware hypervisor and traditional data center architecture, they also need to consider openness and lock-in.


Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Our panelists review how public cloud providers and managed service providers (MSPs) are sweetening the deal to transition to predictable hybrid cloud models. The discussion is designed to help IT leaders find the right trade-offs and the best rationale for making the strategic decisions for their organization's digital transformation.

The panel consists of David Grimes, Vice President of Engineering at Navisite; David Linthicum, Chief Cloud Strategy Officer at Deloitte Consulting; and Tim Crawford, CIO Strategic Advisor at AVOA. The discussion is moderated by BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Clearly, over the past decade or two, countless virtual machines have been spun up to redefine data center operations and economics. And as server and storage virtualization were growing dominant, VMware was crowned -- and continues to remain -- a virtualization market leader. The virtualization path broadened over time from hypervisor adoption to platform management, network virtualization, and private cloud models. There have been a great many good reasons for people to exploit virtualization and adopt more of a software-defined data center (SDDC) architecture. And that brings us to where we are today.

Dominance in virtualization, however, has not translated into an automatic path from virtualization to a public-private cloud continuum. Now, we are at a crossroads, specifically for the economics of hybrid cloud models. Pay-as-you-go consumption models have forced a reckoning on examining your virtual machine past, present, and future.


My first question to the panel is ... What are you now seeing as the top drivers for people to reevaluate their enterprise IT architecture path?

The cloud-migration challenge

Grimes: It's a really good question. As you articulated it, VMware radically transformed the way we think about deploying and managing IT infrastructure, but cloud has again redefined all of that. And the things you point out are exactly what many businesses face today, which is supporting a set of existing applications that run the business. In most cases they run on very traditional infrastructure models, but they're looking at what cloud now offers them in terms of being able to reinvent that application portfolio.

But that's going to be a multiyear journey in most cases. One of the things that I think about, as the next wave of transformation takes place, is how do we enable development in these new models -- such as containers and serverless -- and using all of the platform services of the hyperscale cloud. How do we bring those to the enterprise in a way that will keep them adjacent to the workloads? Separating the application from the data is very challenging.

Gardner: Dave, organizations would probably have it easier if they're just going to go from running their on-premises apps to a single public cloud provider. But more and more, we're quite aware that that's not an easy or even a possible shift. So, when organizations are thinking about the hybrid cloud model, and moving from traditional virtualization, what are some of the drivers to consider for making the right hybrid cloud model decision, where they can do both on-premises private cloud as well as public cloud?

Know what you have, know what you need

Linthicum: It really comes down to the profiles of the workloads, the databases, and the data that you're trying to move. And one of the things that I tell clients is that cloud is not necessarily something that's automatic. Typically, they are going to be doing something that may be even more complex than they have currently. But let's look at the profiles of the existing workloads and the data -- including security, governance needs, what you're running, what platforms you need to move to -- and that really kind of dictates which resources we want to put them on.


As an architect, when I look at the resources out there, I see traditional systems, I see private clouds, virtualization -- such as VMware -- and then the public cloud providers. And many times, the choice is going to be all four. And having pragmatic hybrid clouds -- which pair traditional systems with private and public clouds -- means multiple clouds at the same time. And so, this really becomes an analysis of how you're going to look at the existing as-is state. The to-be state is really just a functional matter of the business requirements that you see. So, it's a little easier than I think most people think, but the outcome is typically going to be more expensive and more complex than they originally anticipated.

Gardner: Tim Crawford, do people under-appreciate the complexity of moving from a highly virtualized on-premises, traditional data center to hybrid cloud?

Crawford: Yes, absolutely. Dave's right. There are a lot of assumptions that we take as IT professionals and we bring them to cloud, and then find that those assumptions kind of fall flat on their face. Many of the myths and misnomers of cloud start to rear their ugly heads. And that's not to say that cloud is bad; cloud is great. But we have to be able to use it in a meaningful way, and that's a very different way than how we've operated our corporate data centers for the last 20, 30, or 40 years. It's almost better if we forget what we've learned over the last 20-plus years and just start anew, so we don't bring forward some of those assumptions.

And I want to touch on something else that I think is really important here, which has nothing to do with technology but has to do with organization and culture, and some of the other drivers that go into why enterprises are leveraging cloud today. And that is that the world is changing around us. Our customers are changing, the speed in which we have to respond to demand and need is changing, and our traditional corporate data center stacks just aren't designed to be able to make those kinds of shifts.

And so that's why it’s going to be a mix of cloud and corporate data centers. We're going to be spread across these different modes like peanut butter in a way. But having the flexibility, as Dave said, to leverage the right solution for the right application is really, really important. Cloud presents a new model because our needs have not been able to be fulfilled in the past.

Gardner: David Grimes, application developers helped drive initial cloud adoption. These were new apps and workloads of, by, and for the cloud. But when we go to enterprises that have a large on-premises virtualization legacy -- and are paying high costs as a result -- how frequently are we seeing people move existing workloads into a cloud, private or public? Is that gaining traction now?

Lift and shift the workload

Grimes: It absolutely is. That's really been a core part of our business for a while now, certainly the ability to lift and shift out of the enterprise data center. As Dave said, the workload is the critical factor. You always need to understand the workload to know which platform to put it on. That's a given. With a lot of those existing legacy application stacks running in traditional infrastructure models, very often they get lifted and shifted into a like model -- but in a hosting provider's data center. That’s because many CIOs have a mandate to close down enterprise data centers and move to the cloud. But that does, of course, mean a lot of different things.

You mentioned the push by developers to get into the cloud, and really that was what I was alluding to in my earlier comments. Such a reinventing of the enterprise application portfolio has often been led by the development that takes place within the organization. Then, of course, there are all of the new capabilities offered by the hyperscale clouds -- all of them, but notably some of the higher-level services offered by Azure, for example. You're going to end up in a scenario where you've got workloads that best fit in the cloud because they're based on the services that are now natively embodied and delivered as-a-service by those cloud platforms.

But you're going to still have that legacy stack that still needs to leave the enterprise data center. So, the hybrid models are prevailing, and I believe will continue to prevail. And that's reflected in Microsoft's move with Azure Stack, of making much of the Azure platform available to hosting providers to deliver private Azure in a way that can engage and interact with the hyperscale Azure cloud. And with that, you can position the right workloads in the right environment.

Gardner: Now that we're into the era of lift and shift, let's look at some of the top reasons why. We will ask our audience what their top reasons are for moving off of legacy environments like VMware. But first let’s learn more about our panelists. David Grimes, tell us about your role at Navisite and more about Navisite itself.

Panelist profiles

Grimes: I've been with Navisite for 23 years, really most of my career. As VP of Engineering, I run our product engineering function. I do a lot of the evangelism for the organization. Navisite's a part of Spectrum Enterprise, which is the enterprise division of Charter. We deliver voice, video, and data services to the enterprise client base of Navisite, and also deliver cloud services to that same base. It's been a very interesting 20-plus years to see the continued evolution of managed infrastructure delivery models rapidly accelerating to where we are today.

Gardner: Dave Linthicum, tell us a bit about yourself, particularly what you're doing now at Deloitte Consulting.
It's been a very interesting 20-plus years to see the continued evolution of managed infrastructure delivery models.
Linthicum: I've been with Deloitte Consulting for six months. I'm the Chief Cloud Strategy Officer, the thought leadership guy, trying to figure out where the cloud computing ball is going to be kicked and what the clients are doing, what's going to be important in the years to come. Prior to that I was with Cloud Technology Partners. We sold that to Hewlett Packard Enterprise (HPE) last year. I’ve written 13 books. And I do the cloud blog on InfoWorld, and also do a lot of radio and TV. And the podcast, Dana.

Gardner: Yes, of course. You've been doing that podcast for quite a while. Tim Crawford, tell us about yourself and AVOA.

Crawford: After spending 20-odd years within the rank and file of the IT organization, also as a CIO, I bring a unique perspective to the conversation, especially about transformational organizations. I work with Fortune 250 companies, many of the Fortune 50 companies, in terms of their transformation, mostly business transformation. I help them explore how technology fits into that, but I also help them along their journey in understanding the difference between the traditional and transformational. Like Dave, I do a lot of speaking, a fair amount of writing and, of course, with that comes with travel and meeting a lot of great folks through my journeys.

Survey says: It’s economics

Gardner: Let's now look at our first audience survey results. I'd like to add that this is not scientific. This is really an anecdotal look at where our particular audience is in terms of their journey. What are their top reasons for moving off of legacy environments like VMware?

The top reason, at 75 percent, is a desire to move to a pay-as-you-go versus a cyclical CapEx model. So, the economics here are driving the move from traditional to cloud. They're also looking to get off of dated software and hardware infrastructure. A lot of people are running old hardware; it's not that efficient, it can be costly to maintain, and in some cases it is difficult or impossible to replace. There is a tie at 50 percent each between concern about the total cost of ownership, probably trying to get that down, and a desire to consolidate and integrate more apps and data -- seeking a transformation of their apps and data.

Coming up on the lower end of their motivations are complexity and support difficulties, and the developer preference for cloud models. So, the economics are driving this shift. That should come as no surprise, Tim, that a lot of people are under pressure to do more with less and to modernize at the same time. The proverbial changing of the wings of the airplane while keeping it flying. Is there any more you would offer in terms of the economic drivers for why people should consider going from a traditional data center to a hybrid IT environment?

Crawford: It's not surprising, and the reason I say that is this economic upheaval actually started about 10 years ago when we really felt that economic downturn. It caused a number of organizations to say, "Look, we don't have the money to be able to upgrade or replace equipment on our regular cycles."

And so instead of having a four-year cycle for servers, or a five-year cycle for storage, or in some cases as much as a 10-plus-year cycle for network -- they started kicking that can down the road. When the economic situation improved, rather than put money back into infrastructure, people started to ask, "Are there other approaches that we can take?" Now, at the same time, cloud was really beginning to mature and become a viable solution, especially for mid-size to large enterprises. And so, the combination of those two opened the door to a different possibility that didn't have to do with replacing the hardware in corporate data centers.
Instead of having a four-year cycle for servers or five-year cycle for storage, they started kicking the can down the road.

And then you have the third piece to that trifecta, which is the overall business demands. We saw a very significant change in customer buying behavior at the same time: people were looking for things now. We saw the uptick of Amazon use, and the move away from traditional retail, and that trend really kicked into gear around the same time. All of these together led to this shift in demand for a different kind of model -- looking at OpEx versus CapEx.
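
To ground that OpEx-versus-CapEx framing, here is a back-of-the-envelope sketch in Python. Every figure is a hypothetical placeholder chosen only to show the shape of the comparison, not real pricing.

```python
# A back-of-the-envelope sketch; all figures are hypothetical.
CAPEX = 400_000                 # upfront hardware refresh
ON_PREM_OPEX_PER_MONTH = 6_000  # power, space, support staff
CLOUD_PER_MONTH = 14_000        # pay-as-you-go equivalent footprint
LIFETIME_MONTHS = 48            # a four-year server cycle

on_prem_total = CAPEX + ON_PREM_OPEX_PER_MONTH * LIFETIME_MONTHS
cloud_total = CLOUD_PER_MONTH * LIFETIME_MONTHS

print(f"On-premises over {LIFETIME_MONTHS} months: ${on_prem_total:,}")
print(f"Cloud over {LIFETIME_MONTHS} months:       ${cloud_total:,}")
# $688,000 vs. $672,000 here: close enough that agility and skipping
# the refresh cycle, not raw cost, become the deciding factors.
```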

Gardner: Dave, you and I have talked about this a lot over the past 10 years, economics being a driver. But you don't necessarily always save money by going to cloud. To me, what I see in these results is not just seeking lower total cost -- but simplification, consolidation and rationalization for what enterprises do spend on IT. Does that make sense and is that reflected in your practice?

Savings, strategy and speed

Linthicum: Yes, it is, and I think that the primary reason for moving to the cloud has morphed in the last five years from saving money on CapEx and operations into the need for strategic value. That means gaining agility, the ability to scale your systems up as you need to, to adjust to the needs of the business in the quickest way -- and to keep up with the speed of change.

A lot of the Global 2000 companies out there are having trouble maintaining change within the organization, to keep up with change in their markets. I think that's really going to be the death of a thousand cuts if they don't fix it. They're seeing cloud as an enabling technology to do that.

In other words, with cloud they can have the resources they need, they can get to the storage levels they need, they can manage the data that they need -- and do so at a price point that typically is going to be lower than the on-premises systems. That's why they're moving in that direction. But like we said earlier, in doing so they're moving into more complex models. They're typically going to be spending a bit more money, but the value of IT -- in its ability to delight the business with new capabilities -- is going to be there. I think that's the core metric we need to consider.

Gardner: David, at Navisite, when it comes to cost balanced by the business value from IT, how does that play out in a managed hosting environment? Do you see organizations typically wanting to stick to what they do best, which is create apps, run business processes, and do data science, rather than run IT systems in and out of every refresh cycle? How is this shaking out in the managed services business?

Grimes: That's exactly what I'm seeing. Companies are really moving toward focusing on their differentiation. Running infrastructure has become almost like having power delivered to your data center. You need it, it's part of the business, but it's rarely differentiating. So that's what we're seeing.
Running infrastructure has become almost like having power delivered to your data center. You need it, but it's rarely differentiating.

One of the things in the survey results that does surprise me is the relatively low score for operations complexity and support difficulties. With the pace of technology innovation happening -- within VMware in the enterprise context, but certainly within the context of the cloud platforms, Azure in particular -- the skillsets needed to use those platforms, manage them effectively, and take the biggest advantage of them are in exceedingly high demand. Many organizations are struggling to acquire and retain that talent. That's certainly been my experience in dealing with my clients and prospects.

Gardner: Now that we know why people want to move, let's look at what it is that's preventing them from moving. What are the chief obstacles that are preventing those in our audience from moving off of a legacy environment like VMware?

There's more than just a technological decision here. Dell Technologies is the major controller of VMware, even with VMware being a publicly traded company. But Dell Technologies, in order to go private, had to incur enormous debt, still in the vicinity of $48 billion. There have been reports recently of a reverse merger, where VMware as a public company would take over Dell as a private company. The markets didn't necessarily go for that, and it creates a bit of confusion and concern in the market. So Dave, is this something IT operators and architects should concern themselves with when they're thinking about which direction to go?

Linthicum: Ultimately, we need to look at the health of the company we're buying hardware and software from in terms of their ability to be around over the next few years. The reality is that VMware, Dell, and [earlier Dell merger target] EMC are mega forces in terms of a legacy footprint in a majority of data centers. I really don't see any need to be concerned about the viability of that technology. And when I look at viability of companies, I look at the viability of the technology, which can be bought and sold, and the intellectual property can be traded off to other companies. I don't think the technology is going to go away, it's just too much of a cash cow. And the reality is, whoever owns VMware is going to be able to make a lot of money for a long period of time.

Gardner: Tim, should organizations be concerned in that they want to have independence as VMware customers and not get locked in to a hardware vendor or a storage vendor at the same time? Is there concern about VMware becoming too tightly controlled by Dell at some point?

Partnership prowess

Crawford: You always have to think about who it is that you're partnering with. These days when you make a purchase as an IT organization, you're really buying into a partnership, so you're buying into the vision and direction of that given company.

And I agree with Dave about Dell, EMC, and VMware in that they're going to be around for a long period of time. I don't think that's really the factor to be as concerned with. I think you have to look beyond that.

You have to look at what it is that your business needs, and how does that start to influence changes that you make organizationally in terms of where you focus your management and your staff. That means moving up the chain, if you will, and away from the underlying infrastructure and into applications and things closely tied to business advantage.

As you start to do that, you start to look at other opportunities beyond just virtualization. You start breaking down the silos, you start breaking down the components into smaller and smaller components -- and you look at the different modes of system delivery. That's really where cloud starts to play a role.

Gardner: Let's look now to our audience for what they see as important. What are the chief obstacles preventing you from moving off of a legacy virtualization environment? Again, the economics are quite prevalent in their responses.

By a majority, they are not sure that there are sufficient return on investment (ROI) benefits. They might be wondering why they should move at all. Fear of lock-in to a primary cloud model is also a concern. So, the economics and lock-in risk rank high -- not just from being stuck on a virtualization legacy, but also concern about moving forward. Maybe they're like the deer in the headlights.
You have to look at what it is that your business needs, and how does that start to influence changes that you make organizationally, of where you focus your management and your staff.

The third concern, a close tie, involves compliance, security, and regulatory restrictions on moving to the cloud. Complexity, and uncertainty that the migration process will be successful, are also of concern. They're worried about that lift and shift process.

They are less concerned about lack of support for moving from the C-Suite or business leadership, of not getting buy-in from the top. So … If it's working, don't fix it, I suppose, or at least don't break it. And the last issue of concern, very low, is that it’s still too soon to know which cloud choices are best.

So, it's not that they don't understand what's going on with cloud; they're concerned about risk. The complexity of staying is a concern -- but the complexity of moving is nearly as big of a concern. David, does anything in these results jump out at you?

Feel the fear and migrate anyway

Grimes: As for not being sure of the ROI benefits, that's been a common thread for quite some time in looking at these cloud migrations. But in our experience, what I've seen are clients choosing to move to a VMware cloud hosted by Navisite. They ultimately end up unlocking the business agility of their cloud, even if they weren't 100 percent sure going into it that they would be able to.

But time and time again, moving away from the enterprise data center, repurposing the spend on IT resources to become more valuable to the business -- as opposed to the traditional keeping the lights on function -- has played out on a fairly regular basis.

I agree with the audience and the response here around the fear of lock-in. And it's not just lock-in from a basic deployment infrastructure perspective, it's fear of lock-in if you choose to take advantage of a cloud’s higher-level services, such as data analytics or all the different business things that are now as-a-service. If you buy into them, you certainly increase your ability to deliver. Your own pace of innovation can go through the roof -- but you're often then somewhat locked in.

You're buying into a particular service model, a set of APIs, et cetera. It's a form of lock-in. It is avoidable if you want to build in layers of abstraction, but it's not necessarily the end of the world either. As with everything, there are trade-offs. You're getting a lot of business value in your own ability to innovate and deliver quickly, yes, but it comes at the cost of some lock-in to a particular platform.

Gardner: Dave, what I'm seeing here is people explaining why hybrid is important to them, that they want to hedge their bets. All or nothing is too risky. Does that make sense to you, that what these results are telling us is that hybrid is the best model because you can spread that risk around?

IT in the balance between past and future

Linthicum: Yes, I think it does say that. I live this on a daily basis in terms of ROI benefits and concern about not having enough, and also the lock-in model. And the reality is that when you get to an as-is architecture state, it's going to be a variety -- as we mentioned earlier -- of resources that we're going to leverage.

So, this is not all about taking traditional systems -- and the application workloads around traditional systems -- and then moving them into the cloud and shutting down the traditional systems. That won't work. This is about a balance, or modernization, of technology. And if you look at that, all bets are on the table -- including traditional, private cloud, public cloud, and hybrid-based computing. Typically, the best path to success is looking at all of that. But like I said, the solution is really going to depend on the requirements of the business and what we're looking at.

Going forward, these kinds of decisions are falling into a pattern, and I think that we're seeing that this is not necessarily going to be pure-cloud play. This is not necessarily going to be pure traditional play, or pure private cloud play. This is going to be a complex architecture that deals with a private and public cloud paired with traditional systems.

And so, people who do want to hedge their bets will do that around making the right decisions that they leverage the right resources for the appropriate task at hand. I think that's going to be the winning end-point. It's not necessarily moving to the platforms that we think are cool, or that we think can make us more money -- it's about localization of the workloads on the right platforms, to gain the right fit.

Gardner: From the last two survey result sets, it appears incumbent on legacy providers like VMware to try to get people to stay on their designated platform path. But at the same time, because of this inertia to shift, because of these many concerns, the hyperscalers like Google Cloud, Microsoft Azure, and Amazon Web Services also need to sweeten their deals. What are these other cloud providers doing, David, when it comes to trying to assuage the enterprise concerns of moving wholesale to the cloud?

It's not moving to the platforms that we think are cool, or that can make us money, it's about localization of the workloads on the right platforms, to get the right fit.
Grimes: There are certainly those hyperscale players, but there are also a number of regional public cloud players in the form of the VMware partner ecosystem. And I think when we talk about public versus private, we also need to make a distinction between public hyperscale and public cloud that still could be VMware-based.

I think one interesting thing that ties back to my earlier comments is when you look at Microsoft Azure and their Azure Stack hybrid cloud strategy. If you flip that 180 degrees and consider the VMware on AWS strategy, I think we'll continue to see that type of thing play out going forward. Both of those approaches reflect the need to deliver the legacy enterprise workload in a way that is adjacent -- from a technology-equivalence perspective as well as a latency perspective. One thing that's often overlooked is the need to examine hybrid cloud deployment models in light of the acceptable latency between applications that are inherently integrated. That can often be a deal-breaker for a successful implementation.

What we'll see is this continued evolution of ensuring that we can solve what I see as a decade-forward problem. And that is, as organizations continue to reinvent their applications portfolio they must also evolve the way that they actually build and deliver applications while continuing to be able to operate their business based on the legacy stack that's driving day-to-day operations.

Moving solutions

Gardner: Our final survey question asks: What are your current plans for moving apps and data from a legacy environment like VMware in a traditional data center?

And two strong answers out of the offerings come out on top: public clouds such as Microsoft Azure and Google Cloud, and a hybrid or multi-cloud approach. So again, they're looking at the public clouds as a way to get off of their traditional environment -- but they're not looking for just one cloud, or a lock-in; they're looking at a hybrid or multi-cloud approach.

Coming up zero, surprisingly, is VMware on AWS, which you just mentioned, David. Private cloud hosted and private cloud on-premises both come up at about 25 percent, along with no plans to move. So, staying on-premises in a private cloud has traction for some, but for those that want to move to the dominant hyperscalers, a multi-cloud approach is clearly the favorite. 

Linthicum: I thought there would be a few that would pick VMware on AWS, but it looks like the audience doesn't necessarily see that as the solution. Everything else is not surprising; it's aligned with what we see in the marketplace right now. Public cloud movement to Azure and Google Cloud, and the movement to complex clouds like hybrid and multi-cloud, seem to be the two trends worth watching right now in the space, and this is reflective of that.

Gardner: Let's move our discussion on. It's time to define the right trade-offs and rationale when we think about these taxing choices. We know that people want to improve, they don't want to be locked in, they want good economics, and they're probably looking for a long-term solution.

Now that we've mentioned it several times, what is it about Azure and Azure Stack that provides appeal? Microsoft’s cloud model seems to be differentiated in the market, by offering both a public cloud component as well as an integrated – or adjacent -- private cloud component. There’s a path for people to come onto those from a variety of different deployment histories including, of course, a Microsoft environment -- but also a VMware environment. What should organizations be thinking about, what are the proper trade-offs, and what are the major concerns when it comes to picking the right hybrid and multi-cloud approach?

Strategic steps on the journey

Grimes: At the end of the day, it's ultimately a journey and that journey requires a lot of strategy upfront. It requires a lot of planning, and it requires selecting the right partner to help you through that journey.

Because whether you're planning an all-in on Azure, or an all-in on Google Cloud, or you want to stay on VMware but get out of the enterprise data center, as Dave has mentioned, the reality is everything is much more complex than it seems. And to maximize the value of the models and capabilities that are available today, you're almost necessarily going to end up in a hybrid deployment model -- and that means you're going to have a mix of technologies in play, a mix of skillsets required to support them.
Whether you're planning on an all-Azure or all-Google, or you want to stay on VMware, it's about getting out of the enterprise datacenter, and the reality is far more complex than it seems.

And so I think one of the key things folks should do is consider carefully how they partner, regardless of where they are in that journey. Whether they are on step one or step three, continuing that journey is going to depend critically on selecting the right partner to help them.

Gardner: Dave, when you're looking at risk versus reward, cost versus benefits, when you're wanting to hedge bets, what is it about Microsoft Azure and Azure Stack in particular that help solve that? It seems to me that they've gone to great pains to anticipate the state of the market right now and to try to differentiate themselves. Is there something about the Microsoft approach that is, in fact, differentiated among the hyperscalers?

A seamless secret

Linthicum: The paired private and public cloud, with similar infrastructures and similar migration paths -- dynamic migration paths, meaning you could move workloads between them; at least this is the way that it's been described -- is going to be unique in the market. Kind of the dirty little secret.

It's going to be very difficult to port from a private cloud to a public cloud, because most private clouds are typically not AWS and not Google -- those providers don't make private clouds. Therefore, you have to port your code between the two, just as you've had to port systems in the past. And the normal issues about refactoring and retesting, and all the other things, really come home to roost.

But Microsoft could have a product that provides a bit more of a seamless capability of doing that. And the great thing about that is I can really localize on whatever particular platform I'm looking at. And if I, for example, “mis-localize” or I misfit, then it's a relatively easy thing to move it from private to public or public to private. And this may be at a time where the market needs something like that, and I think that's what is unique about it in the space.
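
As a hedged sketch of that seamlessness, the 2018-era Azure SDK for Python lets the same management client target public Azure or an Azure Stack endpoint by changing only the base URL and credentials. The endpoint, tenant, and subscription values below are hypothetical placeholders.

```python
# A minimal sketch using the 2018-era Azure SDK for Python
# (azure-mgmt-resource).
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.resource import ResourceManagementClient

creds = ServicePrincipalCredentials(
    client_id="<app-id>", secret="<secret>", tenant="<tenant-id>")

# Public Azure: the client uses the default ARM endpoint.
public = ResourceManagementClient(creds, "<subscription-id>")

# Azure Stack: the same client, pointed at the on-premises
# ARM endpoint exposed by the Azure Stack deployment.
private = ResourceManagementClient(
    creds, "<subscription-id>",
    base_url="https://management.local.azurestack.external")

# The code that follows is identical for either target.
for client in (public, private):
    for group in client.resource_groups.list():
        print(group.name)
```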

Gardner: Tim, what do you see as some of the trade-offs, and what is it about a public, private hybrid cloud that's architected to be just that -- that seemingly Microsoft has developed? Is that differentiating, or should people be thinking about this in a different way?

Crawford: I actually think it's significantly differentiating, especially when you consider the complexity that exists within the mass of enterprises. You have different needs, and not all of those needs can be serviced by public cloud, and not all of those needs can be serviced by private cloud.

There's a model that I use with clients to go through this, and it's something that I used when I led IT organizations. When you start to pick apart these pieces, you start to realize that some of your components are well-suited for software as a service (SaaS)-based alternatives, some of the components and applications and workloads are well-suited for public cloud, some are well-suited for private cloud.

A good example of that is if you have sovereignty issues, or compliance and regulatory issues. And then you'll have some applications that just aren't ready for cloud. You've mentioned lift and shift a number of times, and for those that have been down that path of lift and shift, they've also gotten burnt by that, too, in a number of ways.

And so, you have to be mindful of what applications go in what mode. The fact that you have a product like Azure Stack being similar to Azure plays pretty well for an enterprise that's thinking about skillsets, development cycles, and architectures -- and not having to create, as Dave was mentioning, one architecture for private cloud and a completely different one for public cloud. And if you get to a point where you want to move an application or workload, you're not having to completely redo it all over again. So, I think that Microsoft combination is pretty unique, and it will be really interesting for the average enterprise.

Gardner: From the managed service provider (MSP) perspective, at Navisite you have a large and established hosted VMware business, and you’re helping people transition and migrate. But you're also looking at the potential market opportunity for an Azure Stack and a hosted Azure Stack business. What is it for the managed hosting provider that might make Microsoft's approach differentiated?

A full-spectrum solution

Grimes: It comes down to what both Dave and Tim mentioned. Having a like stack that can be deployed in a private capacity -- which also, by the way, affords the ability to use bare metal adjacency -- is appealing. We haven't talked a lot about bare metal, but it is something that we see in practice quite often. There are bare metal workloads that need to be very adjacent, i.e., LAN-adjacent, to the virtualization-friendly workloads.

Being able to combine all three of those things is what makes Azure Stack attractive to a hosting provider such as Navisite. With it, we can solve the full spectrum of the client's needs -- covering bare metal, private cloud, and hyperscale public cloud -- and really in a seamless way, which is the key point.

Gardner: It's not often you can be that many things to that many people, given the heterogeneity of the past and the difficult choices of the present.

We have been talking about these many cloud choices in the abstract. Let's now go to a concrete example: an organization called Ceridian. Tell us how they solved their requirements problems.

Grimes: Ceridian is a global human capital management company, global being a key point. They are growing like gangbusters and have been with Navisite for quite some time. It's been a very long journey.

But one thing about Ceridian is that they have had a cloud-first strategy. They embraced the cloud very early. A lot of the barriers to entry we have seen over the years, they looked at as opportunities, which I find very interesting.

Requirements around security and compliance are critical to them, but they also recognized that a provider focused on a very small set of IT services -- delivering managed infrastructure with security and compliance -- is likely to be able to do that at least as effectively as, if not more effectively than, doing it in-house, and at a competitive and compelling price point as well.

So their challenges really revolved around all the drivers to adopting cloud that we've talked about here today. It's about enabling business agility. With the growth they've experienced, they've needed to react quickly and deploy quickly, and to leverage all the things that virtualization -- and now cloud -- enables for enterprises. But again, as I mentioned before, they worked closely with a partner to maximize the value of the technologies, to ensure we're meeting their security and compliance needs, and to deliver everything from a managed infrastructure perspective.

Overcoming geographical barriers

One of the core challenges that came with that growth was a need to expand into geographies where Navisite doesn't currently operate hosting facilities. In particular, they needed to expand into Australia. And so, what we were able to do through our partnership with Microsoft was deliver the managed infrastructure to them in a similar way.

This is actually an interesting use case in that they're running a VMware-based cloud in our data center, but we were able to expand them into a managed, Azure-delivered cloud served locally out of Australia. One thing we didn't touch on today -- but that is a driver in many of these decisions for global organizations -- is that data sovereignty and locality regulations are becoming increasingly important. Certainly, Microsoft is expanding the Azure platform, and their presence in Australia has enabled us to deliver that for Ceridian.

As I think about the key takeaways and learnings from this example, first, Ceridian had a very clear, very well-thought-out, cloud-first strategy. You mentioned it earlier, Dana: that really enables them to keep their focus on the applications, because that's their bread and butter; that's how they differentiate.

By partnering, they don't have to worry about keeping the lights on and can instead focus on the application. Second, they're a global organization, and so they have global delivery needs driven by data sovereignty regulations. And third -- probably most important -- they selected a partner able to bring to bear the expertise and skillsets that are difficult for enterprises to recruit and retain. As a result, they were able to take advantage of the different infrastructure models we're delivering for them to support their business.

Gardner: We're now going to go to our question and answer portion. Kristen Allen of Navisite is moderating our Q and A section.

Bare metal and beyond

Kristen Allen: We have some very interesting questions. The first one ties into a conversation you were just having, "What are the ROI benefits to moving to bare metal servers for certain workloads?"

Grimes: Not all software licensing is yet virtualization-friendly, or at least virtualization-platform-agnostic, and so there are really two things that play into the selection of bare metal, in my experience. There is a model of bare metal computing -- small, cartridge-based computers -- that is very specific to certain workloads. But when we talk in more general terms, for a typical enterprise workload it really revolves around either software licensing that's incompatible with some of the cloud deployment models, or a belief that there's a performance requirement that demands bare metal -- though in practice I think that's more optics than reality. Those are the two things that typically drive bare metal adoption, in my experience.

Linthicum: Ultimately, people want direct access to the underlying platforms. If there's some performance reason, or some security reason, or a need for direct access to some of the input-output systems, we do see these kinds of one-offs for bare metal. I call them special-needs applications. I don't see it as something that's going to be widely adopted, but from time to time it's needed, and the capabilities are there depending on where you want to run it.

Allen: Our next question is, "Should there be different thinking for data workloads versus application workloads, and how should they best be integrated in a hybrid environment?"

Linthicum: Ultimately, the compute aspect of an application and the data aspect of that application really should be decoupled. Then, if you want to, you can assemble them on different platforms. I would typically expect to place them either all on public or all on private, but you can certainly put one on private and the other on public -- or vice versa -- and link them that way.

As workloads migrate forward, they are getting even more complex. There are some application workloads I've seen -- and developed -- where the database is partitioned across the private cloud and the public cloud for disaster recovery (DR) or performance purposes. So it's really up to you, as the architect, to decide where to place the data in relation to the workload. Typically, it's a good idea to place them as close to each other as possible so they have the highest bandwidth for communicating with each other. However, it's not strictly necessary, depending on what the application is doing.
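Here is a simplified sketch of that decoupling, using invented class names rather than any specific product's API: the compute tier talks to storage through an interface, so the data can sit on a private cloud with a public-cloud replica kept for DR.

from typing import Protocol

class DataStore(Protocol):
    def read(self, key: str) -> str: ...
    def write(self, key: str, value: str) -> None: ...

class InMemoryStore:
    # Stand-in for a real database running on a private or public cloud.
    def __init__(self, name: str):
        self.name, self._data = name, {}
    def read(self, key: str) -> str:
        return self._data[key]
    def write(self, key: str, value: str) -> None:
        self._data[key] = value

class PartitionedStore:
    # Writes go to the private primary and are copied to the public replica;
    # reads fall back to the replica if the primary is unreachable.
    def __init__(self, primary: DataStore, replica: DataStore):
        self.primary, self.replica = primary, replica
    def write(self, key: str, value: str) -> None:
        self.primary.write(key, value)
        self.replica.write(key, value)  # asynchronous in practice
    def read(self, key: str) -> str:
        try:
            return self.primary.read(key)
        except Exception:               # primary down -- the DR path
            return self.replica.read(key)

store = PartitionedStore(InMemoryStore("private-dc"), InMemoryStore("public-cloud"))
store.write("order-42", "shipped")
print(store.read("order-42"))

In a real system the replication would be asynchronous and the failover logic far more careful, but the separation of concerns is the point.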

Gardner: David, maybe organizations need to place their data in a certain jurisdiction but might want to run their apps out of a data center somewhere else for performance and economics?

Grimes: The data sovereignty requirement is something we touched on; it's becoming increasingly important, and it's increasingly a driver in deciding where to place the data.

Just following on Dave's comments, I agree 100 percent. If you have the opportunity to architect a new application, there are some really interesting choices to be made around data placement and network placement, and decoupling them is absolutely the right strategy.

I think the challenge many organizations face is having a mandate to close down the enterprise data center and move to the "cloud." Of course, we know that "cloud" means a lot of different things, but doing that in a legacy application environment presents some unique challenges in terms of actually being able to sufficiently decouple data and applications.

I'm curious, Dave, whether you've had any successes in meeting that challenge?

Linthicum: Yes. It depends on the application workload, how flexible the applications are, how the information is communicated between the systems, and also the security requirements. So it's one of those obnoxious consulting responses -- "it depends" -- as to whether we can make it work. But it is a legitimate architectural pattern that I've seen before, and we've used it.

Allen: Okay. How do you meet and adapt to Health Insurance Portability and Accountability Act of 1996 (HIPAA) requirements and still maintain stable connectivity for a small business?

Grimes: HIPAA, like many governance programs, is a very large and co-owned responsibility. From our perspective at Navisite, part of Spectrum Enterprise, we have the unique capability of delivering both the network services and the cloud services in an integrated way, which addresses the particular question around stable connectivity. But ultimately, HIPAA is a blended responsibility model: the infrastructure provider, the network provider, and the provider managing up to whatever layer of the application stack will each have certain obligations, and the client retains some obligations as well.
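As a rough illustration of that blended model -- the exact split is contractual and varies with how far up the stack the provider manages -- a simplified responsibility map in Python might look like this; the layers and assignments here are illustrative only.

RESPONSIBILITY = {
    "network connectivity":          "provider",
    "physical infrastructure":       "provider",
    "hypervisor and managed OS":     "provider",
    "application configuration":     "shared",
    "PHI handling in the app":       "client",
    "workforce training and policy": "client",
}

for layer, owner in RESPONSIBILITY.items():
    print(f"{layer:32s} -> {owner}")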

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Navisite.
