Tuesday, October 29, 2019

How Unisys and Dell EMC head off backup storage cyber security vulnerabilities

https://www.unisys.com/offerings/security-solutions

 The next BriefingsDirect data security insights discussion explores how data -- from one end of its life cycle to the other -- needs new protection and a means for rapid recovery. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.


Stay with us as we examine how backup storage especially needs to be made safe and secure if companies want to quickly right themselves from an attack. To learn more, please welcome Andrew Peters, Stealth Industry Director at Unisys, and George Pradel, Senior Systems Engineer at Dell EMC. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What’s changed in how data is being targeted by cyber attacks? How are things different from three years ago?

Peters: Well, one major thing that’s changed in the recent past has been the fact that the bad guys have found out how to monetize and extort money from organizations to meet their own ends. This has been something that has caught a lot of companies flatfooted -- the sophistication of the attacks and the ability to extort money out of organizations.

Gardner: George, why does all data -- from one end of its life cycle to the other -- now need to be reexamined for protection?

Pradel: Well, Andrew brings up some really good points. One of the things we have seen out in the industry is ransomware-as-a-service. Folks can just dial that in. There are service level agreements (SLAs) on it. So everyone’s data now is at risk.

Another thing we have seen with some of these attacks is that these people are getting a lot smarter. As soon as they go in to attack a customer, where do they go first? They go for the backups. They want to get rid of those, because that's like 3D chess, where you are playing one step ahead. So things have changed quite a bit, Dana.

Peters: Yes, it’s really difficult to put the squeeze on an organization knowing that they can recover themselves with their backup data. So, the heat is on the bad guys to go after the backup systems and pollute that with their malware, just to keep companies from having the capability to recover themselves.

Gardner: And that wasn’t the case a few years ago?

Pradel: The attacks were so much different a few years ago. They were what we call script kiddie attacks, where you basically get some malware or maybe you do a denial-of-service attack. But now these are programmatized, and the big thing about that is if you are a target once, chances are really good that the thieves are just going to keep coming back to you, because it’s easy money, as Andrew pointed out.

Gardner: How has the data storage topology changed? Are organizations backing up differently than they did a few years ago as well? We have more cloud use, we have hybrid, and different strategies for managing de-dupe and other redundancies. How has the storage topology and landscape changed in a way that affects this equation of being secure end to end?

The evolution of backup plans 

Pradel: Looking at how things have changed over the years, we started out with legacy systems, the physical systems that many of us grew up with. Then virtualization came into play, and so we had to change our backups. And virtualization offered up some great ways to do image-level backups and such.

Now, the big deal is cloud. Whether it's one of the public cloud vendors, or a private cloud, how do we protect that data? Where is our data residing? Privacy and security are now part of the discussion when creating a hybrid cloud. This creates a lot of extra confusion -- and confusion is what thieves zero in on.

We want to make sure that no matter where that data resides, it's protected. We want to provide a pathway for bringing back the data, either through an air gap or via one of our other technologies that helps keep the data in a place that allows for recoverability. Recoverability is the number one thing here, but it definitely has changed in these last few years.

Gardner: Andrew, what do you recommend to customers who may have thought that they had this problem solved? They had their storage, their backups, they protected themselves from the previous generations of security risk. When do you need to reevaluate whether you are secure enough?

Stay prepared 

Peters: There are a few things to take into consideration. One, they should have an operation that can recover their data and bring their business back up and running. You could get hit with an attack that turns into a smoking hole in the middle of your data center. So how do you bring your organization back from that without having policies, guidance, a process and actual people in place in the systems to get back to work?

Another thing to consider is the efficacy of the data. Is it clean? If you are backing up data that is already polluted with malware, guess what happens when you bring it back out and you recover your systems? It rehydrates itself within your systems and you still have the same problem you had before. That’s where the bad guys are paying attention. That’s what they want to have happen in an organization. It’s a hand they can play.

If the malware can still come out of the backup systems and rehydrate itself and re-pollute the systems when an organization is going through its recovery, it’s not only going to hamper the business and the time to recovery, and cost them, it’s also going to force them to pay the ransoms that the bad guys are extorting.

Gardner: And to be clear, this is the case across both the public and the private sector. We are hearing about ransomware attacks in lots of cities and towns. This is an equal opportunity risk, isn’t it?

Peters: Malware and bad guys don’t discriminate.

Pradel: You are exactly right about that. One of the customers that I have worked with recently in a large city got hit with a ransomware attack. Now, one of the things about ransomware attacks is that they typically want you to pay in bitcoin. Well, who has $100,000 worth of bitcoin sitting around?

But let's take a look at why it's so important to eliminate these types of attacks. When a government is attacked, one of the problems is that chaos ensues. In one particular situation, police officers in their cars were not able to pull up license plates on the computer to check on the cars they were pulling over, to see if the driver had a couple of bad tickets or perhaps was wanted for some reason. And so it puts all of those officers into a very dangerous situation.

That’s one tiny example of how these things can proliferate. And like you said, whether it’s public sector or private sector, if you are a soft target, chances are at some point you are going to get hit with ransomware.

Secure the perimeter and beyond 

Gardner: What are we doing differently in terms of the solutions to head this off, especially to get people back up and running and to make sure that they have clean and usable data when they do so?

Peters: A lot of security had been predicated on the concept of a perimeter, something where we can put up guards, gates, and guns, and dig a moat. There is an inside and an outside, and it's generally recognized today that that doesn't really exist.

And so, one of the new moves in security is to defend the endpoint, the application, and to do that using a technology called micro-segmentation. It’s becoming more popular because it allows us to have a security perimeter and a policy around each endpoint. And if it’s done correctly, you can scale to hundreds to thousands to hundreds of thousands, and potentially millions of endpoint devices, applications, servers and virtually anything you have in an environment.


And so that’s one big change: Let’s secure the endpoint, the application, the storage, and each one comes with its own distinct security policy.

Gardner: George, how do you see the solutions changing, perhaps more toward the holistic infrastructure side and not just the endpoint issues?

Pradel: One of the tenets that Andrew related to is called security by obscurity. The basic tenet is, if you can't see it, it's much safer. Think about a safe in your house. If the safe is back behind the bookcase and you are the only person who knows it's there, that's an extra level of security. Well, we can do that with technology.

So you are seeing a lot of technologies being employed. Many of them are not new types of security technologies. We are going back to what’s worked in the past and building some of these new technologies on that. For example, we add on automation, and with that automation we can do a lot of these things without as much user intervention, and so that’s a big part of this.

Incidentally, if any type of security that you are using has too much user intervention, then it’s very hard for the company to cost-justify those types of resources.

Gardner: Something that isn’t different from the past is having that Swiss Army knife approach of multiple layers of security. You use different tools, looking at this as a team sport where you want to bring as many solutions as possible to bear on the problem.

How have Unisys and Dell EMC brought different strengths together to create a whole greater than the sum of the parts?

Hide the data, so hackers can’t seek

Peters: One thing that’s fantastic that Dell has done is that they have put together a Cyber Recovery solution so when there is a meltdown you have gold copies of critical data required to reestablish the business and bring it back up and get into operation. They developed this to be automated, to contain immutable copies of data, and to assure the efficacy of the data in there.

Now, they have set this stuff up with air gapping, so it is virtually isolated from any other network operations. The bad guys hovering around in the network have a terrible time of trying to even touch this thing.
Unisys put what we call a cryptographic wrapper around that using our micro-segmentation technology called Stealth. This creates a cryptographic air gap that virtually disappears that vault and its recovery operations from anything else in the network, if they don’t have a cryptographic key. If they have a cryptographic key that was authorized, they could talk to it. If they don’t, they can’t. So any bad guys and malware can’t see it. If they can’t see, they can’t touch, and they can’t hack. This then turns into an extraordinarily secure means to recover an organization’s operations.
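As a conceptual sketch only -- with hypothetical names, not the Stealth API -- the key-based visibility idea can be modeled like this: two endpoints can exchange traffic only if they share an authorized community-of-interest key, so an endpoint without the key effectively cannot see the vault at all.

```python
# Illustrative sketch of "no shared key, no visibility" micro-segmentation.
# Names and structures are hypothetical; this is not the Unisys Stealth implementation.
from dataclasses import dataclass

@dataclass(frozen=True)
class Endpoint:
    name: str
    coi_keys: frozenset  # community-of-interest keys this endpoint holds

def can_communicate(a: Endpoint, b: Endpoint) -> bool:
    """Traffic is allowed only when both endpoints share an authorized key."""
    return bool(a.coi_keys & b.coi_keys)

vault = Endpoint("cyber-recovery-vault", frozenset({"recovery-coi"}))
recovery_admin = Endpoint("recovery-workstation", frozenset({"recovery-coi"}))
compromised_host = Endpoint("infected-laptop", frozenset({"corporate-coi"}))

assert can_communicate(recovery_admin, vault)         # authorized path works
assert not can_communicate(compromised_host, vault)   # the vault stays "dark"
```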

Gardner: The economics of this is critical. How does your technology combination take the economic incentive away from these nefarious players?


Pradel: Number one, you have a way to be able to recover from this. All of a sudden the bad guys are saying, “Oh, shoot, we are not going to get any money out of these guys.”

You are not going to be a constant target. They are going to go after your backups. Unisys Stealth can hide the targets that these people go after. Once you have this type of a Cyber Recovery solution in place, you can rest a lot easier at night.

As part of the Cyber Recovery solution, we actually expect malware to get into the Cyber Recovery vault. And people shake their head and they go, “Wait, George, what do you mean by that?”

Yes, we want to get malware into the Cyber Recovery vault. Then we have ways to do analytics to see whether our point-in-time copies are good. That way, when we are doing that restore, as Andrew talked about earlier, we are restoring a nice, clean environment back to the production environment.

Recovery requires commitment, investment 

So, these types of solutions are an extra expense, but you have to weigh the risks for your organization and factor in what it really costs if you have a cyber recovery incident.

Additionally, some people may not be totally versed on the difference between a disaster recovery situation and a cyber recovery situation. A disaster recovery may be from some sort of a physical problem, maybe a tornado hits and wipes out a facility or whatever. With cyber recovery, we are talking about files that have been encrypted. The only way to get that data back -- and get back up and running -- is by employing some sort of a cyber recovery solution, such as the Unisys and Dell EMC solution.

Gardner: Is this tag team solution between Unisys and Dell EMC appropriate and applicable to all kinds of business, including cloud providers or managed service providers?

Peters: It’s really difficult to measure the return on investment (ROI) in security, and it always has been. We have a tool that we can use to measure risk, probability, and financial exposure for an organization. You can actually use the same methodologies that insurance companies use to underwrite for things like cybersecurity and virtually anything else. It’s based on the reality that there is a strong likelihood that there is going to be a security breach. There is going to be perhaps a disastrous security breach, and it’s going to really hurt the organization.

Plan on the fact that it’s probably going to happen. You need to invest in your systems and your recovery. If you think that you can sustain a complete meltdown on your company and go out of operation for weeks to months, then you probably don’t need to put money into it.

You need to understand how exposed you potentially are, and the fact that the bad guys are staring at the low-hanging fruit -- which may be state governments, cities, or other organizations that are less protected.

The fact is, the bad guys are extraordinarily patient. If your payoff is in the tens of millions of dollars, you might spend, as the bad guys did with Sony, years mapping systems, learning how an operation works, and understanding their complete operations before you actually take action, and in potentially the most disastrous way possible.

So it's hard to put a number on that. An organization will have to decide how much they have to lose, how much they have at risk, and what the probability is that they are actually going to get hit with an attack.

Gardner: George, also important to where this is the right fit are automation and skills. What sort of organizations typically will go at this, and what skills are required?

Automate and simplify 

Pradel: That’s been the basis for our Cyber Recovery solution. We have written a number of APIs to be able to automate different pieces of a recovery situation. If you have a cyber recovery incident, it’s not a matter of just, “Okay, I have the data, now I can restore it.” We have a lot of experts in the field. What they do is figure out exactly where the attack came from, how it came in, what was affected, and those types of things.

We make it as simple as possible for the administrators. We have done a lot of work creating APIs that automate items such as recovering backup servers. We take point-in-time copies of the data. I don't want to go into it too deeply, but our Data Domain technology is the basis for this. And the reason it's important to note is that the replication we do is based upon our variable-length deduplication.

Now, that may sound a little gobbledygook, but what that means is that we have the smallest replication times that you could have for a certain amount of data. So when we are taking data into the Cyber Recovery vault, we are reducing what’s called our dwell time. This is the area where you would have someone that could see that you had a connection open.
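As a rough illustration of why variable-length deduplication keeps replication windows short, here is a toy content-defined chunking sketch: a small edit in the middle of a dataset changes only a handful of chunks, so only those chunks need to cross the wire. This is not Data Domain's algorithm, just the general idea under simplified assumptions.

```python
# Toy content-defined chunking: split data where a hash of the trailing WINDOW bytes
# hits a boundary condition, then replicate only chunks not already seen.
import hashlib
import random

WINDOW = 16
MASK = 0x3F  # boundary on average every ~64 bytes in this toy example

def chunk(data: bytes):
    chunks, start = [], 0
    for i in range(WINDOW, len(data)):
        h = int.from_bytes(hashlib.sha1(data[i - WINDOW:i]).digest()[:4], "big")
        if h & MASK == 0 and i - start >= WINDOW:
            chunks.append(data[start:i])
            start = i
    chunks.append(data[start:])
    return chunks

def new_chunks(previous: bytes, current: bytes):
    """Return only the chunks of `current` that the previous copy did not contain."""
    seen = {hashlib.sha256(c).digest() for c in chunk(previous)}
    return [c for c in chunk(current) if hashlib.sha256(c).digest() not in seen]

random.seed(0)
old = bytes(random.getrandbits(8) for _ in range(8000))
new = old[:4000] + b"CHANGED" + old[4000:]   # small edit in the middle
print(f"{len(new_chunks(old, new))} of {len(chunk(new))} chunks need replication")
```

Because chunk boundaries are defined by content rather than fixed offsets, the chunks downstream of the edit realign quickly, which is the property that keeps replication times and the connection "dwell time" small.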
But a big part of this is that, on a day-to-day basis, I don't have to be concerned. I don't have a whole team of people maintaining this Cyber Recovery vault. Typically, our customers already understand how our base technology works, so that part is very straightforward. And what we have is automation; we have policies set up in the Cyber Recovery vault that will, on a regular basis, pull in the data -- whatever has changed in the production environment -- typically once a day.

And as a rule of thumb, for people who might be thinking, "This sounds really interesting, but how much data would I put in this?" -- typically 10 to 15 percent of a customer's production environment might go into the Cyber Recovery vault. So we want to make this as simple as possible, and we want to automate as much as possible.
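For illustration, a daily-sync policy of that kind could be captured in code roughly as follows. The field names are invented for this sketch; they are not the actual Cyber Recovery configuration schema.

```python
# Hypothetical policy record for a daily vault sync; field names are illustrative only.
from dataclasses import dataclass

@dataclass
class VaultSyncPolicy:
    name: str
    source_datasets: list          # the ~10-15% of production deemed business-critical
    sync_schedule_cron: str        # run during the brief window the vault is reachable
    retention_copies: int          # how many point-in-time copies to keep in the vault
    lock_after_sync: bool = True   # close the replication path once the copy lands

policy = VaultSyncPolicy(
    name="critical-apps-daily",
    source_datasets=["erp-db", "ad-gold-image", "payment-app-backups"],
    sync_schedule_cron="0 2 * * *",   # 02:00 daily
    retention_copies=30,
)
```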

And on the other side, when there is an incident, we want to be able to also automate that part because that is when all heck is going on. If you’ve ever been involved in one of those situations, it’s not always your clearest thinking moment. So automation is your best friend and can help you get back up and running as quickly as possible.

Gardner: George, run us through an example, if you would, of how this works in the real-world.

One step at a time for complete recovery 

Pradel: What will happen is that at some point somebody clicks on that doggone attachment that was on that e-mail that had a free trip to Hawaii or something and it had a link to some ransomware.

Once the security folks have determined that there has been an attack, sometimes it’s very obvious. There is one attack where there is a giant security skeleton that comes up on your screen and basically says, “Got you.” It then gives instructions on how you would go about sending them the money so that you can get your data back.


However, sometimes it's not quite so obvious. Let's say your security folks have determined there has been an attack. The first thing you would want to do is access the cyber recovery environment -- the Cyber Recovery vault protected with Stealth. You would go to the Cyber Recovery vault and lock down the vault, and it's simple and straightforward. As we talked about a little earlier, the way we do the automation is that you click on the lock; that locks everything down and stops any future replications from coming in.

And while the security team is looking to find out how bad it is and what was affected, one of the things the cyber recovery team does is go in and run some analysis, if you haven't done so already. You can automate this type of analysis, but let's say you haven't done that. Let's say you have 30 point-in-time copies, one for each day throughout the last month. You might want to run an analysis against maybe the last five of those to see whether they come up as suspicious or as okay.

The way that's done is to look at the entropy of the different point-in-time backups. One thing to note is that you do not have to rehydrate the backup in order to analyze it. So let's say you backed it up with Avamar and then you want to analyze that backup. You don't have to rehydrate it in the vault in order to run that analysis.
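As a rough sketch of what an entropy check can look like, the snippet below samples files in a point-in-time copy and flags those whose Shannon entropy looks like ciphertext, since ransomware-encrypted files tend toward near-random data. The paths, threshold, and sample size are illustrative assumptions; this is not the analytics engine used with the Cyber Recovery vault.

```python
# Minimal entropy-based screening sketch for a point-in-time copy.
import math
from collections import Counter
from pathlib import Path

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; plain text is typically ~4-6, encrypted data approaches 8."""
    if not data:
        return 0.0
    counts, n = Counter(data), len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def scan_copy(root: str, sample_bytes: int = 65536, threshold: float = 7.5):
    """Flag files whose sampled entropy looks like ciphertext."""
    suspicious = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            entropy = shannon_entropy(path.read_bytes()[:sample_bytes])
            if entropy > threshold:
                suspicious.append((str(path), round(entropy, 2)))
    return suspicious

# Usage idea: compare the last few copies and look for a spike in flagged files.
# for copy_dir in ["/vault/pit-2019-10-25", "/vault/pit-2019-10-26"]:
#     print(copy_dir, len(scan_copy(copy_dir)), "suspicious files")
```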

Once that's done, there are a lot of different ways you can decide what to do. If you have physical machines that are not in great shape, they are suspect. But if the physical parts are okay, you could decide that at some point you're going to reload those machines with the gold copies that are typically kept in the vault, and then put the data and such back on them.

If you have image-level backups in the vault, those are very easy to get back up and running on a VMware ESX host, or a Microsoft Hyper-V host, that you have in your production environment. So, there are a lot of different ways that you can do that.

The whole idea, though, is that our typical Cyber Recovery solution is air-gapped and we recommend customers have a whole separate set of physical controls as well as the software controls.

Now, one of those steps may not be practical in all situations. That’s why we looked at Unisys Stealth, to provide a virtual air gap by installing the pieces from Stealth.

Remove human error 

Peters: One of the things I learned in working with the United States Air Force’s Information Warfare Center was the fact that you can build the most incredibly secure operation in the world and humans will do things to change it.

With Stealth, we allow organizations to be able to get access into the vault from a management perspective to do analytics, and also from a recovery perspective, because anytime there’s a change to the way that vault operates, that’s an opportunity for bad guys to find a way in. Because, once again, they’re targeting these systems. They know they’re there; they could be watching them and they can be spending years doing this and watching the operations.

Unisys Stealth removes the opportunity for human error. We remove the visibility that any bad guys, or any malware, would have inside a network to observe a vault. They may see data flowing but they don’t know what it’s going to, they don’t know what it’s for, they can’t read it because it’s going to be encrypted. They are not going to be able to even see the endpoints because they will never be able to get an address on them. We are cryptographically disappearing or hiding or cloaking, whatever word you’d like to use -- we are actively removing those from visibility from anything else on the network unless it’s specifically authorized.

Gardner: Let's look to the future. As we pointed out earlier in our discussion, there is a sort of spy-versus-spy, dog-chasing-the-cat dynamic -- whatever metaphor you want to use -- where one side of the battle is adjusting constantly and the other is reacting to it. So, as we move to the future, are there machine learning (ML)-enabled analytics on these attacks to help prevent them? How will we be able to always stay one step ahead of the threat?

Peters: With our technology we already embody ML. We can do responses called dynamic isolation. A device could be misbehaving and we could change its policy and be able to either restrict what it’s able to communicate with or cut it off altogether until it’s been examined and determined to be safe for the environment.

We can provide a lot of automation, a lot of visibility, and machine-speed reaction in response to threats as they are happening. Malware doesn't have to get that 20-second head start. We might be able to cut it off in 10 seconds and make a dynamic change to the threat surface.
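To make the dynamic-isolation idea concrete, here is a hedged sketch in which an endpoint's policy is tightened automatically once a behavioral anomaly score crosses a threshold. The policy names, scores, and thresholds are invented for illustration; this is not the Stealth management interface.

```python
# Sketch of "dynamic isolation": a misbehaving endpoint's communication policy is
# tightened or revoked at machine speed. Names and thresholds are hypothetical.
from enum import Enum

class Policy(Enum):
    NORMAL = "full community-of-interest access"
    RESTRICTED = "management network only"
    ISOLATED = "no communication until reviewed"

def react(endpoint: str, anomaly_score: float, current: Policy) -> Policy:
    """Decide a new policy from a behavioral anomaly score in the range 0-1."""
    if anomaly_score >= 0.9:
        new_policy = Policy.ISOLATED
    elif anomaly_score >= 0.6:
        new_policy = Policy.RESTRICTED
    else:
        new_policy = current
    if new_policy is not current:
        print(f"[{endpoint}] policy changed: {current.name} -> {new_policy.name}")
    return new_policy

react("db-server-07", anomaly_score=0.93, current=Policy.NORMAL)  # -> ISOLATED
```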

Gardner: George, what’s in the future that it’s going to allow you to stay always one step ahead of the bad guys? Also, is there is an advantage for organizations doing a lot of desktops-as-a-service (DaaS) or virtual desktops? Do they have an advantage in having that datacenter image of all of the clients?

Think like a bad guy 

Pradel: Oh, yes, definitely. How do we stay in front of the bad guys? You have to think like the bad guys. And so, one of the things that you want to do is reduce your attack surface. That’s a big part of it, and that’s why the technology that we use to analyze the backups, looking for malware, uses 100 different types of objects of entropy.

As we're doing ML on that data -- learning what's normal and what's not normal -- we can figure out exactly where the issues are and stay ahead of them.

Now, an air gap on its own is extremely secure because it keeps that data in an environment where no one can get at it. We have also had situations where Unisys Stealth helped in an air-gap situation -- where a particular general might have three different networks that they need to connect to -- and Stealth is a fantastic solution for that.

If you're doing DaaS, there are ways that it can help. We're always looking at where the data resides, and most of the time in those situations the data is going to reside back in the corporate infrastructure. That's a very easy place to protect data. When the data is out on laptops and things like that, it makes it a little bit more difficult -- not impossible, but you have a lot of different endpoints that you're pulling from. Bringing the system back up, if you're using virtual desktops, is actually pretty straightforward, because chances are the attackers are not going to bring down the virtual desktop environment itself; they're going to encrypt the data.
Now, that said, when we're having these conversations, it's not a straightforward conversation. We talk about how long you might be out of business depending upon what you've implemented. We have to engineer for all the different types of malware attacks. And what's the common denominator? It's the data -- keeping that data safe, keeping that data so it can't be deleted.


We have a retention lock capability so you can lock that up for as many as 70 years and it takes two administrators to unlock it. That’s the kind of thing that makes it robust.
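The dual-control idea behind that retention lock can be sketched as follows. This is a toy model of "two administrators to unlock," not the actual Data Domain Retention Lock implementation.

```python
# Toy dual-authorization retention lock: data cannot be deleted before expiry,
# and lifting the lock early requires approval from two distinct administrators.
from datetime import datetime, timedelta

class RetentionLock:
    def __init__(self, years: int):
        self.expires = datetime.utcnow() + timedelta(days=365 * years)
        self.approvals = set()

    def approve_unlock(self, admin_id: str):
        self.approvals.add(admin_id)

    def can_delete(self) -> bool:
        expired = datetime.utcnow() >= self.expires
        dual_approved = len(self.approvals) >= 2   # two different administrators
        return expired or dual_approved

lock = RetentionLock(years=70)
lock.approve_unlock("admin-alice")
print(lock.can_delete())   # False: one approval is not enough
lock.approve_unlock("admin-bob")
print(lock.can_delete())   # True: two distinct admins approved
```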

In the old days, we would do a WORM drive and copy stuff off to a CD to make something immutable. This is a great way to do it. And that’s one way to stay ahead of the bad guys as best as we can.

Wednesday, October 23, 2019

How Unisys and Microsoft team up to ease complex cloud adoption for governments and enterprises

https://www.unisys.com/offerings/cloud-and-infrastructure-services

The path to cloud computing adoption persistently appears complex and risky to both government and enterprise IT leaders, recent surveys show.

This next BriefingsDirect managed cloud methodologies discussion explores how tackling complexity and security requirements upfront helps ease the adoption of cloud architectures. By combining managed services, security solutions, and hybrid cloud standardization, both public and private sector organizations are now making the cloud journey a steppingstone to larger business transformation success.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

We'll now explore how cloud-native apps and services modernization benefit from prebuilt solutions with embedded best practices and automation. To learn how, we welcome Raj Raman, Chief Technology Officer (CTO) for Cloud at Unisys, and Jerry Rhoads, Cloud Solutions Architect at Microsoft. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.


Here are some excerpts:

Gardner: Raj, why are we still managing cloud adoption expectations around complexity and security? Why has it taken so long to make the path to cloud more smooth -- and even routine?

Raman: Well, Dana, I spend quite a bit of time with our customers. A common theme we see -- be it a government agency or a commercial customer -- is that many of them are driven by organizational mandates, and getting those organizational mandates in place often proves more challenging than one thinks.

Cloud adoption challenges 

The other part is that while Amazon Web Services (AWS) or Microsoft Azure may be very easy to get on to, the question then becomes how do you scale up? They have to either figure out how to develop in-house capabilities or they look to a partner like Unisys to help them out.

Cloud security adoption continues to be a challenge because enterprises still try to apply traditional security practices to the cloud. Having a security and risk posture on AWS or Azure means having a good understanding of the shared security model across the user, application, and infrastructure layers of the cloud.

And last, but not least, a very clear mandate -- such as a digital transformation or a specific initiative with a core sponsor behind it -- oftentimes does ease the focus on some of these challenges.

These are some of the reasons we see for cloud complexity. The applications transformation can also be quite arduous for many of our clients.

Gardner: Jerry, what are you seeing for helping organizations get cloud-ready? What best practices make for a smoother on-ramp?

Rhoads: One of the best practices beforehand is to determine what your endgame is going to look like. What is your overall cloud strategy going to look like?

Instead of just lifting and shifting a workload, what is the life cycle of that workload going to look like? It means a lot of in-depth planning -- whether it's a government agency or private enterprise. Once we get into the mandates, it's about, “Okay, I need this application that’s running in my on-premises data center to run in the cloud. How do I make it happen? Do I lift it and shift it or do I re-architect it? If so, how do I re-architect for the cloud?”

That’s a big common theme I’m seeing: “How do I re-architect my application to take better advantage of the cloud?”

Gardner: One of the things I have seen is that a lot of organizations do well with their proof of concepts (POCs). They might have multiple POCs in different parts of the organization. But then, getting to standardized comprehensive cloud adoption is a different beast.

Raj, how do you make that leap from spotty cloud adoption, if you will, to more holistic?

One size doesn’t fit all

Raman: We advise customers to try and [avoid] taking it on as a one-size-fits-all. For example, we have one client who is trying – all at once – to lift and shift thousands of applications.

Now, they did a very detailed POC and they got yield from that POC. But when it came to the actual migration and transformation, they were convinced and felt confident that they could take it on and try it en masse, with thousands of applications.

The thing is, in trying to do that, not all applications are one size. One needs a phased approach for doing application discovery and application assessment. Then, based on that, you can determine which applications are well worth the effort [to move to a cloud].

So we recommend to customers that they think of migrations as a phased approach. Be very clear in terms of what you want to accomplish. Start small, gain the confidence, and then have a milestone-based approach of accomplishing it all.
Gardner: These mandates are nonetheless coming down from above. For the US federal government, for example, cloud has become increasingly important. We are expecting somewhere in the vicinity of $3.3 billion to be spent for federal cloud in 2021. Upward of 60 percent of federal IT executives are looking to modernization. They have both the cloud and security issues to face. Private sector companies are also seeing mandates to rapidly become cloud-native and cloud-first.

Jerry, when you have that pressure on an IT organization -- but you have to manage the complexity of many different kinds of apps and platforms -- what do you look for from an alliance partner like Unisys to help make those mandates come to fruition?

Rhoads: In working with partners such as Unisys, they know the customer. They are there on the ground with the customer. They know the applications. They hear the customers. They understand the mandates. We also understand the mandates and we have the cloud technology within Azure. Unisys, however, understands how to take our technology and integrate it in with their end customer’s mission.

Gardner: And security is not something you can just bolt on, or think of, after the fact in such migrations. Raj, are we seeing organizations trying to both tackle cloud adoption and improve their security? How do Unisys and Microsoft come together to accomplish those both as a tag team rather than a sequence, or even worse, a failure?

Secure your security strategy

Raman: We recently conducted a survey of our stakeholders, including some of our customers. And to no surprise, security -- be it as part of migrations or in scaling up current cloud initiatives -- is by far a top area of focus and concern.

We are already partnering with Microsoft and others with our flagship security offering, Unisys Stealth. We are not just in collaboration but leapfrogging in terms of innovation. The Azure cloud team has released a specific API to make products like Stealth available. This now gives customers more choice and it allows Unisys to help meet customers in terms of where they are.

Also, earlier this year we worked very closely with the Microsoft cloud team to release Unisys CloudForte for Azure. These are foundational elements that help both governments as well as commercial customers leverage Azure as a platform for doing their digital transformation.

The Microsoft team has also stepped up and worked very closely with the Unisys team developers and architects to make these services native on Azure, as well as help customers understand how they can better consume Azure services.

Those are very specific examples in which we see the Unisys collaboration with Microsoft scaling really well.

Gardner: Jerry, it is, of course, about more than just the technology. These are, after all, business services. So whether a public or private organization is making the change to an operations model -- paying as you consume and budgeting differently -- financially you need to measure and manage cloud services differently.

How is that working out? Why is this a team sport when it comes to adopting cloud services as well as changing the culture of how cloud-based business services are consumed?

Keep pay-as-you-go under control 

Rhoads: One of the biggest challenges I hear from our customers is around going from a CAPEX model to an OPEX model. They don’t really understand how it works.

CAPEX is a longtime standard -- here is the price and here is how long it is good for, until you have to re-up and buy a new piece of hardware or re-up the license, or whatnot. Using cloud, it's pay-as-you-go.

If I launch 400 servers for an hour, I’m paying for 400 virtual machines running for one hour. So if we don’t have a governance strategy in place to stop something like that, we can wind up going through one year's worth of budget in 30 days -- if it's not governed, if it's not watched.

And that's why, for instance, working with Unisys CloudForte there are built-in controls where you can go through and query the Azure cloud backend -- such as Azure Cost Management or our Cloudyn product -- to see what your current charges are, as well as forecast what those charges are going to look like. Then you can get ahead of the eight ball, if you will, and make sure that you are actually burning through your budget correctly -- versus getting a surprise at the end of the month.
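The guardrail arithmetic itself is simple, as this sketch shows: project month-end spend from the month-to-date burn rate and compare it with budget. The rates and budget figures are made up; a real implementation would pull actuals and forecasts from Azure Cost Management.

```python
# Back-of-the-envelope budget guardrail for pay-as-you-go spend.
def forecast_month_end(spend_to_date: float, day_of_month: int, days_in_month: int) -> float:
    daily_rate = spend_to_date / day_of_month
    return daily_rate * days_in_month

def check_budget(spend_to_date, day_of_month, days_in_month, monthly_budget):
    projected = forecast_month_end(spend_to_date, day_of_month, days_in_month)
    status = "OVER" if projected > monthly_budget else "ok"
    print(f"Projected month-end spend: ${projected:,.0f} vs budget ${monthly_budget:,.0f} [{status}]")

# Example: 400 VMs at an assumed $0.10/hour left running around the clock.
hourly_cost = 400 * 0.10                      # $40/hour
spend_after_10_days = hourly_cost * 24 * 10   # $9,600 in ten days
check_budget(spend_after_10_days, day_of_month=10, days_in_month=30, monthly_budget=20_000)
# -> Projected month-end spend: $28,800 vs budget $20,000 [OVER]
```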

Gardner: Raj, how should organizations better manage that cultural shift around cloud consumption governance?

Raman: Adding to Jerry’s point, we see three dimensions to help customers. One is what Unisys calls setting up a clear cloud architecture, the foundations. We actually have an offering geared around this. And, again, we are collaborating with Microsoft on how to codify those best practices.

In going to the cloud, we see five pillars that customers have to contend with: cost, security, performance, availability, and operations. Each of these can be quite complex and very deep.


Rather than have customers figure these out themselves, we have combined product and framework. We have codified it, saying, “Here are the top 10 best practices you need to be aware of in terms of cost, security, performance, availability, and operations.”

It makes it very easy for the Unisys consultants, architects, and customers to understand at any given point -- be it pre-migration or post-migration -- where they stand on cost in the cloud, with clear visibility.
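As an illustration of what codifying those pillars can look like, here is a hypothetical checklist structure scored per pillar. The individual checks are generic examples, not the actual CloudForte framework.

```python
# Hypothetical per-pillar checks (cost, security, performance, availability, operations).
CHECKLIST = {
    "cost":         ["budgets and alerts configured", "idle resources reviewed weekly"],
    "security":     ["MFA enforced for admins", "encryption at rest enabled"],
    "performance":  ["autoscaling rules defined", "baseline metrics collected"],
    "availability": ["failover to another zone or region tested", "backup restores rehearsed"],
    "operations":   ["infrastructure defined as code", "alert runbooks documented"],
}

def review(results: dict) -> None:
    """Print a per-pillar score given {check: True/False} findings for a workload."""
    for pillar, checks in CHECKLIST.items():
        passed = sum(results.get(c, False) for c in checks)
        print(f"{pillar:<13} {passed}/{len(checks)} checks passed")

review({"budgets and alerts configured": True, "MFA enforced for admins": True})
```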

We are also thinking about security and compliance upfront -- not as an afterthought. Oftentimes customers go deep into the journey and they realize they may not have the controls and the security postures, and the next thing you know they start to lose confidence.

So rather than wait for that, the thinking is we arm them early. We give them the governance and the policies on all things security and compliance. And Azure has very good capabilities toward this.


The third bit, and Jerry touched on this, is overall financial governance. That is the ability to think about cost -- not just as a matter of spinning a few Azure resources up and down -- but in a holistic way, in a governance model. That way you can break it up in terms of analyzed or utilized resources. You can do chargebacks and governance and gain the ability to optimize cost on an ongoing basis.

These are distinctive foundational elements that we are trying to arm customers with, to make them a lot more comfortable and to build trust, as well as process, around cloud adoption.

Gardner: The good news about cloud offerings like Azure and hybrid cloud offerings like Azure Stack is you gain a standardized approach. Not necessarily one-size-fits-all, but an important methodological and technical consistency. Yet organizations are often coming from a unique legacy, with years and years of everything from mainframes to n-tier architectures, and applications that come and go.

How do Unisys and Microsoft work together to make the best of standardization for cloud, but also recognize specific needs that each organization has?

Different clouds, same experience

Rhoads: We have Azure Stack for on-premises Azure deployments. We also have Azure Commercial Cloud, as well as Azure Government Cloud and Department of Defense (DoD) Cloud. The good news is that they use the same portal, same APIs, same tooling, and same products and services across all three clouds.

Now, as services roll out, they roll out in our Commercial Cloud first, and then we will roll them out into Azure Government as well as into Azure Stack. But, again, the good news is these products are available, and you don’t have to do any special configuration or anything in the backend to make it work. It’s the same experience regardless of which product the customer wants to use.

What's more, Unisys CloudForte works with Azure Stack, with Commercial, and with Azure for Government. For the end customer it's the same cloud services that they expect to use. The difference really is just where those cloud services live: with Azure Stack on-premises, on a cruise ship or in a mine; with Azure Commercial Cloud; or, if you need a regulated workload such as a FedRAMP High or DoD IL4 or IL5 workload, with Azure Government. But there are no different skills required to use any of those clouds.

Same skill set. You don’t have to do any training, it’s the same products and services. And if the products and services aren't in that region, you can work with Unisys or myself to engage the product teams to put those products in Azure Stack or in Azure for Government.
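One way to picture "same code, different cloud" is the sketch below, which assumes the azure-identity Python package: switching between Azure Commercial and Azure Government is a matter of pointing the credential at a different authority host and management endpoint, while the rest of the code stays the same. The endpoint values shown are the commonly documented ones, but verify them for your environment.

```python
# Same client code, different sovereign cloud: only the sign-in authority and the
# management endpoint change. Assumes the azure-identity package is installed.
from azure.identity import ClientSecretCredential, AzureAuthorityHosts

CLOUDS = {
    "commercial": {"authority": AzureAuthorityHosts.AZURE_PUBLIC_CLOUD,
                   "management": "https://management.azure.com"},
    "government": {"authority": AzureAuthorityHosts.AZURE_GOVERNMENT,
                   "management": "https://management.usgovcloudapi.net"},
}

def credential_for(cloud: str, tenant_id: str, client_id: str, client_secret: str):
    """Build a credential for the chosen cloud; downstream code stays identical."""
    return ClientSecretCredential(
        tenant_id, client_id, client_secret,
        authority=CLOUDS[cloud]["authority"],
    )
```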

Gardner: How does Unisys CloudForte managed services complement these multiple Azure cloud environments and deployment models?

Rhoads: CloudForte really further standardizes it. There are different levels of CloudForte, for instance, and the underlying cloud really doesn't matter; it's going to be the same experience to roll that out. But more importantly, CloudForte is really an on-ramp. A lot of times I am working with customers and they ask, "Well, gee, how do I get started?"

Whether it's setting up that subscription and tenant and getting them on board, or figuring out how to roll out that first POC, that's where we leverage Unisys and CloudForte as the on-ramp. And that's true whether the POC is a bare-bones Azure virtual network or a complete soup-to-nuts application with application services wrapped around it. CloudForte and Unisys can provide that functionality.

Do it your way, with support 

Raman: Unisys CloudForte has been designed as an offering on top of Azure. There are two key themes. One, meet customers where they are. It's not about what Unisys is trying to do or what Azure is trying to do. It's about, first and foremost, being customer obsessed. We want to help customers do things on their terms and do it the right way.

So CloudForte has been designed to meet those twin objectives. The way we go about doing it is -- imagine, if you will, a flywheel. The flywheel has four parts. One, the whole consumption part, which is the ability to consume Azure workloads at any given point.
Next is the ability to run commands, or the operations piece. Then you follow that up with the ability to accelerate transformations, so data migrations or app modernization.

Last, but not least, is to transform the business itself, be it on a new technology, artificial intelligence (AI), machine learning (ML), blockchain, or anything that can wrap on top of Azure cloud services.

The beauty of the model is a customer does not have to buy all of these en masse; they could be fitting into any of this. Some customers come and say, “Hey, we just want to consume the cloud workloads, we really don’t want to do the whole transformation piece.” Or some customers say, “Thank you very much, we already have the basic consumption model outlined. But can you help us accelerate and transform?”

So the ability to provide flexibility on top of Azure helps us to meet customers where they are. That’s the way CloudForte has been envisioned, and a key part of why we are so passionate and bullish in working with Microsoft to help customers meet their goals.

Gardner: We have talked about technology, we have talked about process, but of course people and human capital and resources of talent and skills are super important as well. So Raj, what does the alliance between Unisys and Microsoft do to help usher people from being in traditional IT to be more cloud-native practitioners? What are we doing about the people element here?

Expert assistance available

Raman: In order to be successful, one of the big focus areas at Unisys is to arm and equip our own people -- be it at the consulting level or the sales-facing level, doing cloud architectures or cloud delivery -- across the board, rank and file. There is an absolute mandate to increase the number of certifications, especially the Azure certifications.

In fact, I can also share that, as we speak, every month Unisys is doubling the number of people who hold the Azure AZ-300 and AZ-900 certifications. These are the two popular certifications across the whole Azure stack. We now have north of 300 trained people -- and my number may be at the lower end. We expect the number to double.

So we have absolute commitment, because customers look to us to not only come in and solve the problems, but to do it with the level of expertise that we claim. So that’s why our commitment to getting our people trained and certified on Azure is a very important piece of it.

Gardner: One of the things I love to do is to not just tell, but to show. Do we have examples of where the Unisys and Microsoft alliance -- your approach and methodologies to cloud adoption, tackling the complexity, addressing the security, and looking at both the unique aspect of each enterprise and their skills or people issues -- comes all together? Do you have some examples?

Raman: California State University is a longstanding customer of ours and a good example. They have transformed their university infrastructure using Unisys CloudForte, with a specific focus on all things hybrid cloud. We are pleased to see that not only is the customer happy, but they are quite eager to get back to us to make sure that their mandates are met on a consistent basis.

Our federal agencies are usually reluctant to be in the spotlight. That said, what I can share are representative examples. We have some very large defense establishments working with us. We have some state agencies close to the Washington, DC area, agencies responsible for the roll-out of cloud consumption across the mandates.

We are well on our way in not only working with the Microsoft Azure cloud teams, but also with state agencies. Each of these agencies is citywide or region-wide, and within that they have a health agency or an agency focused on education or social services.

In our experience, we are seeing an absolute interest in adopting the public clouds for them to achieve their citizens’ mandates. So those are some very specific examples.

Gardner: Jerry, when we look to both public and private sector organizations, how do you know when you are doing cloud adoption right? Are there certain things you should look to, that you should measure? Obviously you would want to see that your budgets are moving from traditional IT spending to cloud consumption. But what are the metrics that you look to?

The measure of success 

Rhoads: One of the metrics that I look at is cost. You may do a lift and shift and maybe you are a little bullish when you start building out your environments. When you are doing cloud adoption right, you should see your costs start to go down.

So your consumption will go up, but your costs will go down. That's because you are taking advantage of platform as a service (PaaS) in the cloud and being able to auto-scale out, or you are looking to move to, say, Kubernetes and start using things like Docker containers -- shutting down those big virtual machines (VMs) and clusters of VMs and running your Kubernetes services on top of them.

When you see those costs go down and your services going up, that’s usually a good indicator that you are doing it right.

Gardner: Just as a quick aside, Jerry, we have also seen that Microsoft Azure is becoming very container- and Kubernetes-oriented, is that true?

Rhoads: Yes, it is. We actually have Brendan Burns, as a matter of fact, who was one of the co-creators of Kubernetes during his time at Google.

Gardner: Raj, how do you know when you are getting this right? What do you look to as chief metrics from Unisys's perspective when somebody has gone beyond proof of concept and they are really into a maturity model around cloud adoption?

Raman: One of the things we take very seriously is our mandate to customers to do cloud on your terms and do it right. And what we mean by that is something very specific, so I will break it in two.

One is from a customer-led metric perspective. We measure ourselves very seriously in terms of Net Promoter Score. We have one of the highest in the industry relative to the rest of our competition. And that's something that's hard-earned, but we keep striving to raise the bar on how our customers talk to each other and how they feel about us.

The other part is the ability to retain customers, so retention. So those are two very specific customer-focused benchmarks.

Now, building upon some of the examples that Jerry was talking about, from a cloud metric perspective -- besides cost and cost optimization -- we also look at some very specific metrics, such as how many net-new workloads are under management. What are some of the net-new services being launched? We are especially curious to see if there is a focus on Kubernetes or AI and ML adoption -- are there any trends toward that?

One of the very interesting ones that I will share, Dana, is that some of our customers are starting to come and ask us, “Can you help set up an Azure Cloud center of excellence within our organization?” So that oftentimes is a good indicator that the customer is looking to transform the business beyond the initial workload movement.

And last, but not the least, is training, and absolute commitment to getting their own organization to become more cloud-native.

Gardner: I will toss another one in -- and I know it's hard to get organizations to talk about it -- but fewer security breaches, and fewer days or instances of downtime because of a ransomware attack. It's hard to prove a negative when you don't get attacked, but certainly a better security posture compared with two or three years ago would be a high indicator on my map as to whether cloud is being successful for you.

All right, we are almost out of time, so let's look to the future. What comes next when we get to a maturity model, when organizations are comprehensive, standardized around cloud, have skills and culture oriented to the cloud regardless of their past history? We are also of course seeing more use of the data behind the cloud, in operations and using ML and AI to gain AIOps benefits.

Where can we look to even larger improvements when we employ and use all that data that’s now being generated within those cloud services?

Continuous cloud propels the future 

Raman: One of the things that's very evident to us, as customers start to come to us and use the cloud at significant scale, is that it is very hard for any one organization -- we see this even at Unisys -- to scale up and keep up with the rate of change that cloud platform vendors such as Azure are bringing to the table. These are all good innovations, but how do you keep on top of them?

So that's where a focus on what we are calling "AI-led operations" is becoming very important for us. It's about the ability to look at the operational data and help these customers go from a reactive, hindsight-led model to a more proactive, foresight-driven model. That can then guide not only their cloud operations, but also help them think about where they can leverage this data and the Azure infrastructure to launch more innovation or new business mandates. That's where the AIOps piece -- the AI-led operations piece -- kicks in.
There is a reason why cloud is called continuous. You gain the ability to have continuous visibility into compliance and security, and to have constant optimization -- reviewing the cloud workloads on an ongoing basis and making sure their architectures are assessed against Azure best practices.

And then last, but not least, one other trend I would surface, Dana, as part of this, is that we are starting to see an increase in the use of conversational bots. Many customers are interested in getting to a self-service mode. That's where we see conversational bots built on Azure or Cortana becoming more mainstream.

Gardner: Jerry, how do organizations recognize that the more cloud adoption and standardization they have, the more benefits they will get in terms of AIOps -- and that a virtuous adoption pattern kicks in?

Rhoads: To expand on what Raj talked about with AIOps, we actually have built in a lot of AI into our products and services. One of them is with Advanced Threat Protection (ATP) on Azure. Another one is with anti-phishing mechanisms that are deployed in Office 365.

So as more folks move into the cloud, we are seeing a lot of adoption around these products and services. We are also able to bring in a lot of feedback and learn from the behaviors we are seeing, which makes the products even better.

DevOps integrated in the cloud 

So one of the things I do in working with my customers is DevOps: How do we employ DevOps in the cloud? A lot of folks are doing DevOps on-premises, and they are doing it from an application point of view -- I am rolling out my application on infrastructure that is either virtualized or physical, sitting in my data center. How do I do that in the cloud, and why do I do that in the cloud?

Well, in the cloud everything is software, including infrastructure. Yes, it sits on a server at the end of the day; however, it is software-defined, and because it is software-defined, it has an API and I can write code against it. So if I want to roll out a suite of VMs, or roll out Kubernetes clusters and put my application on top of them, I can create definable, repeatable code, if you will, that I can check into a repository someplace, press the button, and roll out that infrastructure with my application on top of it.
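As a minimal sketch of that "press the button" idea -- assuming the Azure CLI is installed and logged in, and using placeholder resource group and template file names -- a template checked into source control can be redeployed on demand:

```python
# Re-create an environment from a template that lives in source control.
# Resource group and file names below are placeholders for illustration.
import subprocess

def deploy(resource_group: str, template_file: str, parameters_file: str) -> None:
    """Run an ARM template deployment into an existing resource group via the Azure CLI."""
    subprocess.run(
        [
            "az", "deployment", "group", "create",
            "--resource-group", resource_group,
            "--template-file", template_file,
            "--parameters", f"@{parameters_file}",
        ],
        check=True,
    )

deploy("rg-demo-app", "infra/main.json", "infra/main.parameters.json")
```

Because the same command produces the same environment every time, the deployment itself becomes the repeatable unit that DevOps teams can run as often as they like.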


So now, deploying applications with DevOps in the cloud is not about having an operations team and then a DevOps team that rolls out the application on top of existing infrastructure. Instead, I bundle it all together. I have tighter integration, which means I now have repeatable deployments. And instead of doing deployments every quarter or annually, I can do 20, 30, or 1,000 a day if I like -- if I do it right.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Unisys and Microsoft.
