Tuesday, August 15, 2017

DreamWorks Animation crafts its next era of dynamic IT infrastructure

The next BriefingsDirect Voice of the Customer thought leader interview examines how DreamWorks Animation is building a multipurpose, all-inclusive, and agile data center capability.

Learn here why a new era of responsive and dynamic IT infrastructure is demanded, and how one high-performance digital manufacturing leader aims to get there sooner rather than later. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to describe how an entertainment industry innovator leads the charge for bleeding-edge IT-as-a-service capabilities is Jeff Wike, CTO of DreamWorks Animation in Glendale, California. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us why the older way of doing IT infrastructure and hosting apps and data just doesn't cut it anymore. What has made that run out of gas?

Wike: You have to continue to improve things. We are in a world where technology is advancing at an unbelievable pace. The amount of data, the capability of the hardware, and the intelligence of the infrastructure are all growing. In order for any business to stay ahead of the curve -- to really drive value into the business -- it has to continue to innovate.

Gardner: IT has become more pervasive in what we do. I have heard you all refer to yourselves as digital manufacturing. Are the demands of your industry also a factor in making it difficult for IT to keep up?

Wike: When I say we are a digital manufacturer, it’s because we are a place that manufactures content, whether it's animated films or TV shows; that content is all made on the computer. An artist sits in front of a workstation or a monitor, and is basically building these digital assets that we put through simulations and rendering so in the end it comes together to produce a movie.

That's all about manufacturing, and we actually have a pipeline, but it's really like an assembly line. I was looking at a slide today about Henry Ford coming up with the first assembly line; it's exactly what we are doing, except instead of adding a car part, we are adding a character, we’re adding a hair to a character, we’re adding clothes, we’re adding an environment, and we’re putting things into that environment.

We are manufacturing that image, that story, in a linear way, but also in an iterative way. We are constantly adding more details as we embark on that process of three to four years to create one animated film.

Gardner: Well, it also seems that we are now taking that analogy of the manufacturing assembly line to a higher plane, because you want an assembly line that doesn't just make cars -- it can make cars and trains and submarines and helicopters -- without changing the assembly line itself; you have to adjust it and utilize it properly.

So it seems to me that we are at perhaps a cusp in IT where the agility of the infrastructure and its responsiveness to your workloads and demands is better than ever.

Greater creativity, increased efficiency

Wike: That's true. If you think about this animation process or any digital manufacturing process, one issue that you have to account for is legacy workflows, legacy software, and legacy data formats -- all these things are inhibitors to innovation. There are a lot of tools. We actually write our own software, and we’re very involved in projects related to computer science at the studio.

We’ll ask ourselves, “How do you innovate? How can you change your environment to be able to move forward and innovate and still carry around some of those legacy systems?”

And one of the things we’ve done over the past couple of years is start to re-architect all of our software tools in order to take advantage of massive multi-core processing to try to give artists interactivity into their creative process. It’s about iterations. How many things can I show a director, how quickly can I create the scene to get it approved so that I can hand it off to the next person, because there are two things that you get out of that.

One, you can explore more and you can add more creativity. Two, you can drive efficiency, because it's all about how much time, how many people are working on a particular project and how long does it take, all of which drives up the costs. So you now have these choices where you can add more creativity or -- because of the compute infrastructure -- you can drive efficiency into the operation.

So where does the infrastructure fit into that, because we talk about tools and the ability to make those tools quicker, faster, more real-time? We conducted a project where we tried to create a middleware layer between running applications and the hardware, so that we can start to do data abstraction. We can get more mobile as to where the data is, where the processing is, and what the systems underneath it all are. Until we could separate the applications through that layer, we weren’t really able to do anything down at the core.

Core flexibility, fast

Now that we have done that, we are attacking the core. We are looking at our ability to replace that core with new compute, and to add new templates with all the security built in -- we want that in our infrastructure. We want to be able to change how we are using that infrastructure -- examine usage patterns, the workflows -- and be able to optimize.

Before, if we wanted to do a new project, we’d say, “Well, we know that this project takes x amount of infrastructure. So if we want to add a project, we need 2x,” and that makes a lot of sense. So we would build to peak. If at some point in the last six months of a show, we are going to need 30,000 cores to be able to finish it in six months, we say, “Well, we better have 30,000 cores available, even though there might be times when we are only using 12,000 cores.” So we were buying to peak, and that’s wasteful.
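The cost of building to peak can be illustrated with quick back-of-the-envelope arithmetic, using the core counts from the example above (the daily usage pattern here is invented for illustration):

```python
def idle_core_hours(provisioned_cores, usage_by_day):
    """Core-hours that sit idle when capacity is provisioned to peak.

    provisioned_cores: cores bought to cover the peak (e.g. 30,000)
    usage_by_day: actual cores in use on each day
    """
    return sum((provisioned_cores - used) * 24 for used in usage_by_day)

# Illustrative month: alternating peak days (30,000 cores) and valley days (12,000)
usage = [30_000 if day % 2 == 0 else 12_000 for day in range(30)]
wasted = idle_core_hours(30_000, usage)
print(wasted)  # 15 valley days * 18,000 idle cores * 24 h = 6,480,000 core-hours
```

Those idle core-hours are exactly the "valleys" Wike describes: with homogeneous, fixed infrastructure they are sunk cost, while composable infrastructure lets other projects reclaim them.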

What we wanted was to be able to take advantage of those valleys, if you will, as an opportunity -- the opportunity to do other types of projects. But because our infrastructure was so homogeneous, we really didn't have the ability to do a different type of project. We could create another movie if it was very much the same as a previous film from an infrastructure-usage standpoint.

By now having composable, or software-defined infrastructure, and being able to understand what the requirements are for those particular projects, we can recompose our infrastructure -- parts of it or all of it -- and we can vary that. We can horizontally scale and redefine it to get maximum use of our infrastructure -- and do it quickly.

Gardner: It sounds like you have an assembly line that’s very agile, able to do different things without ripping and replacing the whole thing. It also sounds like you gain infrastructure agility to allow your business leaders to make decisions such as bringing in new types of businesses. And in IT, you will be responsive, able to put in the apps, manage those peaks and troughs.

Does having that agility not only give you the ability to make more and better movies with higher utilization, but also gives perhaps more wings to your leaders to go and find the right business models for the future?

Wike: That’s absolutely true. We certainly don't want to ever have a reason to turn down some exciting project because our digital infrastructure can’t support it. I would feel really bad if that were the case.

In fact, that was the case at one time, way back when we produced Spirit: Stallion of the Cimarron. Because it was such a big movie from a consumer products standpoint, we were asked to make another movie for direct-to-video. But we couldn't do it; we just didn’t have the capacity, so we had to just say, “No.” We turned away a project because we weren’t capable of doing it. The time it would take us to spin up a project like that would have been six months.

The world is great for us today, because people want content -- they want to consume it on their phone, on their laptop, on the side of buildings and in theaters. People are looking for more content everywhere.

Yet projects for varied content platforms require different amounts of compute and infrastructure, so we want to be able to create content quickly and avoid building to peak, which is too expensive. We want to be able to be flexible with infrastructure in order to take advantage of those opportunities.

Gardner: How is the agility in your infrastructure helping you reach the right creative balance? I suppose it’s similar to what we did 30 years ago with simultaneous engineering, where we would design a physical product for manufacturing, knowing that if it didn't work on the factory floor, then what's the point of the design? Are we doing that with digital manufacturing now?

Artifact analytics improve usage, rendering

Wike: It’s interesting that you mention that. We always look at budgets, and budgets can be money budgets, they can be rendering budgets, they can be storage budgets, and networking budgets -- I mean all of those things are commodities that are required to create a project.

Artists, managers, production managers, directors, and producers are all really good at managing those projects if they understand what the commodity is. Years ago we used to complain about disk space: “You guys are using too much disk space.” And our production department would say, “Well, give me a tool to help me manage my disk space, and then I can clean it up. Don’t just tell me it's too much.”

One of the initiatives that we have incorporated in recent years is in the area of data analytics. We re-architected our software and we decided we would re-instrument everything, so we started collecting artifacts about rendering and usage. Every night we run every digital asset that has been created through our rendering pipeline, and we also collect analytics about it. We now collect 1.2 billion artifacts a night.

And we correlate that information to a specific asset, such as a character, basket, or chair -- whatever it is that I am rendering -- as well as where it’s located, which shot it’s in, which sequence it’s in, and which characters are connected to it. So, when an artist wants to render a particular shot, we know what digital resources are required to be able to do that.
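To make the idea concrete, here is a minimal sketch of rolling nightly render artifacts up by asset; the record fields and numbers are hypothetical, not DreamWorks' actual schema:

```python
from collections import defaultdict

# Hypothetical nightly render artifacts: one record per asset per shot
artifacts = [
    {"asset": "watch", "shot": "sq10_s040", "sequence": "sq10",
     "mem_gb": 48.0, "render_hours": 9.5},
    {"asset": "watch", "shot": "sq12_s007", "sequence": "sq12",
     "mem_gb": 51.0, "render_hours": 10.2},
    {"asset": "chair", "shot": "sq10_s040", "sequence": "sq10",
     "mem_gb": 1.2, "render_hours": 0.3},
]

# Roll artifacts up per asset, across every shot the asset appears in
by_asset = defaultdict(lambda: {"shots": set(), "mem_gb": 0.0, "render_hours": 0.0})
for a in artifacts:
    agg = by_asset[a["asset"]]
    agg["shots"].add(a["shot"])
    agg["mem_gb"] += a["mem_gb"]
    agg["render_hours"] += a["render_hours"]

# The asset consuming the most render time across the whole show
heaviest = max(by_asset, key=lambda name: by_asset[name]["render_hours"])
print(heaviest)  # -> watch
```

Aggregating per asset across every shot it appears in is what makes a single over-modeled asset stand out show-wide, rather than looking like unexplained spikes in individual shots.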

One of the things that’s wasteful of digital resources is either having a job that doesn't fit the allocation that you assign to it, or not knowing when a job is complete. Some of these rendering jobs and simulations will take hours and hours -- it could take 10 hours to run.

At what point is it stuck? At what point do you kill that job and restart it because something got wedged and it was a dependency? And you don't really know, you are just watching it run. Do I pull the plug now? Is it two minutes away from finishing, or is it never going to finish?

Just the facts

Before, an artist would go in every night and conduct a test render. And they would say, “I think this is going to take this much memory, and I think it's going to take this long.” And then we would add a margin of error, because people are not great judges, as opposed to a computer. This is where we talk about going from feeling to facts.

So now we don't have artists do that anymore, because we are collecting all that information every night. We have machine learning that then goes in and determines the requirements. Even though a certain shot has never been run before, it is often very similar to a previous shot, and so we can predict what it is going to need to run.
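In spirit, that prediction can be sketched as a nearest-neighbor lookup over past shots. This is only an illustrative sketch -- the shot features, numbers, and kill-threshold margin are all invented, not DreamWorks' actual model:

```python
import math

# Historical shots: hypothetical feature vectors (asset count, hair curves in
# millions, light count) paired with the resources they actually consumed
history = [
    {"features": (120, 4.0, 30), "mem_gb": 64, "hours": 8.0},
    {"features": (40, 0.5, 12), "mem_gb": 16, "hours": 2.0},
    {"features": (130, 4.5, 28), "mem_gb": 70, "hours": 9.0},
]

def predict(features, k=2, margin=1.25):
    """Predict a new shot's needs from its k most similar past shots.

    Also returns a kill threshold: if the render runs past the predicted
    hours times `margin`, it has probably wedged and can be killed.
    """
    nearest = sorted(history, key=lambda h: math.dist(h["features"], features))[:k]
    mem = sum(h["mem_gb"] for h in nearest) / k
    hours = sum(h["hours"] for h in nearest) / k
    return {"mem_gb": mem, "hours": hours, "kill_after_hours": hours * margin}

print(predict((125, 4.2, 29)))
```

The `kill_after_hours` value is one way to "kill it with confidence": if a render runs well past what similar shots needed, restarting it is cheaper than waiting on a job that may never finish.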

Now, if a job is stuck, we can kill it with confidence. By doing that machine learning and taking the guesswork out of the allocation of resources, we were able to save 15 percent of our render time, which is huge.

I recently listened to a gentleman talk about what a difference a 1 percent improvement would make. So 15 percent is huge: that's 15 percent less money you have to spend. It's 15 percent faster time for a director to be able to see something. It's 15 percent more iterations. So that was really huge for us.

Gardner: It sounds like you are in the digital manufacturing equivalent of working smarter and not harder. With more intelligence, you can free up the art, because you have nailed the science when it comes to creating something.

Creative intelligence at the edge

Wike: It's interesting; we talk about intelligence at the edge and the Internet of Things (IoT), and that sort of thing. In my world, the edge is actually an artist. If we can take intelligence about their work, the computational requirements that they have, and if we can push that data -- that intelligence -- to an artist, then they are actually really, really good at managing their own work.

It's only a problem when they don't have any idea that six months from now it's going to cause a huge increase in memory usage or render time. When they don't know that, it's hard for them to be able to self-manage. But now we have artists who can access Tableau reports every day and see exactly the memory usage or the compute usage of any of the assets they’ve created, and they can correct it immediately.

Megamind, a film DreamWorks Animation released several years ago, was made before we had the data analytics in place, and the studio encountered massive rendering spikes on certain shots. We really didn't understand why.

After the movie was complete, when we could go back and get printouts of logs to analyze, we determined that these peaks in rendering resources were caused by the main character’s watch. Whenever the watch was in a frame, the render times went up. We looked at the models, and well-intended artists had modeled every gear in that watch; it was just a huge, heavy asset to render.

It was too late to do anything about it then. But now, if an artist were to create that watch today, they would quickly find out that they had really over-modeled it. We would then go in and reduce that asset down, because it's really not a key element of the story. And they can do that today, which is really great.

Gardner: I am a big fan of animated films, and I am so happy that my kids take me to see them because I enjoy them as much as they do. When you mention an artist at the edge, it seems to me it’s more like an army at the edge, because I wait through the end of the movie, and I look at the credits scroll -- hundreds and hundreds of people at work putting this together.

So you are dealing with not just one artist making a decision, you have an army of people. It's astounding that you can bring this level of data-driven efficiency to it.

Movie-making’s mobile workforce

Wike: It becomes so much more important, too, as we become a more mobile workforce. 

Now it becomes imperative to be able to obtain information about what those artists are doing so that they can collaborate. So much information is available now; if you capture it, you can understand so much more about our creative process, and drive efficiency and value into the entire business.

Gardner: Before we close out, maybe a look into the crystal ball. With things like auto-scaling and composable infrastructure, where do we go next with computing infrastructure? As you say, it's now all these great screens in people's hands, handling high-definition, all the networks are able to deliver that, clearly almost an unlimited opportunity to bring entertainment to people. What can you now do with the flexible, efficient, optimized infrastructure? What should we expect?

Wike: There's an explosion in content and explosion in delivery platforms. We are exploring all kinds of different mediums. I mean, there’s really no limit to where and how one can create great imagery. The ability to do that, the ability to not say “No” to any project that comes along is going to be a great asset.

We always say that we don't know in the future how audiences are going to consume our content. We just know that we want to be able to supply that content and ensure that it’s the highest quality that we can deliver to audiences worldwide.

Gardner: It sounds like you feel confident that the infrastructure you have in place is going to be able to accommodate whatever those demands are. The art and the economics are the variables, but the infrastructure is not.

Wike: Having a software-defined environment is essential. I came from the software side; I started as a programmer, so I am coming back into my element. I really believe that now that you can compose infrastructure, you can change things with software without having to have people go in and rewire or re-stack, but instead change on-demand. And with machine learning, we’re able to learn what those demands are.

I want the computers to actually optimize and compose themselves so that I can rest knowing that my infrastructure is changing, scaling, and flexing in order to meet the demands of whatever we throw at it.

Tuesday, August 1, 2017

Enterprises look for partners to make the most of Microsoft Azure Stack apps

The next BriefingsDirect Voice of the Customer hybrid cloud advancements discussion explores the application development and platform-as-a-service (PaaS) benefits from Microsoft Azure Stack.

We’ll now learn how ecosystems of solutions partners are teaming to provide specific vertical industries with applications and services that target private cloud deployments.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us explore the latest in successful cloud-based applications development and deployment is our panel, Martin van den Berg, Vice President and Cloud Evangelist at Sogeti USA, based in Cleveland, and Ken Won, Director of Cloud Solutions Marketing at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Martin, what are some of the trends that are driving the adoption of hybrid cloud applications specifically around the Azure Stack platform?

Van den Berg: What our clients are dealing with on a daily basis is an ever-expanding data center; they see ever-expanding private clouds in their data centers. They are trying to get into the hybrid cloud space to reap all the benefits from both an agility and compute perspective.

They are trying to get out of the data center space, to see how the ever-growing demand can leverage the cloud. What we see is that Azure Stack will bridge the gap between the cloud that they have on-premises, and the public cloud that they want to leverage -- and basically integrate the two in a true hybrid cloud scenario.

Gardner: What sorts of applications are your clients calling for in these clouds? Are these cloud-native apps, greenfield apps? What are they hoping to do first and foremost when they have that hybrid cloud capability?

Van den Berg: We see a couple of different streams there. One is the native-cloud development. More and more of our clients are going into cloud-native development. We recently brought out a white paper wherein we see that 30 percent of applications being built today are cloud-native already. We expect that trend to grow to more than 60 percent over the next three years for new applications.

The issue that some of our clients have has to do with the data being consumed in these applications. Either due to compliance issues, or because their information security divisions are not comfortable with it, they don’t want to put this data in the public cloud. Azure Stack bridges that gap as well.
 
Microsoft Azure Stack can bridge the gap between the on-premises data center and what they do in the cloud. They can leverage the whole Azure public cloud PaaS while still having their data on-premises in their own data center. That's a unique capability.

On the other hand, what we also see is that some of our clients are looking at Azure Stack as a bridge to gap the infrastructure-as-a-service (IaaS) space. Even in that space, where clients are not willing to expand their own data center footprint, they can use Azure Stack as a means to seamlessly go to the Azure public IaaS cloud.

Gardner: Ken, does this jibe with what you are seeing at HPE, that people are starting to creatively leverage hybrid models? For example, are they putting apps in one type of cloud and data in another, and then also using their data center and expanding capacity via public cloud means?

Won: We see a lot of it. The customers are interested in using both private clouds and public clouds. In fact, many of the customers we talk to use multiple private clouds and multiple public clouds. They want to figure out how they can use these together -- rather than as separate, siloed environments. The great thing about Azure Stack is the compatibility between what’s available through Microsoft Azure public cloud and what can be run in their own data centers.

The customer concerns are data privacy, data sovereignty, and security. In some cases, there are concerns about application performance. In all these cases, it's a great situation to be able to run part or all of the application on-premises, or on an Azure Stack environment, and have some sort of direct connectivity to a public cloud like Microsoft Azure.

Because you can get full API compatibility, the applications that are developed in the Azure public cloud can be deployed in a private cloud -- with no change to the application at all.

Gardner: Martin, are there specific vertical industries gearing up for this more than others? What are the low-lying fruit in terms of types of apps?

Hybrid healthcare files

Van den Berg: I would say that hybrid cloud is of interest across the board, but I can name a couple of examples of industries where we truly see a business case for Azure Stack.

One of them is a client of ours in the healthcare industry. They wanted to standardize on the Microsoft Azure platform. One of the things that they were trying to do is deal with very large files, such as magnetic resonance imaging (MRI) files. What they found is that in their environment such large files just do not work from a latency and bandwidth perspective in a cloud.

With Microsoft Azure Stack, they can keep these larger files on-premises, very close to where they do their job, and they can still leverage the entire platform and still do analytics from a cloud perspective, because that doesn’t require the bandwidth to interact with things right away. So this is a perfect example where Azure Stack bridges the gap between on-premises and cloud requirements while leveraging the entire platform.

Gardner: What are some of the challenges that these organizations are having as they move to this model? I assume that it's a little easier said than done. What's holding people back when it comes to taking full advantage of hybrid models such as Azure Stack?

Van den Berg: The level of cloud adoption is not really yet where it should be. A lot of our clients have cloud strategies that they are implementing, but they don't have a lot of expertise yet on using the power that the platform brings.

Some of the basic challenges that we need to solve with clients are that they are still dealing with just going to Microsoft Azure cloud and the public cloud services. Azure Stack simplifies that because they now have the cloud on-premises. With that, it’s going to be easier for them to spin-up workload environments and try this all in a secure environment within their own walls, their own data centers.

Won: We see a similar thing with our client base as customers look to adopt hybrid IT environments, a mix of private and public clouds. Some of the challenges they have include how to determine which workload should go where. Should a specific workload go in a private cloud, or should another workload go in a public cloud?

We also see some challenges around processes, organizational process and business process. How do you facilitate and manage an environment that has both private and public clouds? How do you put the business processes in place to ensure that they are being used in the proper way? With Azure Stack -- because of that full compatibility with Azure -- it simplifies the ability to move applications across different environments.
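The placement question raised above can be thought of as a policy function. The sketch below is a toy illustration -- the criteria, thresholds, and labels are invented for this example, not an HPE or Sogeti tool:

```python
def place_workload(data_sensitivity, latency_ms_required, burst_factor):
    """Toy placement policy for a hybrid Azure / Azure Stack environment.

    data_sensitivity: "public", "internal", or "regulated"
    latency_ms_required: maximum tolerable round-trip to the data
    burst_factor: peak load divided by steady-state load
    """
    if data_sensitivity == "regulated":
        return "private (Azure Stack)"   # compliance keeps the data on-premises
    if latency_ms_required < 10:
        return "private (Azure Stack)"   # too latency-sensitive for public cloud
    if burst_factor > 3:
        return "public (Azure)"          # spiky load is cheapest to burst publicly
    return "either"                      # API-compatible, so place by cost

print(place_workload("regulated", 50, 1.0))  # -> private (Azure Stack)
```

The final "either" branch is what full API compatibility buys: when neither compliance nor latency nor burstiness forces a side, the same application can land in whichever environment is cheaper.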

Gardner: Now that we know there are challenges, and that we are not seeing the expected adoption rate, how are organizations like Sogeti working in collaboration with HPE to give a boost to hybrid cloud adoption?

Strategic, secure, scalable cloud migration 

Van den Berg: As the Cloud Evangelist with Sogeti, for the past couple of years I have been telling my clients that they don’t need a data center. The truth is, they probably still need some form of on-premises infrastructure. But the future is in the clouds, from a scalability and agility perspective -- and with the hyperscale at which Microsoft is building out its Azure cloud capabilities, no enterprise client can keep up with that.

We try to help our clients define strategy, help them with governance -- how do they approach cloud and what workloads can they put where based on their internal regulations and compliance requirements, and then do migration projects.

We have a service offering called the Sogeti Cloud Assessment, where we go in and evaluate their application portfolio on their cloud readiness. At the end of this engagement, we start moving things right away. We have been really successful with many of our clients in starting to move workloads to the cloud.

Having Azure Stack will make that even easier. Now when a cloud assessment turns up issues with moving to the Microsoft Azure public cloud -- because of compliance or privacy issues, or just comfort (sometimes the information security departments just don't feel comfortable moving certain types of data to a public cloud setting) -- we can move those applications to the cloud and leverage its full power and scalability while keeping it within the walls of our clients’ data centers. That’s how we are trying to accelerate cloud adoption, and we truly feel that Azure Stack bridges that gap.

Gardner: Ken, same question, how are you and Sogeti working together to help foster more hybrid cloud adoption?

Won: The cloud market has been maturing and growing. In the past, it’s been somewhat complicated to implement private clouds. Sometimes these private clouds have been incompatible with each other, and with the public clouds.

In the Azure Stack area, now we have almost an appliance-like experience where we have systems that we build in our factories that we pre-configure, pretest, and get them into the customers’ environment so that they can quickly get their private cloud up and running. We can help them with the implementation, set it up so that Sogeti can help with the cloud-native applications work.
 
With Sogeti and HPE working together, we make it much simpler for companies to adopt the hybrid cloud models and to quickly see the benefit of moving into a hybrid environment.

Van den Berg: In talking to many of our clients, when we see the adoption of private cloud in their organizations -- if they are really honest -- it doesn't go very far past just virtualization. They truly haven't leveraged what cloud could bring, not even in a private cloud setting.

So talking about hybrid cloud, it is very hard for them to leverage the power of hybrid clouds when their own private cloud is just virtualization. Azure Stack can help them to have a true private cloud within the walls of their own data centers and so then also leverage everything that Microsoft Azure public cloud has to offer.

Won: I agree. When they talk about a private cloud, they are really talking about virtual machines, or virtualization. But because the Microsoft Azure Stack solution provides built-in services that are fully compatible with what's available through Microsoft Azure public cloud, it truly provides the full cloud experience. These are the types of services that go beyond just virtualization running within the customers’ data center.

Keep IT simple

I think Azure Stack adoption will be a huge boost to organizations looking to implement private clouds in their data centers.

Gardner: Of course your typical end-user worker is interested primarily in their apps; they don’t really care where those apps are running. But when it comes to new application development, rapid application development (RAD), these are some of the pressing issues that most businesses tell us concern them.

So how does RAD, along with some DevOps benefits, play into this, Martin? How are the development people going to help usher in cloud and hybrid cloud models because it helps them satisfy the needs of the end-users in terms of rapid application updates and development?

Van den Berg: This is also where we are talking about the difference between virtualization, private cloud, hybrid clouds, and definitely cloud services. For the application development staff, they still run in the traditional model: they still run into issues in provisioning their development environments and sometimes test environments.

A lot of cloud-native application development projects are much easier because you can spin-up environments on the go. What Azure Stack is going to help with is having that environment within the client’s data center; it’s going to help the developers to spin up their own resources.

There is going to be on-demand orchestration and provisioning, which is truly beneficial to application development -- and it's really beneficial to the whole DevOps suite.

We need to integrate business, development, and IT operations to deliver value to our clients. If developers are waiting multiple weeks for a development or test environment to spin up -- that’s an issue our clients are still dealing with today. That’s where Azure Stack is going to bridge the gap, too.

Won: There are a couple of things that we see happening that will make developers much more productive and able to bring new applications or updates quicker than ever before. One is the ability to get access to these services very, very quickly. Instead of going to the IT department and asking them to spin up services, they will be able to access these services on their own.

The other big thing that Azure Stack offers is compatibility between private and public cloud environments. For the first time, the developer doesn't have to worry about what the underlying environment is going to be. They don't have to decide up front whether an application will run in a private cloud or a public cloud, or use a different set of tools depending on where it ends up.

Now that we have compatibility between the private cloud and the public cloud, the developer can just focus on writing code, focus on the functionality of the application they are developing, knowing that that application now can easily be deployed into a private cloud or a public cloud depending on the business situation, the security requirements, and compliance requirements.
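Won's compatibility point can be illustrated with a toy deployment routine: the application artifact is identical, and only the target management endpoint differs. The endpoint URLs and the `deploy` helper below are invented placeholders for illustration, not real Azure or Azure Stack endpoints.

```python
# Toy illustration of "write once, deploy anywhere": the same artifact
# goes to public cloud or an on-premises stack; only the management
# endpoint changes. Endpoints are hypothetical placeholders.
ENDPOINTS = {
    "public":  "https://management.example-public.cloud",
    "private": "https://management.azurestack.example.internal",
}

def deploy(artifact: str, target: str) -> str:
    """Return a deployment descriptor; the artifact is untouched by target."""
    endpoint = ENDPOINTS[target]
    return f"deploy {artifact} -> {endpoint}"

# Same code, same artifact, two destinations:
print(deploy("shop-app:1.4", "public"))
print(deploy("shop-app:1.4", "private"))
```

The design choice being sketched is that placement becomes a late-bound parameter driven by business, security, or compliance requirements, rather than something baked into the application.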

So it’s really about helping the developers become more effective and helping them focus more on code development and applications rather than having them worry about the infrastructure, or waiting for infrastructure to come from the IT department.


Gardner: Martin, for those organizations interested in this that want to get on a fast track, how does an organization like Sogeti, working in collaboration with HPE, help them accelerate adoption?

Van den Berg: This is where we partner heavily with HPE to bring the best solutions to our clients. We have all kinds of proofs of concept, we have accelerators, and one of the things we have already talked about is getting developers up to speed faster. We can truly leverage those accelerators and help our clients adopt cloud, and adopt all the services that are available on the hybrid platform.

We have all heard the stories about standardizing on microservices, on a service fabric, or on serverless computing, but developers have not had access to this until now, and IT departments have been slow to push it out to the developers.

The accelerators that we have, the approaches that we have, and the proofs of concept that we can do with our client -- together with HPE --  are going to accelerate cloud adoption with our clientele. 

Gardner: Any specific examples, some specific vertical industry use-cases where this really demonstrates the power of the true hybrid model?

When the ship comes in

Won: I can share a couple of examples of the types of companies that we are working with in the hybrid area, and what places that we see typical customers using Azure Stack.

People want to implement disconnected applications or edge applications. These are situations where you may have a data center or an environment running an application that you may either want to run in a disconnected fashion or run to do some local processing, and then move that data to the central data center.

One example of this is the cruise ship industry. All large cruise ships essentially have data centers running the ship, supporting the thousands of customers on board. What the cruise line operators want is to put an application on their many ships and run the same application on all of them. They want to be able to disconnect from the central data center while the ship is out at sea and do a lot of processing and analytics in the ship's own data center. Then, when the ship comes into port and reconnects to the central data center, it sends only the results of the analysis back.

This is a great example of an application that can be developed once and deployed in many different environments -- and you can do that with Azure Stack. It's ideal for running that same application in multiple environments, in either disconnected or connected situations.
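The cruise ship pattern can be sketched in a few lines: raw readings are processed locally while disconnected, and only a small aggregate crosses the link when connectivity returns. The `EdgeNode` class and its methods are illustrative assumptions, not any vendor's API.

```python
# Sketch of the disconnected-edge pattern described above: the ship's
# data center processes raw readings locally while at sea, and only
# the aggregated result is sent ashore once connectivity returns.
class EdgeNode:
    def __init__(self):
        self.raw = []          # raw data never leaves the ship

    def ingest(self, reading: float):
        self.raw.append(reading)

    def summarize(self) -> dict:
        """Local analytics: reduce the raw readings to a small result."""
        return {"count": len(self.raw), "mean": sum(self.raw) / len(self.raw)}

ship = EdgeNode()
for r in [21.0, 23.0, 22.0]:
    ship.ingest(r)

# In port: only the summary crosses the link to the central data center.
print(ship.summarize())
```

The payoff is bandwidth and autonomy: the same application logic runs everywhere, but the wide-area link carries a summary instead of the raw stream.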

Van den Berg: In the financial services industry, we know they are heavily regulated. We need to make sure that they are always in compliance.

So one of the things we did in the financial services industry uses one of our accelerators, a tool called Sogeti OneShare. It's a portal solution on top of Microsoft Azure that helps with orchestration and with the whole DevOps concept. We were able to have the edge node be Azure Stack -- building applications, keeping some of the data within the data center on the Azure Stack appliance, but still leveraging the power of the cloud and all the analytics performance available there.

Van den Berg: In talking to many of our clients, when we see the adoption of private cloud in their organizations -- if they are really honest -- it doesn't go very far past just virtualization. They truly haven't leveraged what cloud could bring, not even in a private cloud setting.

So, talking about hybrid cloud, it is very hard for them to leverage the power of hybrid clouds when their own private cloud is just virtualization. Azure Stack can help them have a true private cloud within the walls of their own data centers, and then also leverage everything that the Microsoft Azure public cloud has to offer. We just did a project in this space where we were able to deliver functionality to the business in just eight weeks from the start of the project. They had never seen that before -- a project that lasts just eight weeks and truly delivers business value. That's the direction we should be taking. That's what DevOps is supposed to deliver -- faster value to the business, leveraging the power of clouds.

Gardner: Perhaps we could now help organizations understand how to prepare from a people, process, and technology perspective to be able to best leverage hybrid cloud models like Microsoft Azure Stack.

Martin, what do you suggest organizations do now in order to be in the best position to make this successful when they adopt?

Be prepared

Van den Berg: Make sure that the cloud strategy and governance are in place. That's where this should always start.

Then, start training developers, and make sure that the IT department is the broker of cloud services. Traditionally, the IT department has been the broker for everything happening on-premises within the data center. In the cloud space, this doesn't always happen; because it is so easy to spin up things, sometimes the line of business deploys on its own.

We try to enable the IT departments and operators at our clients to be the broker of cloud services and to help with the adoption of Microsoft Azure cloud and Azure Stack. That will help bridge the gap between the clouds and the on-premises data centers.

Gardner: Ken, how should organizations get ready to be in the best position to take advantage of this successfully?

Mapping the way

Won: As IT organizations look at this transformation to hybrid IT, one of the most important things is to have a strong connection to the line of business and to the business goals, and to be able to map those goals to strategic IT priorities.

Once you have done this mapping, the IT department can look at these goals and determine which projects should be implemented and how they should be implemented. In some cases, they should be implemented in private clouds, in some cases public clouds, and in some cases across both private and public cloud.

The task then changes to understanding the workloads, the characterization of the workloads, and looking at things such as performance, security, compliance, risk, and determining the best place for that workload.
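The workload-characterization step Won outlines can be sketched as a simple placement rule: score each workload's attributes and map it to a venue. The attribute names and the rules themselves are invented for illustration, not a real placement policy.

```python
# Rough sketch of the workload-placement step described above: look at
# a workload's characteristics (security, compliance, demand pattern)
# and decide where it should run. Rules here are purely illustrative.
def place(workload: dict) -> str:
    """Map workload characteristics to a deployment venue."""
    if workload.get("compliance_bound") or workload.get("data_sensitivity") == "high":
        return "private"   # regulated or sensitive data stays on-premises
    if workload.get("bursty"):
        return "public"    # elastic demand suits public cloud capacity
    return "either"        # no constraint: place on cost or convenience

print(place({"compliance_bound": True}))  # -> private
```

In practice this decision table would be far richer (performance, latency, risk, cost), but the structure is the same: characteristics in, venue out, applied workload by workload.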

Then, it’s finding the right platform to enable developers to be as successful and as impactful as possible, because we know ultimately the big game changer here is enabling the developers to be much more productive, to bring applications out much faster than we have ever seen in the past.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

