Wednesday, March 29, 2017

TasmaNet ups its cloud game to deliver a regional digital services provider solution

The next BriefingsDirect Voice of the Customer cloud adoption patterns discussion explores how integration of the latest cloud tools and methods helps smooth out the difficult task of creating and maintaining cloud-infrastructure services contracts.

The results are more flexible digital services that both save cloud consumers money and provide the proper service levels and performance characteristics for each unique enterprise and small business.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Stay with us now as we hear from a Tasmanian digital services provider, TasmaNet, about their solution-level approach to cloud services attainment, especially from mid-market enterprises. To share how proper cloud procurement leads to new digital business innovations, we're joined by Joel Harris, Managing Director of TasmaNet in Hobart, Tasmania. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let’s start at a high level, looking at the trends that are driving how cloud services are affecting how procurement is going to be done in 2017. What has changed, in your opinion, in how enterprises are reacting to and leveraging the cloud services nowadays? 

Harris: We're seeing a real shift in markets, particularly with small- and medium-sized businesses (SMBs), in their approach to and adoption of cloud services. More and more, there is an acceptance that it's okay to buy products off the Internet. We see it every day with personal cloud: iPhones, the App Store, and Google Play for buying movies. So, there is now the idea in the workplace that it's acceptable to procure business services online through cloud providers.

Because of the success of personal cloud with companies such as Apple, there's a carry-over in that there is an assumed equivalent success in the commercial sense, and unfortunately, that can cause some problems. What we're seeing is a willingness to start procuring from public, and also some private cloud as well, which is really good. What we're finding, though, is a lack of awareness about what it means for businesses to buy from a cloud provider.

Gardner: What is it that the people might have wrong? What is it that they've not seen in terms of where the real basis for value comes when you create a proper cloud relationship? 
Harris: Look at the way personal cloud is procured, a simple click, a simple install, and you have the application. If you don’t like it, you can delete it. 

When you come into a commercial environment, it's not that simple, although there can be a perception that it is. When you're looking at an application, the glossy picture may talk about functionality, business improvement, future savings, and things like that. But when it comes to implementing a cloud product or a cloud service in a business, the business needs to make sure that it has met its service levels, whether those come from internal or external business requirements, or from customers and markets.

But you also need to make sure that the service is married up with the skills of your workforce. Cloud services are really just a tool for a business to achieve an outcome. So, you're either arming someone in the workforce with the tool and skills to achieve an outcome, or you're going to use a service from a third party to achieve an outcome.

Because we're still very early in the days of cloud being adopted by SMBs, the amount of work being put into the marrying up of the capabilities of a product, or the imagined capabilities of a product, for future benefits to internal business processes and systems is clearly not as mature as we would like. Certainly, if you look into the marketplace, the availability of partners and skills to help companies with this is also lacking at the moment. 

Cloud Costs

Then comes the last part we talked about, which is removing or changing the application. At the moment, a lot of SMBs are still using traditional procurement. Maybe they want a white car. In cloud services there's always the ability to change the color, but it comes at a cost; traditionally there's a variation fee or similar charge.

SMBs are getting themselves in a bit of trouble when they say they would like a white car with four seats, and then, later on, find that they actually needed five seats and a utility. How do they go about changing that? 

The cost of change is something that sometimes gets forgotten in those scenarios. Our experience over the last two years has been that companies overlook the cost of change once they're under a cloud-services contract.
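
As a rough illustration of the point (all figures and the fee structure here are invented, not taken from any real TasmaNet contract), the cost of change can be made visible up front by modeling it alongside the subscription cost:

```python
# Hypothetical illustration: total contract cost including variation fees.
# All figures and fee names are invented for the example.

def total_contract_cost(monthly_fee, months, variations, variation_fee):
    """Base subscription cost plus a flat fee for each contract change."""
    return monthly_fee * months + variations * variation_fee

# A contract that looks cheap as quoted can cost more once changes land.
as_quoted = total_contract_cost(2000, 36, 0, 0)        # 72000
with_changes = total_contract_cost(2000, 36, 4, 5000)  # 92000
print(with_changes - as_quoted)  # the often-forgotten cost of change: 20000
```

Putting the variation fee into the comparison, rather than treating the quoted monthly price as the whole story, is the "five seats instead of four" lesson in numbers.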

Gardner: I've also heard you say, Joel, that cloud isn't for everyone. What do you mean by that? How would a company know whether cloud is the right fit for it or not?

Harris: It simply comes down to a real, deep understanding of your business. Coming back to the ability to link up service levels, it's the ability to have a clear view into the future of what a company needs to achieve its outcomes. If you can't answer those questions for your customer, or the customer can't answer the questions for you as a cloud provider, then I would advise you to take a step back and start a new process of understanding what it is the customer wants out of the cloud product.

Change later on costs money, and small businesses don't have the funds to keep paying a third party to change the implementation of what, in most cases, becomes a core piece of software in the organization.

Gardner: For the organizations that you work with that are exploring deeper relationships to private cloud, do you find that they're thinking of the future direction as well or thinking of the strategy that they’d like to go hybrid and ultimately perhaps more public cloud? Is that the common view for those that are ready for cloud now?

Harris: In the enterprise, yes. We're definitely seeing a huge push by organizations that understand how to divide applications between those suitable for private cloud and those suitable for public cloud.

As you come down into the SMB market, that line blurs a little bit. We have some companies that wish to put everything in the cloud because it’s easy and that’s the advice they were given. Or, you have people who think they have everything in the cloud, but it’s really a systems integrator that has now taken their servers, put them in a data center, and is managing them as more of a hosted, managed solution. 

Unfortunately, what we are seeing is that a lot of companies don’t know the difference between moving into the cloud and having a systems integrator manage their hardware for them in a data center where they don’t see it.

There's definitely a large appetite for moving to the as-a-service model in companies that have a C-suite or some level of senior management with ownership of business process. So, if there is a Chief Information Officer (CIO) or a Chief Technology Officer (CTO) or some sort of very senior Information Technology (IT) person that has a business focus on the use of technology, we're seeing a very strong review of what the company does and why and how things should be moved to either hybrid or 100 percent in either direction.

Gardner: So, clearly the choices you make around cloud affect the choices you make as a business; there really is a transformational aspect to this. Therefore, the contract, that decision document of how you proceed with your cloud relationship, is not just an IT document; it’s really a business document. Tell us why getting the contract right is so important.

Harris: It’s very, very important to involve all the areas of a business when going into a cloud services contract.

Ecosystems of Scale

Gardner: And it’s no longer really one relationship. That is to say that a contract isn’t often just between one party and another. As we're finding out, this is an ecosystem, a team sport, if you will. How does the contract incorporate the need for an ecosystem and how does TasmaNet help solve that problem of relationship among multiple parties?

Harris: Traditionally, if we look at the procurement department of a company, the procurement department would draft a tender, negotiate a contract between the supplier and the company, and then services would begin to flow, or whatever product was purchased would be delivered. 

More and more, though, in the cloud services contract, the procurement department has little knowledge of the value of the information or the transaction that’s happening between the company and the supplier, and that can be quite dangerous. Even though cloud can be seen as a commodity item, the value of the services that come over the top is very much not a commodity item. It’s actually a high-value item that, in most cases, is something relevant to keeping the company operating.

What we found at TasmaNet was that a lot of the companies moving to cloud don’t have the tools to manage the contract. They're familiar with traditional procurement arrangements, but in managing a services contract or a cloud services contract, if we want to focus on what TasmaNet provides, you need to know a number of different aspects. 

We set out to create an ecosystem with all of the tools our customers require. We put in a portal so that the finance manager can look at the financial performance of the services: Does it meet budget expectations? Is it behaving correctly? Are we achieving the business outcomes for the dollars we said it was going to cost?

Then, on the other side, we have a different portal, more for the technology administrator, for ensuring that the system is performing within the service-level agreements (SLAs) that have been documented, either between the company and the service provider or between the IT department and the internal business units.

It’s important to understand there are probably going to be multiple service levels here, not only between the service provider and the customer, but also the customer and their internal customers. So, it’s important to make sure that they're managed all the way through. 
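
One way to picture that layering (the SLA names and percentages below are invented for illustration) is a simple consistency check: an internal SLA can never promise more than the provider SLA it ultimately depends on.

```python
# Hypothetical sketch of layered SLAs. All figures are illustrative.

PROVIDER_UPTIME = 99.9  # SLA between the service provider and the customer

internal_slas = {
    "finance-portal": 99.5,   # IT department -> finance business unit
    "order-system": 99.95,    # promises more uptime than the provider offers
}

def unsupportable(internal, provider_uptime):
    """Return internal SLAs that promise more than the provider SLA allows."""
    return [name for name, pct in internal.items() if pct > provider_uptime]

print(unsupportable(internal_slas, PROVIDER_UPTIME))  # ['order-system']
```

A check like this is trivial, but it captures why the chain of service levels has to be managed all the way through, not just at the provider boundary.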

We provide a platform so that people can monitor end to end, from the customers using the service all the way through to the financial manager on the other side.

Gardner: We've seen the importance of the contract. We understand that this is a complex transaction that can involve multiple players. But I think there is also another shift when we move from a traditional IT environment to a cloud environment and then ultimately to a hybrid cloud environment, and that’s around skills. What are you seeing that might be some dissonance between what was the skill set before and what we can expect the new skill set for cloud computing success to be?

Sea Change

Harris: We're seeing a huge change, and sometimes this change is very difficult for the people involved. With cloud services coming along, the nature of the tool is changing. A lot of people have traditionally been trained in a single skill set, such as storage or virtualization. Once you start to bring in cloud services, you're bundling a bunch of individual tools and infrastructure together to become one, and all of a sudden, that worker has a tool that is made up of an ecosystem of tools. Their understanding of those different tools, how they report on them, and the related elements all change.

We see a change from people doing to controlling. We might see a lot of planning to try to avoid events, rather than responding to them. It really does change the ecosystem in your workforce, and it’s probably one of the biggest areas where we see risk arise when people are moving to a cloud-services contract.

Gardner: Is there something also in the realm of digital services, rather than just technology, that larger category of digital services, business-focused outcomes? Is that another thing that we need to take into consideration as organizations are thinking about the right way to transform to be of, for, and by the cloud?
Harris: It comes back to a business understanding. It's being able to put a circle around something that’s a process or something we could buy from someone else. We know how important it is to the company, we know what it costs the company, and we know the service levels needed around that particular function. Therefore, we can put it out to the market to evaluate. Should we be looking to buy this as a digital service, should we be looking to outsource the process, or should we be looking to have it internally on our own infrastructure and continue running it?

Those questions, and the fact-finding that goes into them, are among the most important things I encourage a customer looking at cloud services to spend a lot of time on. It's actually one of the key reasons why we have such a strong partnership with Hewlett Packard Enterprise (HPE). The hardware and infrastructure are strong, but the skill sets and programs we can access to work with our customers, pulling out information and putting it into things like enterprise nets to understand what the landscape looks like inside a customer, are just as important as the infrastructure itself.

Gardner: So, the customer needs to know themselves and see how they fit into these new patterns of business, but as you are a technologist, you also have to have a great deal of visibility into what's going on within your systems, whether they're on your premises, or within a public-private cloud continuum of some kind. Tell me about the TasmaNet approach and how you're using HPE products and solutions to gain that visibility to know yourself even as you are transforming.

Harris: Sure. A couple of the functions that we use with HPE: they have a very good [cloud workload suitability] capability set called HPE Aura, with which they can sit down with us and work through the total cost of ownership for an organization. That's not just at an IT level; it covers almost everything, working with the accounting team to look at the total cost, from electricity through to people resources and third-party contractors and construction teams. That gives us a very good baseline understanding of how much it costs today, which is really important for people to understand.
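
At its simplest, a total-cost-of-ownership baseline of the kind Harris describes is a sum over cost categories; the categories and figures below are invented for illustration, and the actual assessment tooling is of course far richer.

```python
# Hypothetical annual TCO baseline. Categories and figures are
# illustrative only; a real assessment covers far more than this.

annual_costs = {
    "electricity": 18000,
    "hardware_depreciation": 45000,
    "staff_time": 120000,
    "third_party_contractors": 30000,
    "facilities": 22000,
}

baseline = sum(annual_costs.values())
print(baseline)  # 235000 -- "how much it costs today"

# With a baseline in hand, a cloud quote can be compared like for like:
cloud_quote = 210000
print(cloud_quote < baseline)  # True
```

The point of the exercise is the baseline itself: without it, a cloud quote has nothing meaningful to be compared against.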

Then, we also have other capabilities. We work with HPE to model what-if scenarios with the data. It's very important to have that capability when working with a third party on understanding whether or not you should move to cloud.

Gardner: Your comments, Joel, bring me back to a better understanding of why a static cloud services contract really might be a shackle on your ability to innovate. How do you recognize that you need to know what you don't know going into cloud, and therefore put in place the ability to react and iterate on a short-term basis? What kind of contract allows for that dynamic ability to change? How do you begin to think about a contract that is not static?

Harris: We don't know the answer yet. We're doing a lot of work with our current customers and with HPE to look at that. One of the early options we're looking at is that, when we create a master services agreement with a company, even for something that may be considered a commodity, we ensure that we build in a solid plan around innovation, a risk-management framework, and continuous service improvement. Then there's a conduit for business information to flow between the two parties, which can feed into the use of the services we provide.

I think we still have a long way to go, because there's a certain maturity required. We're essentially becoming a part of another company, and that’s difficult for people to swallow, even though they accept using a cloud services contract. We're essentially saying, "Can we have a key to your data center, or the physical front door of your office?"

If that’s disconcerting for someone, well, it should be equally disconcerting that they're moving to cloud, because we need access to those physical environments, the people face-to-face, the business plan, the innovation plan, and to how they manage risk in order to ensure that there is a successful adoption of cloud not just today, but also going forward.

Gardner: Clearly, the destiny of you and your clients is tied closely together. You need to make them successful, they need to let you show them the tools and the new flexible nature and you need to then rely on HPE to give you the means to create those dashboards and have that visibility. It really is a different kind of relationship, co-dependence, you might say.

Harris: The strength that TasmaNet will have going forward is the fact that we're operating under a decentralized model. We work with HPE so that we can have a workforce on the ground, closer to the customer. The model of having all of your cloud services in one location, a thousand kilometers away from the customer, while technically capable, is not, we believe, the right mix in client-supplier relationships. We need to make sure that there are physically people on the ground to work hand-in-hand with business management and others to ensure a successful outcome.

That’s one of the strong key parts to the relationship between HPE and TasmaNet. TasmaNet is now a certified services provider with HPE, which lets us use their workforce anywhere around Australia and work with companies that want to utilize TasmaNet services.

Gardner: Help our readers and listeners understand your regional reach: you're primarily in Tasmania, but you also operate across Australia, and you have designs and plans for an even larger expansion. Tell us about your roadmap.

No Net is an Island - Tasmania and Beyond

Harris: Over the last few years, we've been spending time gathering information from a couple of early contracts to understand the relationship between a cloud provider and a customer. In the last six months, we put that into a product that we call TasmaNet Core, which is our new system for delivering digital services.

During the next 18 months we are working with some large contracts that we have won down here in Tasmania, having just signed one for the state government. We certainly have a number of opportunities and pathways to start deploying services and working with the state government on how cloud can deliver better business outcomes for them. We need to make sure we really understand and document clearly how we achieve success here in Tasmania.

Then, our plan is, as a company, to push this out to the national level. There are a lot of regional places throughout Australia that require cloud services, and more and more companies like TasmaNet will move into those regional areas. We think it’s important that they aren’t forgotten and we also think that for any business that can be developed in Tasmania and operate successfully, there is no reason why it can’t be replicated to regional areas around Asia-Pacific as required.

Gardner: Joel, let's step back a moment and look at how to show, rather than tell, what we mean, in the new era of cloud, by proper cloud adoption. Do you have any examples, either named or generic, where we can look at how this unfolded and what the business benefits have been when it's done well?
Harris: One of our customers moved into a cloud services environment about three years ago, which was very successful for the company. But we found that some of the contracts with their software services, while they enabled the move to a cloud provider, added a level of complexity that made the platform very difficult to manage on an ongoing basis.

Over a number of years, we worked with them to remove that key application from the cloud environment. It’s really important that, as a cloud provider, we understand what’s right for the customer. At the end of the day, if there's something that’s not working for the customer, we must work with them to get results.

It worked out successfully, and we have a very strong relationship with the company, a local operator called TT-Line, which runs ferry vessels between Tasmania and mainland Australia. Because of their platform, we had to find the right mix. That's really important, and I know HPE uses "the right mix" as a catchphrase.

This is a real-world example of where it’s important to find the right mix between putting your workloads in the appropriate place. It has to work both ways. It’s easy to come in to a cloud provider. We need to make sure it’s also easy to step back out as well, if it doesn’t work.

Now, we're working with that company to deeply understand the rest of the business: which workloads can come out of TasmaNet, and which need to move back internally or to an application-specific hosting environment.

Gardner: Before we close out, Joel, I'd like to look a bit to the future. We spoke earlier about how private cloud and adjusting your business appropriately to the hosting models that we’ve described is a huge step, but of course, the continuum is beyond that. It goes to hybrid. There are public cloud options, data placement, and privacy concerns that people are adjusting to in terms of location of data, jurisdictions, and so forth. Tell me about where you see it going and how an organization like yours adjusts to companies as they start to further explore that hybrid-cloud continuum?

Hybrid Offspring

Harris: Going forward, the network will probably play one of the biggest roles in cloud services over the coming 10 years. More and more, we're seeing software-defined network suppliers come into the marketplace. In Australia, we have a large data-center operator, NEXTDC, which started up its own network to connect all of its data centers. We have Megaport, which is 100 percent software-defined, where you can buy capacity for as little as one hour or on a long-term basis. As these types of networks become common, they increasingly enable fluid movement of the services on top.

When we start to cross over two of the other really big things happening, which are the Internet of Things (IoT) and 5G, you have, all of a sudden, this connectivity that means data services can be delivered anywhere and that means cloud services can be delivered anywhere.

More and more, you're going to see the collection of data lakes, the collection of information even by small businesses that understand that they want to keep all the information, and analyze it. As they go to cloud service providers, they will demand these data services there, too, and the analysis capabilities will become very, very powerful.

In the short term, the network is going to be the key enabler for things such as IoT, which will then flow on to support a distributed model for cloud providers over the next 10 years, whereas traditionally we've seen them centralized in key larger cities. That will change in the coming years, because there is just too much data to centralize as people start gathering all of this information.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Wednesday, March 22, 2017

Logicalis chief technologist defines the new ideology of hybrid IT

The next BriefingsDirect thought leader interview explores how digital disruption demands that businesses develop a new ideology of hybrid IT.

We'll hear how such trends as the Internet of Things (IoT), distributed IT, data sovereignty requirements, and pervasive security concerns are combining to challenge how IT operates. And we'll learn how IT organizations are shifting to become strategists and internal service providers, and how that supports adoption of hybrid IT. We will also delve into how converged and hyper-converged infrastructures (HCI) provide an on-ramp to hybrid cloud strategies and adoption. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To help us define a new ideology for hybrid IT, we're joined by Neil Thurston, Chief Technologist for the Hybrid IT Practice at Logicalis Group in the UK. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Why don’t we start at this notion of a new ideology? What’s wrong with the old ideology of IT?

Thurston: Good question. What we are facing now is what we've done for an awfully long time versus what the emerging large hyper-scale providers with cloud, for example, have been developing. 

The two clashing ideologies that we have are these: either we continue with the technologies we've been developing (and the skills and processes we've developed in-house) and push those out to the cloud, or we adopt the alternative ideology. Think of Microsoft Azure and the forthcoming Azure Stack, where those technologies are pulled from the cloud into our on-premises environments. The two opposing ideologies are: Do we push out, or do we pull in?

The technologies allow us to operate in a true hybrid environment. By that we mean not having isolated islands of innovation anymore. It's not just standing things up in hybrid hyper-scale environments, or clouds, where you have specific skills, resources, teams and tools to manage those things. Moving forward, we want to have consistency in operations, security, and automation. We want to have a single toolset or control plane that we can put across all of our workloads and data, regardless of where they happen to reside.
Gardner: One of the things I encounter, Neil, when I talk to Chief Information Officers (CIOs), is their concern that as we move to a hybrid environment, they're going to be left with the responsibility, but without the authority, to control those different elements. Is there some truth to that?

Thurston: I can certainly see where that viewpoint comes from. A lot of our own customers reflect that viewpoint. We're seeing a lot of organizations, where they may have dabbled and cherry-picked from service management and from practices such as ITIL. We're now seeing more pragmatic IT service management (ITSM) frameworks, such as IT4IT, coming to the fore. These are really more about pushing that responsibility level up the stack. 

You're right in that people are becoming more of a supply-chain manager than the actual manager of the hardware, facilities, and everything else within IT. There definitely is a shift toward that, but there are also frameworks coming into play that allow you to deal with that as well. 

Gardner: The notion of shadow IT becoming distributed IT was once a very dangerous and worrisome thing. Now, it has to be embraced and perhaps is positive. Why should we view it as positive?

Out of the shadow

Thurston: The term shadow IT is controversial. Within our organization, we prefer to say that the shadow IT users are the digital users of the business. You have traditional IT users, but you also have digital users. I don’t really think it’s a shadow IT thing; it's that they're a totally different use-case for service consumption. 

But you're right. They definitely need to be serviced by the organizations. They deserve to have the same level of services applied, the same governance, security, and everything else applied to them. 

Gardner: It seems that the new ideology of hybrid IT is about getting the right mix and keeping that mix of elements under some sort of control. Maybe it's simply on the basis of management, or an automation framework of some sort, but you allow that to evolve and see what happens. We don't know what this is going to be like in five years. 

Thurston: There are two pieces of the puzzle. There's the workload, the actual applications and services, and then there's the data. There is more importance placed on the data. Data is the new commodity, the new cash, in our industry. Data is the thing you want to protect. 

The actual workload and service consumption piece is the commodity piece that could be worked out. What you have to do moving forward is protect your data, but you can take more of a brokering approach to the actual workloads. If you can reach that abstraction, then you're fit-for-purpose and moving forward into the hybrid IT world.

Gardner: It’s almost like we're controlling the meta-processes over that abstraction without necessarily having full control of what goes on at those lower abstractions, but that might not be a bad thing. 

Thurston: I have a very quick use-case. A customer of ours for the last five years has been using Amazon Web Services (AWS), and they were getting the feeling they were getting tied into the platform. Their developers over the years had been using more and more of the platform services and they weren’t able to make all that code portable and take it elsewhere. 

This year, they made the transformation and they've decided to develop against Cloud Foundry, an open Platform as a Service (PaaS). They have instances of Cloud Foundry across Pivotal on AWS, also across IBM Bluemix, and across other cloud providers. So, they're now coding once -- and deploying anywhere for the compute workload side. Then, they have a separate data fabric that regulates the data underneath. There are emerging new architectures that help you to deal with this.
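
One common pattern behind that "code once, deploy anywhere" approach is to keep platform-specific details out of the application and read them from the environment, as the twelve-factor methodology recommends. This minimal sketch (the variable name and default value are invented for illustration) runs unchanged on any platform that injects the setting:

```python
# Minimal sketch of platform-neutral configuration: instead of calling a
# provider-specific SDK, read connection details from the environment.
# The variable name and default here are illustrative assumptions.
import os

def database_url():
    """Return the backing-service URL from the environment. The same code
    works on any Cloud Foundry instance, or anywhere else, as long as the
    platform sets DATABASE_URL."""
    return os.environ.get("DATABASE_URL", "postgres://localhost/dev")

print(database_url())
```

Keeping configuration out of the code is what makes the compute side a commodity that can be brokered across providers, while the data fabric is handled separately.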

Gardner: It's interesting that you just described an ecosystem approach. You're no longer seeing as many organizations that are supplier “XYZ” shops, where 80 or 90 percent of everything would be one brand name. You just described a highly heterogeneous environment. 

Thurston: People have used cloud services, and hyper-scale of cloud services, and have specific use-cases, typically the more temporary types of workloads. Even companies born in the cloud, such as Uber and Netflix, reach those inflection points, where actually going to on-premise was far cheaper. It made compliance to regulations far easier. People are slowly realizing, through what other people are doing -- and also from their own good or bad experiences -- that hybrid IT really is the way forward.

Gardner: And the good news is that if you do bring it back from the cloud or re-factor what you're doing on-premises, there are some fantastic new infrastructure technologies. We are talking about converged infrastructure, hyper-converged infrastructure, software-defined data center (SDDC). At recent HPE Discover events, we've seen more memory-driven computing, and we're seeing some interesting new powerful speeds and feeds along those lines. 

So, on the economics and the price-performance equation, the public cloud is good for certain things, but there's some great attraction to some of these new technologies on-premises. Is that the mix that you are trying to help your clients factor?
Thurston: Absolutely. We're pretty much in parallel with the way HPE approaches things, with the right mix. We see that in certain industries there are always going to be things like regulated data. Regulated data is really hard to control in a public-cloud space, where you have no real idea where things are. You can't easily order them physically. 

Having it on-premises provides you with a far easier route to regulation, and today's technologies, the hyper-converged platforms, for example, allow us to really condense the footprint. We don't need these massive data centers anymore.

We're working with customers where we have taken 10 or 12 racks' worth of legacy equipment and, with a new hyper-converged platform, put in less than two racks' worth of equipment. So, the actual operational footprint and facilities cost are much less. That makes a far more compelling argument for those types of use cases than using public cloud.

Gardner: Then you can mirror that small footprint data center into a geography, if you need it for compliance requirements, or you could mirror it for reasons of business continuity and backup and recovery. So, there are lots of very interesting choices. 

Neil, tell us a little bit about Logicalis. I want to make sure all of our listeners and readers understand who you are and how you fit into helping organizations make these very large strategic decisions.

Cloud-first is not cloud-only 

Thurston: Logicalis is essentially a digital business enabler. We take technologies across multiple areas and help our customers become digital-ready. We cover a whole breadth of technologies. 

I look at the hybrid IT practice, but we also have the more digital-focused parts of our business, such as collaboration and analytics. The hybrid IT side is where we're working with our customers through the pains that they have, through the decisions that they have to make, and very often board-level decisions are made where you have to have a "cloud-first" strategy.

It's unfortunate when that gets interpreted as "cloud-only." There is some process to go through for cloud readiness, because some applications are not going to be fit for the cloud. Some cannot be virtualized; most can, but there are always regulations. Certainly, in Europe at present there is a lot of fear, uncertainty, and doubt (FUD) in the market, and there is a lot of uncertainty around European Union General Data Protection Regulation (EU GDPR), for example, and overall data protection.

There are a lot of reasons why we have to take a bit more of a factored, measured approach to looking at where workloads and data are best placed moving forward, and the models that you want to operate in.

Gardner: I think HPE agrees with you. Their strategy is to put more emphasis on things like high performance computing (HPC), the workloads of which won't likely be virtualized, that won't work well in a public cloud, one-size-fits-all environment. It's also factoring in the importance of the edge, even thinking about putting the equivalent of a data center on the edge for demands around information for IoT, and analytics and data requirements there as well as the compute requirements.

What's the relationship between HPE and Logicalis? How do you operate as an alliance or as a partnership?

Thurston: We have a very strong partnership. We have a 15- or 16-year relationship with HPE in the UK. As everyone else did, we started out selling servers and storage, but we've taken the journey with HPE and with our customers. The great thing about HPE is that they've always managed to innovate and keep up with the curve, and that's really enabled us to work with our customers and decide what the right technologies are. Today, this allows us to work out the right mix of on-premise and off-premise equipment for our customers.

HPE is ahead of the curve in various technologies in our area, one of which is HPE Synergy. We're now talking with a lot of our customers about the next curve that's coming with infrastructure-as-code, and what the possible benefits and outcomes of enabling that technology will be.

The on-ramp to that is using hyper-converged technologies to virtualize all the workloads and make them portable, so that we can then abstract them and place them either within platform services or within cloud platforms, as our security policies dictate.
Gardner: Getting back to this ideology of hybrid IT, when you have disparate workloads and you're taking advantage of these benefits of platform choice, location, model and so forth, it seems that we're still confronted with that issue of having the responsibility without the authority. Is there an approach that HPE is taking with management, perhaps thinking about HPE OneView that is anticipating that need and maybe adding some value there?

Thurston: With the HPE toolsets, we're able to set things such as policies. Today, we're at Platform 2.5 really, and the inflection that takes us on to the third platform is the policy automation. This is one part that HPE OneView allows us to do across the board. 

It's policies on our storage resources, policies on our compute resources, and policies on non-technology items, such as quotas on public cloud. It enables us to leverage the software-defined infrastructure we have underneath to set the policies that define the operational windows we want our infrastructure to work in and the decisions it's allowed to make itself within those windows, and then just let it go. We really want to take IT from "high touch" to "low touch," which we can do today with policy, and potentially, in the future with infrastructure-as-code, to "no touch."
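The operational-window idea Thurston describes can be sketched in a few lines of code: express the limits automation may act within as declarative bounds, and escalate to a human only when a metric falls outside them. This is a hypothetical illustration, not HPE OneView's actual policy model; the metric names and limits are invented for the example.

```python
# Hypothetical policy: the bounds within which automation may act on its own.
POLICY = {
    "storage_pool_max_used_pct": 80,   # expand volumes automatically below this
    "compute_cpu_max_pct": 85,         # rebalance workloads automatically below this
    "public_cloud_quota_usd": 5000,    # monthly spend cap on public-cloud burst
}

def within_operational_window(metrics, policy=POLICY):
    """Return True if automation may act unattended ("low touch");
    False means the condition is outside policy and a human should decide."""
    return (metrics["storage_used_pct"] < policy["storage_pool_max_used_pct"]
            and metrics["cpu_pct"] < policy["compute_cpu_max_pct"]
            and metrics["cloud_spend_usd"] < policy["public_cloud_quota_usd"])

# A healthy environment: automation proceeds on its own
print(within_operational_window(
    {"storage_used_pct": 62, "cpu_pct": 40, "cloud_spend_usd": 1200}))  # True

# Storage pool over its bound: escalate instead of acting
print(within_operational_window(
    {"storage_used_pct": 91, "cpu_pct": 40, "cloud_spend_usd": 1200}))  # False
```

The design point is that the policy, not an operator, encodes what the infrastructure is "allowed to make itself" decide, which is what moves an environment from high touch toward no touch.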

Gardner: As you say, we are at Platform 2.5, heading rapidly towards Platform 3. Do you have some examples you can point to, customers of yours and HPE’s, and describe how a hybrid IT environment translates into enablement and business benefits and perhaps even economic benefits? 

Time is money

Thurston: The University of Wolverhampton is one of our customers, where we've taken this journey with them with HPE, with hyper-converged platforms, and created a hybrid environment for them. 

Today, the hybrid environment means that we're wholly virtualized on HPE hyper-converged platform. We've rolled the solutions out across their campus. Where we normally would have had disparate clouds, we now have a single plane controlled by OneView that enables them to balance all the workloads across the whole campus, all of their departments. It’s bringing them new capabilities, such as agility, so they can now react a lot quicker. 

Before, a lot of the departments were coming to them with requirements, but those requirements were taking 12 to 16 weeks to actually fulfill. Now, we can do these things from the technology perspective within hours, and the whole process within days. We're talking a factor of 10 here in reduction of time to actually produce services. 

As they say, success breeds success. Once someone sees what the other department is able to do, that generates more questions, more requests, and it becomes a self-fulfilling prophecy. 

We're working with them to enable the next phase of this project: leveraging the hyper-scale of public clouds, but again, in a more controlled environment. Today, they're used to the platform; it's all embedded in. They're reaping the benefits mainly from an agility perspective. From an operational perspective, they're reaping the benefits of vastly reduced system administration and, more importantly, storage administration.

Storage administrators have seen an 85 percent saving in the time required to administer storage by having it wholly virtualized, which is fantastic from their perspective. It means they can concentrate more on developing the next phase, which is taking this approach out to the public cloud.

Gardner: Let's look to the future before we wrap this up. What would you like to see, not necessarily from HPE, but what can the vendors, the suppliers, or the public-cloud providers do to help you make that hybrid IT equation work better? 

Thurston: A lot of our mainstream customers always think that they're late into adoption, but typically, they're late into adoption because they're waiting to see what becomes either a de-facto standard that is winning in the market, or they're looking for bodies to create standards. Interoperability between platforms and standards is really the key to driving better adoption.

Today with AWS, Azure, etc., there's no real compatibility that we can take from them. We can only abstract things further up. This is why I think platform as a service, things like Cloud Foundry and open platforms will, for those forward thinkers who want to adopt the hybrid IT, become the future platforms of choice.

Gardner: It sounds like what you are asking for is a multi-cloud set of options that actually works and is attainable. 

Thurston: It’s like networking, with Ethernet. We have had a standard, everyone adheres to it, and it’s a commodity. Everyone says public cloud is a commodity. It is, but unfortunately what we don’t have is the interoperability of the other standards, such as we find in networking. That’s what we need to drive better adoption, moving forward.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

Tuesday, March 7, 2017

Converged IoT systems: Bringing the data center to the edge of everything

The next BriefingsDirect thought leadership panel discussion explores the rapidly evolving architectural shift of moving advanced IT capabilities to the edge to support Internet of Things (IoT) requirements.

The demands of data processing, real-time analytics, and platform efficiency at the intercept of IoT and business benefits have forced new technology approaches. We'll now learn how converged systems and high-performance data analysis platforms are bringing the data center to the operational technology (OT) edge.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To hear more about the latest capabilities in gaining unprecedented measurements and operational insights where they’re needed most, please join me in welcoming Phil McRell, General Manager of the IoT Consortia at PTC; Gavin Hill, IoT Marketing Engineer for Northern Europe at National Instruments (NI) in London; and Olivier Frank, Senior Director of Worldwide Business Development and Sales for Edgeline IoT Systems at Hewlett Packard Enterprise (HPE). The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What's driving this need for a different approach to computing when we think about IoT and we think about the “edge” of organizations? Why is this becoming such a hot issue?

McRell: There are several drivers, but the most interesting one is economics. In the past, the costs that would have been required to take an operational site -- a mine, a refinery, or a factory -- and do serious predictive analysis, meant you would have to spend more money than you would get back.

For very high-value assets -- assets that are millions or tens of millions of dollars -- you probably do have some systems in place in these facilities. But once you get a little bit lower in the asset class, there really isn’t a return on investment (ROI) available. What we're seeing now is that's all changing based on the type of technology available.

Gardner: So, in essence, we have this whole untapped tier of technologies that we haven't been able to get a machine-to-machine (M2M) benefit from for gathering information -- or the next stage, which is analyzing that information. How big an opportunity is this? Is this a step change, or is this a minor incremental change? Why is this economically a big deal, Olivier?

Frank: We're talking about Industry 4.0, the fourth generation of change -- after steam, after the Internet, after the cloud, and now this application of IoT to the industrial world. It’s changing at multiple levels. It’s what's happening within the factories and within this ecosystem of suppliers to the manufacturers, and the interaction with consumers of those suppliers and customers. There's connectivity to those different parties that we can then put together.

While our customers have been doing process automation for 40 years, what we're doing together is unleashing IT standardization -- taking technologies that were in the data centers, applying them to the world of process automation, and opening it up.

The analogy is what happened when mainframes were challenged by mini computers and then by PCs. It's now open architecture in a world that has been closed.

Gardner: Phil mentioned ROI, Gavin. What is it about the technology price points and capabilities that have come down to the point where it makes sense now to go down to this lower tier of devices and start gathering information?


Hill: There are two pieces to that. The first one is that we're seeing that understanding more about the IoT world is more valuable than we thought. McKinsey Global Institute did a study that said that by about 2025 we're going to be in a situation where IoT in the factory space is going to be worth somewhere between $1.2 trillion and $3.7 trillion. That says a lot.

The second piece is that we're at a stage where we can make technology at a much lower price point. We can put that onto the assets that we have in these industrial environments quite cheaply.

Then, you deal with the real big value, the data. All three of us are quite good at getting the value from our own respective areas of expertise.

Look at someone that we've worked with, Jaguar Land Rover. In their production sites, in their power train facilities, they were at a stage where they created an awful lot of data but didn't do anything with it. About 90 percent of their data wasn't being used for anything. It doesn't matter how many sensors you put on something. If you can't do anything with the data, it's completely useless.

They have been using techniques similar to what we've been doing in our collaborative efforts to gain insight from that data. Now, they're at a stage where probably 90 percent of their data is usable, and that's the big change.

Collaboration is key

Gardner: Let's learn more about your organizations and how you're working collaboratively, as you mentioned, before we get back into understanding how to go about architecting properly for IoT benefits. Phil, tell us about PTC. I understand you won an award in Barcelona recently.

McRell: That was a collaboration that our three organizations did with a pump and valve manufacturer, Flowserve. As Gavin was explaining, there was a lot of learning that had to be done upfront about what kind of sensors you need and what kind of signals you need off those sensors to come up with accurate predictions.

When we collaborate, we rely heavily on NI for their scientists and engineers to provide their expertise. We really need to consume digital data. We can't do anything with analog signals and we don't have the expertise to understand what kind of signals we need. When we obtain that, then with HPE, we can economically crunch that data, provide those predictions, and provide that optimization, because of HPE's hardware that now can live happily in those production environments.

Gardner: Tell us about PTC specifically; what does your organization do?

McRell: For IoT, we have a complete end-to-end platform that allows everything from the data acquisition gateway with NI all the way up to machine learning, augmented reality, dashboards, and mashups, any sort of interface that might be needed for people or other systems to interact.

In an operational setting, there may be one, two, or dozens of different sources of information. You may have information coming from the programmable logic controllers (PLCs) in a factory and you may have things coming from a Manufacturing Execution System (MES) or an Enterprise Resource Planning (ERP) system. There are all kinds of possible sources. We take that, orchestrate the logic, and then we make that available for human decision-making or to feed into another system.

Gardner: So the applications that PTC is developing are relying upon platforms and the extension of the data center down to the edge. Olivier, tell us about Edgeline and how that fits into this?
Frank: We came up with this idea of leveraging the enterprise computing excellence that is our DNA within HPE. As our CEO said, we want to be the IT in the IoT.

According to IDC, 40 percent of the IoT computing will happen at the edge. Just to clarify, it’s not an opposition between the edge and the hybrid IT that we have in HPE; it’s actually a continuum. You need to bring some of the workloads to the edge. It's this notion of time of insight and time of action. The closer you are to what you're measuring, the more real-time you are.

We came up with this idea. What if we could bring the depth of computing we have in the data center in this sub-second environment, where I need to read this intelligent data created by my two partners here, but also, actuate them and do things with them?

Take the example of an electrical short circuit that for some reason caught fire. You don’t want to send the data to the cloud; you want to take immediate action. This is the notion of real-time, immediate action.

We take the deep compute. We integrate the connectivity with NI. We're the first platform that has integrated an industry standard called PXI, which allows NI to integrate the great portfolio of sensors and acquisition and analog-to-digital conversion technologies into our systems.

Finally, we bring enterprise manageability. Since we have a proliferation of systems, system management at the edge becomes a problem. So, we bring our award-winning Integrated Lights-Out (iLO) technology, with millions of licenses sold across our ProLiant servers, to the edge as well.

Gardner: We have the computing depth from HPE, we have insightful analytics and applications from PTC, what does NI bring to the table? Describe the company for us, Gavin?

Working smarter

Hill: As a company, NI is about a $1.2 billion company worldwide. We get involved in an awful lot of industries. But in the IoT space, where we see ourselves fitting within this collaboration with PTC and HPE, is our ability to make a lot of machines smarter.

There are already some sensors on assets, machines, pumps, whatever they may be on the factory floor, but for older or potentially even some newer devices, there are not natively all the sensors that you need to be able to make really good decisions based on that data. To be able to feed in to the PTC systems, the HPE systems, you need to have the right type of data to start off with.

We have the data acquisition and control units that allow us to take that data in, but then do something smart with it. Using something like our CompactRIO System, or as you described, using the PXI platform with the Edgeline products, we can add a certain level of understanding and just a smart nature to these potentially dumb devices. It allows us not only to take in signals, but also potentially control the systems as well.

We not only have some great information from PTC that lets us know when something is going to fail, but we could potentially use their data and their information to allow us to, let’s say, decide to run a pump at half load for a little bit longer. That means that we could get a maintenance engineer out to an oil rig in an appropriate time to fix it before it runs to failure. We have the ability to control as well as to read in.

The other piece of that is that sensor data is great. We like to be as open as possible in taking from any sensor vendor, any sensor provider, but you want to be able to find the needle in the haystack there. We do feature extraction to try and make sure that we give the important pieces of digital data back to PTC, so that can be processed by the HPE Edgeline system as well.
Frank: This is fundamental. Capturing the right data is an art and a science and that’s really what NI brings, because you don’t want to capture noise; it’s proliferation of data. That’s a unique expertise that we're very glad to integrate in the partnership.

Gardner: We certainly understand the big benefit of IoT extending what people have done with operational efficiency over the years. We now know that we have the technical capabilities to do this at an acceptable price point. But what are the obstacles, what are the challenges that organizations still have in creating a true data-driven edge, an IoT rich environment, Phil?

Economic expertise

McRell: That’s why we're together in this consortium. The biggest obstacle is that because there are so many different requirements for different types of technology and expertise, people can become overwhelmed. They'll spend months or years trying to figure this out. We come to the table with end-to-end capability from sensors and strategy and everything in between, pre-integrated at an economical price point.

Speed is important. Many of these organizations are seeing the future, where they have to be fast enough to change their business model. For instance, some OEM discrete manufacturers are going to have to move pretty quickly from just offering product to offering service. If somebody is charging $50 million for capital equipment, and their competitor is charging $10 million a year and the service level is actually better because they are much smarter about what those assets are doing, the $50 million guy is going to go out of business.

We come to the table with the ability to quickly get that factory and those assets smart and connected, and to make sure the right people, parts, and processes are brought to bear at exactly the right time. That drives all the things people are looking for -- the uptime, the safety, the yield, and the performance of that facility. It comes down to this challenge: if you don't have all the right parties together with that technology and expertise, you can very easily get stuck on something that takes a very long time to unravel.

Gardner: That’s very interesting when you move from a Capital Expenditure (CAPEX) to an Operational Expenditure (OPEX) mentality. Every little bit of that margin goes to your bottom line and therefore you're highly incentivized to look for whole new categories of ways to improve process efficiency.

Any other hurdles, Olivier, that you're trying to combat effectively with the consortium?

Frank: The biggest hurdle is the level of complexity; our customers don't know where to start. So, the promise of us working together is really to show the value of this kind of open architecture injected into a 40-year-old process-automation infrastructure, and to demonstrate, as we did yesterday with our robot powered by HPE Edgeline, that we can show immediate value to the plant manager, the quality manager, and the operations manager using the data that already resides in that factory -- 70 percent or more of which is unused. That's the value.

So how do you get that quickly and simply? That’s what we're working to solve so that our customers can enjoy the benefit of the technology faster and faster.

Bridge between OT and IT

Gardner: Now, this is a technology implementation, but it’s done in a category of the organization that might not think of IT in the same way as the business side -- back office applications and data processing. Is the challenge for many organizations a cultural one, where the IT organization doesn't necessarily know and understand this operational efficiency equation and vice versa, and how are we bridging that?

Hill: I'm probably going to give you the high-level view from the operational technology (OT) side; these guys will definitely have more input from their own domains of expertise. But the fact that each of us knows our own piece so well is exactly why this collaboration works really well.

You have situations with the idea of the IoT where a lot of people stood up and said, "Yeah, I can provide a solution. I have the answer," but without having a plan -- never mind a solution. We've done a really good job of understanding that we can do one part of this solution really well, and if we partner with the people who are really good in the other aspects, we provide real solutions to customers. I don't think anyone can compete with us at this stage, and that is exactly why we're in this situation.

Frank: Actually, the biggest hurdle is more on the OT side, not really relying on the IT of the company. For many of our customers, the factory's a silo. At HPE, we haven't been selling too much to that environment. That’s also why, when working as a consortium, it’s important to get to the right audience, which is in the factory. We also bring our IT expertise, especially in the areas of security, because at the moment, when you put an IT device in an OT environment, you potentially have problems that you didn’t have before.

We're living in a closed world, and now the value is to open up. Bringing our security expertise, our managed service, our services competencies to that problem is very important.

Speed and safety out in the open

Hill: There was a really interesting piece in the HPE Discover keynote in December, when HPE Aruba started to talk about how they had an issue when they started bringing conferencing and technology out, and then suddenly everything wanted to be wireless. They said, "Oh, there's a bit of a security issue here now, isn’t there? Everything is out there."

We can see what HPE has contributed to helping them from that side. What we're talking about here on the OT side is a similar state from the security aspect, just a little bit further along in the timeline, and we are trying to work on that as well. Again, we have HPE here and they have a lot of experience in similar transformations.

Frank: At HPE, as you know, we have our Data Center and Hybrid Cloud Group and then we have our Aruba Group. When we do OT or our Industrial IoT, we bring the combination of those skills.

For example, in security, we have HPE Aruba ClearPass technology that secures the industrial equipment back to the network and then brings in wireless, which enables the augmented-reality use cases that we showed onstage yesterday. It's a phased approach, but you see the power of bringing ubiquitous connectivity into the factory, which is a challenge in itself, and then securely connecting the IT systems to this OT equipment -- and you better understand the phases and the challenges of bringing the technology to life for our customers.

McRell: It’s important to think about some of these operational environments. Imagine a refinery the size of a small city and having to make sure that you have the right kind of wireless signal that’s going to make it through all that piping and all those fluids, and everything is going to work properly. There's a lot of expertise, a lot of technology, that we rely on from HPE to make that possible. That’s just one slice of that stack where you can really get gummed up if you don’t have all the right capabilities at the table right from the beginning. 

Gardner: We've also put this in the context of IoT not at the edge isolated, but in the context of hybrid computing and taking advantage of what the cloud can offer. It seems to me that there's also a new role here for a constituency to be brought to the table, and that’s the data scientists in the organization, a new trove of data, elevated abstraction of analytics. How is that progressing? Are we seeing the beginnings of taking IoT data and integrating that, joining that, analyzing that, in the context of data from other aspects of the company or even external datasets?

McRell: There are a couple of levels. It’s important to understand that when we talk about the economics, one of the things that has changed quite a bit is that you can actually go in, get assets connected, and do what we call anomaly detection, pretty simplistic machine learning, but nonetheless, it’s a machine-learning capability.

In some cases, we can get that going in hours. That's a ground-zero type capability. Over time, as you learn about a line with multiple assets and how they all function together, you learn how the entire facility functions, and then you compare that across multiple facilities. At some point, you're not going to be at the edge anymore; you're going to be doing systems-type analytics, and that's a different, combined kind of analysis.

At that point, you're talking about looking across weeks, months, years. You're going to go into a lot of your back-end and maybe some of your IT systems to do some of that analysis. There's a spectrum that goes back down to the original idea of simply looking for something to go wrong on a particular asset.

The distinction I'm making here is that, in the past, you would have to get a team of data scientists to figure out almost asset by asset how to create the models and iterate on that. That's a lengthy process in and of itself. Today, at that ground-zero level, that’s essentially automated. You don't need a data scientist to get that set up. At some point, as you go across many different systems and long spaces of time, you're going to pull in additional sources and you will get data scientists involved to do some pretty in-depth stuff, but you actually can get started fairly quickly without that work.
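The ground-zero capability McRell describes -- learning nominal parameters from a few hours of readings and then flagging anything that goes off normal, with no data scientist involved -- can be sketched very simply. This is an illustrative outline under the assumption of a mean-and-standard-deviation baseline, not PTC's actual ThingWorx algorithm; the sensor values and the 3-sigma threshold are invented for the example.

```python
from statistics import mean, stdev

def learn_baseline(readings):
    """Learn nominal operating parameters from an initial run of sensor data."""
    return mean(readings), stdev(readings)

def is_anomalous(value, baseline, k=3.0):
    """Flag a reading that falls outside k standard deviations of nominal."""
    mu, sigma = baseline
    return abs(value - mu) > k * sigma

# Example: a few hours of vibration readings from a healthy pump
nominal = learn_baseline([0.51, 0.49, 0.50, 0.52, 0.48, 0.50, 0.51, 0.49])

print(is_anomalous(0.50, nominal))  # within the nominal window
print(is_anomalous(0.95, nominal))  # far off nominal -> raise an alert
```

The point of the distinction in the transcript is that this per-asset baseline can be set up automatically, while the cross-facility, months-of-history analysis layered on top of it is where data scientists come in.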

The power of partnership

Frank: To echo what Phil just said, in HPE we're talking about the tri-hybrid architecture -- the edge, so let’s say close to the things; the data center; and then the cloud, which would be a data center that you don’t know where it is. It's kind of these three dimensions.

The great thing partnering with PTC is that the ThingWorx platform, the same platform, can run in any of those three locations. That’s the beauty of our HPE Edgeline architecture. You don't need to modify anything. The same thing works, whether we're in the cloud, in the data center, or on the Edgeline.

To your point about the data scientists, it's time-to-insight. There are things you want to do immediately, and as Phil pointed out, the notion of anomaly detection that we're demonstrating on the show floor is understanding those nominal parameters after a few hours of running your thing, and simply detecting something going off normal. That doesn't require data scientists. That takes us into the ThingWorx platform.
But then, for the industrial processes, we're involving systems-integration partners and bringing our own knowledge to the mix, along with our customers, because they own the intelligence of their data. That's where it creates a very powerful solution.

Gardner: I suppose another benefit that the IT organization can bring to this is process automation and extension. If you're able to understand what's going on in the device, not only would you need to think about how to fix that device at the right time -- not too soon, not too late -- but you might want to look into the inventory of the part, or you might want to extend it to the supply chain if that inventory is missing, or you might want to analyze the correct way to get that part at the lowest price or under the RFP process. Are we starting to also see IT as a systems integrator or in a process integrator role so that the efficiency can extend deeply into the entire business process?

McRell: It's interesting to see how this stuff plays out. Once you start to understand in your facility -- or maybe it’s not your facility, maybe you are servicing someone's facility -- what kind of inventory should you have on hand, what should you have globally in a multi-tier, multi-echelon system, it opens up a lot of possibilities.

Today, PTC provides a lot of network visibility and spare-parts inventory management systems, but there's a limit to what these algorithms can do. They're really the best that's possible at this point -- except when you now have everything connected. That feedback loop allows you to modify all your expectations in real time and get things on the move proactively, so the right person, parts, process, and kit all show up at the right time.

Then, you have augmented reality and other tools, so that maybe somebody hasn't done this service procedure before, maybe they've never seen these parts before, but they have a guided walk-through and have everything showing up all nice and neat the day of, without anybody having to actually figure that out. That's a big set of improvements that can really change the economics of how these facilities run.

Connecting the data

Gardner: Any other thoughts on process integration?

Frank: Again, the premise behind industrial IoT is indeed, as you're pointing out, connecting the consumer, the supplier, and the manufacturer. That's also why you see the emergence of low-power communication layers, like LoRa or Sigfox, that can bring these millions of connected devices together and inject their data into the systems we're creating.

Hill: Just from the conversation, I know that we're all really passionate about this. IoT, and the industrial IoT in particular, is just a great topic for us, and it's so much bigger than what we're talking about. You've talked a little bit about security, you've asked us about the cloud, and you've asked us about integrating inventory with the production side -- it's so much bigger than what we're covering now.

We could probably have a conversation twice this long on any one of these topics and still never get halfway to the end of it. It's a really exciting place to be right now. And the really interesting thing that I think all of us are now realizing -- the way we've made advancements as a partnership as well -- is that you don't know what you don't know. A lot of companies are waking up to that, and we're using our collaborations to learn what we don't know.

Frank: Which is why speed is so important. We can theorize and spend a lot of time in R&D, but the reality is, bring those systems to our customers, and we learn new use cases and new ways to make the technology advance.

Hill: The way technology has gone, no one releases a product anymore that's the finished piece and will stay there for 20 or 30 years. That's not what happens. Products and services get constantly updated. How many times a week does your phone get a firmware update, or an app update? You have to be able to change and use the data you get to adjust everything that's going on. Otherwise you will not stay ahead of the market.

And that's exactly what Phil described earlier when he was talking about whether you sell a product or a service that goes alongside a set of products. For me, one of the biggest things is that constant innovation -- where we're going. And we've changed. We were on a linear path of progression, and in the last little while, we've seen a huge amount of exponential growth in these areas.

We had a video at the end of the London HPE Discover keynote -- one of HPE's visions of what the future could be. We looked at it and thought it was quite funny. There was an automated suitcase that would follow you after you left the airport. I started to laugh at that, but then I took a second and realized that maybe it's not as ridiculous as it sounds, because we as humans think linearly; that's innate to us. But if the technology is changing exponentially, we simply cannot ignore some of the most ridiculous ideas out there, because that's what's going to change the industry.

And even by having that video there and by seeing what PTC is doing with the development that they have and what we ourselves are doing in trying out different industries and different applications, we see three companies that are constantly looking through what might happen next and are ready to pounce on that to take advantage of it, each with their own expertise.

Gardner: We're just about out of time, but I'd like to hear a couple of ridiculous examples -- pushing the envelope of what we can do with these sorts of technologies now. We don't have much time, so less than a minute each, if you can each come up with one example, named or unnamed, that might have seemed ridiculous at the time but in hindsight has proven quite beneficial and productive. Phil?

McRell: You can do this in engineering with us, you can do this in service, but we've been talking a lot about manufacturing. In a manufacturing journey, the opportunity, as Gavin and Olivier are describing here, is on the level of what happened between pre- and post-electricity: how fast things will run, the quality at which they will produce products, and therefore the business model you can now have because of that capability. These are profound changes. You will see uptimes in some of the largest factories in the world go up double digits. You will see lines run multiple times faster over time.

If you walked in today and then walked in again in a couple of years to some of the facilities that are run the hardest, it would be really hard to believe what your eyes are seeing -- just as somebody who was around before factories had electricity would be astounded by what they'd see today.

Back to the Future

Gardner: One of the biggest issues at the most macro level in economics is the fact that productivity has plateaued for the past 10 or 15 years. People want to get back to what productivity was -- 3 or 4 percent a year. This sounds like it might be a big part of getting there. Olivier, an example?

Frank: Well, an example would be more about the impact on mankind -- wealth for humanity. Think about these technologies combined with 3D printing: you can have a new class of manufacturers anywhere in the world -- in Africa, for example -- doing real-time engineering and design with some of the concepts we're demonstrating today.

Another part of PTC is Computer-Aided Design (CAD) systems and Product Lifecycle Management (PLM), and we're showing real-time engineering on the floor again. You design those products and do quick prototyping with your 3D printing -- that could be anywhere in the world. And you have your users testing the real thing, understanding whether your engineering choices were relevant and whether there are differences between the digital model and the physical model -- this digital-twin idea.

Then, you're back to the drawing board. So you get a new class of manufacturers that we don't even know yet, serving customers across the world and creating wealth in areas that are not yet industrialized.
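The digital-twin comparison described above can be sketched as a simple check of a physical prototype's measurements against the digital model's predictions, flagging where the two diverge. The field names and tolerances are invented for illustration; this is not PTC product code.

```python
# Hypothetical digital-twin check: compare measured values from a physical
# prototype against the digital model's predictions and flag fields whose
# relative error exceeds a tolerance.
def twin_deviations(predicted, measured, tolerance=0.05):
    """Return fields where the physical part strays beyond the relative
    tolerance from the digital model, with their relative errors."""
    flagged = {}
    for field, expect in predicted.items():
        actual = measured[field]
        rel_err = abs(actual - expect) / abs(expect)
        if rel_err > tolerance:
            flagged[field] = round(rel_err, 3)
    return flagged

model = {"mass_kg": 1.20, "deflection_mm": 0.40, "temp_rise_c": 8.0}
prototype = {"mass_kg": 1.22, "deflection_mm": 0.47, "temp_rise_c": 8.1}
print(twin_deviations(model, prototype))  # only deflection is out of tolerance
```

A flagged field is exactly the "difference between the digital model and the physical model" that sends the engineer back to the drawing board.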

Gardner: It's interesting that if you have a 3D printer you might not need to worry about inventory or supply chain.

Hill: Just to add on that one point, the bit that really, really excites me about where we are with technology as a whole, not even just within this collaboration, is that you have 3D printing and you have the availability of open software. We all provide very software-centric products, stuff that you can adjust yourself, and that is the way of the future.

That means that among the changes we see in the manufacturing industry, the next great idea could come from someone who has been in the production plant for 20 years, or it could come from Phil, who works in the bank down the road, because at a really good price point he has access to that technology. That is one of the coolest things I can think about right now.

Where we've seen this sort of development, and the use of these sorts of technologies and implementations make a massive difference, is at someone like Duke Energy in the US. We worked with them before we realized where our capabilities were, never mind how we could implement a great solution with PTC and HPE. Even there, based on our own technology, the people on the power-generation side, working with some legacy equipment, decided to try this sort of application -- predictive maintenance, to be able to see what's going on in their assets, which are spread across the continent.

They began this at the start of 2013, and they have seen estimated savings of $50 million up to this point. That's a number.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in: