Friday, May 10, 2019

How Texmark Chemicals pursues analysis-rich, IoT-pervasive path to the ‘refinery of the future’


The next BriefingsDirect Voice of the Customer discussion revisits the drive to define the “refinery of the future” at Texmark Chemicals.

Texmark has been combining the best of operational technology (OT) with IT and now Internet of Things (IoT) to deliver data-driven insights that promote safety, efficiency, and unparalleled sustained operations.

Stay with us now as we hear how a team approach -- including plant operators, consulting experts, and the latest in hybrid IT systems -- joins forces for rapid process and productivity optimization results.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn how, we are joined by our panel: Linda Salinas, Vice President of Operations at Texmark Chemicals, Inc. in Galena Park, Texas; Stan Galanski, Senior Vice President of Customer Success at CB Technologies (CBT) in Houston; and Peter Moser, IoT and Artificial Intelligence (AI) Strategist at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Stan, what are the trends, technologies, and operational methods that have now come together to make implementing a refinery of the future approach possible? What’s driving you to be able to do things in ways that you hadn’t been able to do before?

Galanski: I’m going to take that in parts, starting with the technologies. Affordable sensing devices are now widely available and proliferating in the market. In addition, the ability to collect large amounts of data cheaply -- especially in the cloud -- along with ubiquitous Wi-Fi, Bluetooth, and other communications, presents an opportunity to take advantage of.

On top of this, the advancement of AI and machine learning (ML) software -- often referred to as analytics -- has accelerated this opportunity.

Gardner: Linda, has this combination of events dramatically changed your perspective as VP of operations? How has this coalescing set of trends changed your life?

Salinas: They have really come at a good time for us. Our business, and specifically Texmark’s, has morphed over the years to where our operators are more broadly skilled. We ask them to do more with less. They have to have a bigger-picture view of operating the plant.

Today’s operator is not just sitting at a control board running one unit. Neither is an operator just out in a unit, keeping an eye on one tower or one reactor. Our operators are now all over the plant operating the entire utilities and wastewater systems, for example, and they are doing their own lab analysis.

This technology comes at a time when it can provide plant-wide information, so they can make more informed decisions -- on the board, in the lab, wherever they need it.

Gardner: Peter, as somebody who is supplying some of these technologies, how do you see things changing? We used to have OT and IT as separate, not necessarily related. How have we been able to make those into a whole greater than the sum of their parts?

OT plus IT equals success 

Moser: That’s a great question, Dana, because one of the things that has been a challenge with automation of chemical plants is these two separate towers. You had OT very much separate from IT.

The key contributor to the success of this digitization project is the capability to bring those two domains together successfully.

Gardner: Stan, as part of that partnership, tell us about CBT and how you fit.

Galanski: CBT is a 17-year-old, privately owned company. We cut our teeth early on by fulfilling high-tech procurement orders for the aerospace industry. During that period, we developed a strength for designing, testing, and installing compute and storage systems for those industries and vendors.

It evolved into developing an expertise in high-performance computing (HPC), software design platforms, and so forth.

About three years ago, we recognized that the onset of faster computational platforms and massive amounts of data -- and the capability for software to control that dataflow -- was changing the landscape. Now, somebody needed to analyze that data faster across multiple mediums. Hence, we developed a practice around comprehensive data management and combined that with our field experience. That led us to become a systems integrator (SI), which is the role we have taken on for this refinery of the future project.

Gardner: Linda, before we explore more on what you’ve done and how it improves things, let’s learn about Texmark. With a large refinery operation, any downtime can be a big problem. Tell us about the company and what you are doing to improve your operations and physical infrastructure.

Salinas: Texmark is a family-owned company, founded in 1970 by David Smith. And we do have a unique set of challenges. We sit on eight acres in Galena Park, and we are surrounded by a bulk liquid terminal facility.

So, as you can imagine, a plant that was built in the 1940s has older infrastructure, and the layout is probably not as efficient as it could be. In the 1940s, for example, there was no need for wastewater treatment, so we have added things like that over the years. So, one, we are landlocked, and, two, things may not be sited in the most optimal way.

For example, we have several control rooms sprinkled throughout the facility. But we have learned that siting is an important issue. So we’ve had to move our control room to the outskirt of the process areas.

As a result, we’ve had to reroute our control systems. We have to work with what we have, and that presents some infrastructure challenges.

Also, like other chemical plants and refineries, the things we handle are hazardous. They are flammable and toxic, and they are not things people want in the air they breathe in neighborhoods just a quarter-mile downwind of us.

So we have to be mindful of safe handling of those chemicals. We also have to be mindful that we don’t disrupt our processes. Finding the time to shut down to install and deploy new technology is a challenge. Chemical plants and refineries need to find the right time to shut down and perform maintenance with a very defined scope, and on a budget.

And so the capability to come up and down effectively is a strength for Texmark. Because we are a smaller facility, we are able to come up and down, and to deploy, test, and prove out some of these technologies.

Gardner: Stan, in working with Linda, you are not just trying to gain incremental improvement. You are trying to define the next definition, if you will, of a safe, efficient, and operationally intelligent refinery.

How are you able to leapfrog to that next state, rather than take baby steps, to attain an optimized refinery?

Challenges of change 

Galanski: First we sat down with the customer and asked what key functions and challenges they had in their operations. Once they gave us that list, we looked at the landscape of technologies and the available categories of information at our disposal and said, “How can we combine these to have a significant improvement and impact on your business?”

We came up with five solutions that we targeted and started working on in parallel. They have proven to be a handful of challenges -- especially working in a plant that’s continuously operational.

Based on the feedback we’ve received from their personnel, we feel we are on the right track. As part of that, we are attacking predictive maintenance and analytics by putting sensors on some of their assets, their pumps. We are putting video analytics in place by capturing video of various portions of the plant that are restricted but still need careful monitoring. We are looking at worker safety and security by capturing biometrics and geo-referencing the location of workers, so we know whether they are safe or might be in danger.

The connected worker solution is garnering a lot of attention in the marketplace. With it, we are able to bring real-time data from the core repositories of the company to the hands of the workers in the field. Oftentimes it comes to them hands-free, with wearables that project and display the information without the worker having to hold a device.

Lastly, we are tying this all together with an asset management system that tracks every asset and ties it to every unstructured data file that has been recorded or captured. In doing so, we are able to pull the plant data together and combine it with a 3D model to keep track of every asset and make that useful for workers at any level of responsibility.
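
To make the predictive-maintenance idea concrete, here is a minimal, hypothetical sketch of the kind of check such pump analytics might perform -- flagging vibration readings that drift far from a recent baseline. The sensor feed, field names, and threshold are assumptions for illustration; the podcast does not describe Texmark's actual analytics code.

```python
# Hypothetical sketch: flag pump vibration readings that deviate strongly
# from a rolling baseline. Data format and threshold are assumptions.
from statistics import mean, stdev

def flag_anomalies(samples, window=50, z_threshold=3.0):
    """Return (timestamp, value) pairs that sit far outside the recent baseline."""
    flagged = []
    for i in range(window, len(samples)):
        baseline = [value for _, value in samples[i - window:i]]
        mu, sigma = mean(baseline), stdev(baseline)
        ts, value = samples[i]
        if sigma > 0 and abs(value - mu) / sigma > z_threshold:
            flagged.append((ts, value))
    return flagged

# Example use with vibration samples (mm/s) from one monitored pump
readings = [("2019-05-10T08:00:00", 2.1), ("2019-05-10T08:01:00", 2.2)]  # ... more samples
alerts = flag_anomalies(readings)
```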

Gardner: It’s impressive, how this touches just about every aspect of what you’re doing.

Peter, tell us about the foundational technologies that accommodate what Stan has just described and also help overcome the challenges Linda described.

Foundation of the future refinery


Moser: Before I describe what the foundation consists of, it’s important to explain what led to the foundation in the first place. At Texmark, we wanted to sit down and think big. You go through the art of the possible, because most of us don’t know what we don’t know, right?

You bring in a cross-section of people from the plant and ask, “If you could do anything what would you do? And why would you do it?” You have that conversation first and it gives you a spectrum of possibilities, and then you prioritize that. Those prioritizations help you shape what the foundation should look like to satisfy all those needs.

That’s what led to the foundational technology platform that we have at Texmark. We look at the spectrum of use cases that Stan described and say, “Okay, now what’s necessary to support that spectrum of use cases?”

But we didn’t start by looking at use cases. We started first by looking at what we wanted to achieve as an overall business outcome. That led us to say, “First thing we do is build out pervasive connectivity.” That has to come first because if things can’t give you data, and you can’t capture that data, then you’re already at a deficit.

Then, once you can capture that data using pervasive Wi-Fi with HPE Aruba, you need a data center-class compute platform that’s able to deliver satisfactory computational capabilities, support for accelerators, and the other things necessary to deliver the outcomes you are looking for.


The third thing you have to ask is, “Okay, where am I going to put all of this compute and storage?” So you need a localized storage environment that’s controlled and secure. That’s where we came up with the edge data center. It was those drivers that led to the foundation from which we are building out support for all of those use cases.
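
As a rough illustration of that first "capture the data" step, the sketch below assumes plant sensors publish readings over MQTT to a broker running in the edge data center, with each reading appended to local storage before any analytics. The broker address, topics, and file path are hypothetical; the actual protocols and platforms at Texmark are not specified here.

```python
# Hypothetical sketch of edge-side data capture over MQTT (paho-mqtt 1.x API).
# Broker address, topic layout, and storage path are placeholders.
import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)  # assumes each payload is a JSON object
    # Persist each reading locally in the edge data center before analytics
    with open("/data/edge/pump_readings.jsonl", "a") as f:
        f.write(json.dumps({"topic": msg.topic, **reading}) + "\n")

client = mqtt.Client()
client.on_message = on_message
client.connect("edge-broker.local", 1883)    # broker inside the edge data center
client.subscribe("plant/pumps/+/vibration")  # one topic per sensored pump
client.loop_forever()
```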

Gardner: Linda, what are you seeing from this marriage of modernized OT and IT and taking advantage of edge computing? Do you have an ability yet to measure and describe the business outcome benefits?

Hands-free data at your fingertips 

Salinas: This has been the perfect project for us to embark on our IT-OT journey with HPE and CBT, and all of our ecosystem partners. Number one, we’ve been having fun.

Two, we have been learning about what is possible and what this technology can do for us. When we visited the HPE Innovation Lab, we saw very quickly the application of IT and OT across other industries. But when we saw the sensored pump, that was our “aha moment.” That’s when we learned what IoT and its impact meant to Texmark.

As for key performance indicators (KPIs), we gather data and we learn more about how we can employ IoT across our business. What does that mean? That means moving away from the clipboard and spreadsheet toward having the data wherever we need it -- having it available at our fingertips, having the data do analytics for us, and telling us, “Okay, this is where you need to focus during your next precious turnaround time.”

The other thing is, this IoT project is helping us attract and retain talent. Right now it's a very competitive market. We just hired a couple of new operators, and I truly believe that the tipping point for them was that they had seen and heard about our IoT project and the “refinery of the future” goal. They found out about it when they Googled us prior to their interview.

We just hired a new maintenance manager who has a lot of IoT experience from other plants, and that new hire was intrigued by our “refinery of the future” project.

Finally, our modernization work is bringing in new business for Texmark. It's putting us on the map with other pioneers in the industry who are dipping their toe into the water of IoT. We are getting national and international recognition from other chemical plants and refineries that are looking to also do toll processing.

They are now seeking us out because of the competitive edge we can offer them, and for the additional data and automated processes that that brings to us. They want the capability to see real-time data, and have it do analytics for them. They want to be able to experiment in the IoT arena, too, but without having to do it necessarily inside their own perimeter.

Gardner: Linda, please explain what toll processing is and why it's a key opportunity for improvement?

Collaboration creates confidence

Salinas: Texmark produces dicyclopentadiene, butyl alcohol, propyl alcohol, and some aromatic solvents. But alongside the usual products we produce and sell, we also provide “toll processing services.” The analogy I like to tell my friends is, “We have the blender, and our customers bring the lime and tequila. Then we make their margaritas for them.”

So our customers will bring to us their raw materials. They bring the process conditions, such as the temperatures, pressures, flows, and throughput. Then they say, “This is my material, this is my process. Will you run it in your equipment on our behalf?”

When we are able to add the IoT component to toll processing, when we are able to provide them data that they didn't have whenever they ran their own processes, that provides us a competitive edge over other toll processors.

Gardner: And, of course, your optimization benefits can go right to the bottom line, so a very big business benefit when you learn quickly as you go.

Stan, tell us about the cultural collaboration element, both from the ecosystem provider team support side as well as getting people inside of a customer like Texmark to perhaps think differently and behave differently than they had in the past.

Galanski: It’s all about human behavior. If you are going to make progress in anything of this nature, you have to understand the guy sitting across the table from you, or the person out in the plant who is working in some fairly challenging environments, or the folks sitting at the control room table with a lot of responsibility for managing processes involving lots of chemicals for many hours at a time.

So we sat down with them. We got introduced to them. We explained to them our credentials. We asked them to tell us about their job. We got to know them as people; they got to know us as people.

We established trust, and then we started saying, “We are here to help.” They started telling us their problems, asking, “Can you help me do this?” And we took some time, came up with some ideas, and came back and socialized those ideas with them. Then we started attacking the problem in little chunks of accomplishments.


We would say, “Well, what if we do this in the next two weeks and show you how this can be an asset for you?” And they said, “Great.” They liked the fact that there was a quick turnaround time, that they could see responsiveness. We got some feedback from them. We developed a little more confidence and trust in each other, and then more things started pouring out, a little at a time. We went from one department to another, and pretty soon we began understanding and learning about all aspects of this chemical plant.

It didn’t happen overnight. It meant we had to be patient, because it’s an ongoing operation. We couldn't inject ourselves unnaturally. We had to be patient and take it in increments so we could actually demonstrate success.

And over time you sometimes can't tell the difference between us and some of their workers because we all come to meetings together. We talk, we collaborate, and we are one team -- and that’s how it worked.

Gardner: On the level of digital transformation -- when you look at the bigger picture, the strategic picture -- how far along are they at Texmark? What would be some of the next steps?

All systems go digital 

Galanski: They are now very far along in digital transformation. As I outlined earlier, they are utilizing quite a few technologies that are available -- and not leaving too many on the table.

So we have edge computing. We have very strong ubiquitous communication networks. We have software analytics able to analyze the data. They are using very advanced asset integrity applications to be able to determine where every piece, part, and element of the plant is located and how it’s functioning.

I have seen other companies where they have tried to take this only one chapter at a time, and they sometimes have multiple departments working on these independently. They are not necessarily ready to integrate or to scale it across the company.


But Texmark has taken a corporate approach, looking at holistic operations. All of their departments understand what’s going on in a systematic way. I believe they are ready to scale more easily than other companies once we get past this first phase.

Gardner: Linda, any thoughts about where you are and what that has set you up to go to next in terms of that holistic approach?

Salinas: I agree with Stan. From an operational standpoint, now that we have some sensored pumps for predictive analytics, we might sensor all of the pumps associated with any process, rather than just a single pump within that process.

That would mean in our next phase that we sensor another six or seven pumps, either for toll processing or our production processes. We won’t just do analytics on a single pump and its health, lifecycle, and when it needs to be repaired. Instead we look at the entire process and think, “Okay, not only will I need to take this one pump down for repair, but there are also two or three others that might need some service or maintenance in the next nine months. The fuller analytics can tell me that if I can wait 12 months, then I can do them all at the same time, bring down the process once, and make more efficient use of our downtime.”

I could see something like that happening.
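
As a back-of-the-envelope illustration of the consolidation Salinas describes, the snippet below groups pumps whose predicted service dates fall within a planned shutdown window, so one outage could cover them all. The pump names and month horizons are invented for the example.

```python
# Hypothetical example: group pumps whose predicted service falls within a
# planned shutdown window so a single outage covers them. Values are invented.
predicted_service_months = {"P-101": 9, "P-102": 11, "P-103": 12, "P-104": 24}

def pumps_for_shutdown(predictions, shutdown_month=12):
    """Return pumps due for service on or before the planned shutdown month."""
    return sorted(pump for pump, month in predictions.items() if month <= shutdown_month)

print(pumps_for_shutdown(predicted_service_months))  # ['P-101', 'P-102', 'P-103']
```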

Galanski: We have already seen growth in this area where the workers have seen us provide real-time data to them on hands-free mobile and wearable devices. They say, “Well, could you give me historical data over the past hour, week, or month? That would help me determine whether I have an immediate problem, rather than just one spike in the data.”

So they have given us immediate feedback on that and that's progressing.

Gardner: Peter, we are hearing about a more granular approach to sensors at Texmark, with the IoT edge getting richer. That means more data being created, and more historical analysis of that data.

Are you therefore setting yourself up to be able to take advantage of things such as AI, ML, and the advanced automation and analytics that go hand in hand? Where can it go next in terms of applying intelligence in new ways?

Deep learning from abundant data

Moser: That’s a great question because the data growth is exponential. As more sensors are added, video is incorporated into their workflows, and more of the workers and employees at Texmark are connected, their data and data-traffic needs are going to grow exponentially.

But with that comes an opportunity. One is to better manage the data so they get value from it, because the data is not all the same or it’s not all created equal. So the opportunity there is around better data management, to get value from the data at its peak, and then manage that data cost effectively.

That massive amount of data is also going to allow us to better train the current models and create new ones. The more data you have, the better you can do ML and potentially deep learning.

Lastly, we need to think about new insights that we can’t create today. That's going to give us the greatest opportunity, when we take the data we have today and use it in new and creative ways to give us better insights, to make better decisions, and to increase health and safety. Now we can take all of the data from the sensors and videos and cross-correlate that with weather data, for example, and other types of data, such as supply chain data, and incorporate that into enabling and empowering the salespeople, to negotiate better contracts, et cetera.

So, again, the art of the possible starts to manifest itself as we get more and more data from more and more sources. I’m very excited about it.
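
To give a flavor of the cross-correlation Moser mentions, here is a small, hypothetical example that aligns pump sensor readings with the most recent weather observation using pandas. The column names and values are made up; no specific datasets are referenced in the discussion.

```python
# Hypothetical sketch: join pump readings with the most recent prior weather
# observation so the two can be analyzed together. All data is invented.
import pandas as pd

sensors = pd.DataFrame({
    "timestamp": pd.to_datetime(["2019-05-10 08:05", "2019-05-10 09:10"]),
    "pump": ["P-101", "P-101"],
    "vibration_mm_s": [2.4, 3.1],
})
weather = pd.DataFrame({
    "timestamp": pd.to_datetime(["2019-05-10 08:00", "2019-05-10 09:00"]),
    "ambient_temp_c": [27.0, 29.5],
})

# merge_asof pairs each reading with the latest weather observation at or before it
combined = pd.merge_asof(sensors.sort_values("timestamp"),
                         weather.sort_values("timestamp"),
                         on="timestamp")
print(combined)
```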

Gardner: What advice do you have for those just beginning similar IoT projects?

Galanski: I recommend that they have somebody lead the group. You can try and flip through the catalogs and find the best vendors who have the best widgets and start talking to them and bring them on board. But that's not necessarily going to get you to an end game. You are going to have to step back, understand your customer, and come up with a holistic approach of how to assign responsibilities and specific tasks, and get that organized and scheduled.


There are a lot of parties and a lot of pieces on this chess table. Keeping them all moving in the right direction and at a cadence that people can handle is important. And I think having one contractor, or a department head in charge, is quite valuable.

Salinas: You should rent a party bus. And what I mean by that is that our first lesson, our first step onto the learning curve about IoT, came when Texmark rented a party bus, put about 13 employees on it, and took a field trip to the HPE Innovation Lab.

When Doug Smith, our CEO, and I were invited to visit that lab we decided to bring a handful of employees to go see what this IoT thing was all about. That was the best thing we ever could have done, because the excitement was built from the beginning.

They saw, as we saw, the art of the possible at the HPE IoT lab, and the ride home on that bus was exciting. They had ideas. They didn’t even know where to begin, but they had ideas just from what they had seen and learned in a two-hour tour about what we could do at Texmark right away. So the engagement, the buy-in, was there from the beginning, and I have to say that was probably one of the best moves we made to ensure the success of this project.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise. 


Wednesday, April 24, 2019

How the composable approach to IT aligns automation and intelligence to overcome mounting complexity


The next edition of the BriefingsDirect Voice of the Innovator podcast series explores the latest developments in hybrid IT and datacenter composability.

Bringing higher levels of automation to data center infrastructure has long been a priority for IT operators, but it's only been in the past few years that they have actually enjoyed truly workable solutions for composability.

The growing complexity of hybrid cloud, the pressing need to conserve IT spend, and the difficulty of finding high-level IT skills mean there is no going back. Indeed, there is little time for even a plateau in innovation around composability.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Stay with us now as we explore how pervasive increasingly intelligent IT automation and composability can be with Gary Thome, Vice President and Chief Technology Officer for Composable Cloud at Hewlett Packard Enterprise (HPE). The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Gary, what are the top drivers making composability top-of-mind and something we’re going to see more of?

Thome: It’s the same set of drivers for businesses as a whole, and certainly for IT. First, almost every business is going through some sort of digital transformation. And that digital transformation is really about leveraging IT to connect with customers and making IT the primary way they interact with customers and generate revenue.

Digital transformation drives composability 

With that comes a desire to go very fast -- to connect with customers much more rapidly and to add features for them faster via software.

The whole idea of digital transformation and becoming a digital business is driving a whole new set of behaviors in the way enterprises run – and as a result – in the way that IT needs to support them.

From the IT standpoint, there is this huge driver to say, “Okay, I need to be able to go faster to keep up with the speed of the business.” That is a huge motivator.

But at the same time, there’s the constant desire to keep IT cost in line, which requires higher levels of automation. That automation -- along with a desire to flexibly align with the needs of the business -- drives what we call composability. It combines the flexibility of being able to configure and choose what you need to meet the business needs -- and ultimately customer needs -- and do it in a highly automated manner.

Gardner: Has the adoption of cloud computing models changed the understanding of how innovation takes place in an IT organization? There used to be long periods between upgrades or a new revision. Cloud has given us constant iterative improvements. Does composability help support that in more ways?

Thome: Yes, it does. There has been a general change in the way of thinking, of shifting from occasional, large changes to frequent, smaller changes. This came out of an Agile mindset and a DevOps environment. Interestingly enough, it’s permeated to lots of other places outside of IT. More companies are looking at how to behave that way in general.

On the technology side, the desire for rapid, smaller changes means a need for higher levels of automation. It means automating the changes to the next desired state as quickly as possible. All of those things lend themselves toward composability.

Gardner: At the same time, businesses are seeking economic benefits via reducing unutilized IT capacity. It’s become about “fit-for-purpose” and “minimum viable” infrastructure. Does composability fit into that, making an economic efficiency play?

Thome: Absolutely. Along with the small, iterative changes – of changing just what you need when you need it – comes a new mindset with how you approach capacity. Rather than buying massive amounts of capacity in bulk and then consuming it over time, you use capacity as you need it. No longer are there large amounts of stranded capacity.

Composability is key to this because it provides the technical means to create an environment that delivers the desired economic result. You are simply using what you need when you need it, and then releasing it when it’s not needed -- versus pre-purchasing large amounts of capacity upfront.

Innovation building blocks

Gardner: As an innovator yourself, Gary, you must have had to rethink a lot of foundational premises when it comes to designing these systems. How did you change your thinking as an innovator to create new systems that accommodate these new and difficult requirements?

Thome: Anyone in an innovation role has to always challenge their own thinking, and say, “Okay, how do I want to think differently about this?” You can't necessarily look to the normal sources for inspiration because that's exactly where you don't want to be. You want to be somewhere else.

For me, it may mean looking at other walks of life -- at what I do, read, and learn -- as possible sources of inspiration for rethinking the problem.

Interestingly enough, there is a parallel in the IT world of taking applications and decomposing them into smaller chunks. We talk about microservices that can be quickly assembled into larger applications -- or composed, if you want to think of it that way. And now we’re able to disaggregate the infrastructure into elements, too, and then rapidly compose them into what's needed.

Those are really parallel ideas, going after the same goal. How do I just use what I need when I need it -- not more, not less? And then automate the connections between all of those services.

That, in turn, requires an interface that makes it very easy to assemble and disassemble things together -- and therefore very easy to produce the results you want.

When you look at things outside of the IT world, you can see patterns of it being easy to assemble and disassemble things, like children's building blocks. Before, IT tended to be too complex. How do you make the IT building blocks easier to assemble and disassemble such that it can be done more rapidly and more reliably?

Gardner: It sounds as if innovations from 30 years ago are finding their way into IT. Things such as simultaneous engineering, fit-for-purpose design and manufacturing, even sustainability issues of using no more than you need. Were any of those inspirations to you?

Cultivate the Agile mindset

Thome: There are a variety of sources, everything from engineering practices, to art, to business practices. They all start swiveling around in your head. How do I look at the patterns in other places and say, “Is that the right kind of pattern that we need to apply to an IT problem or not?”

The historical IT perspective of elongated steps and long development cycles led to an end state of very complex integrations to get all the piece-parts put together. Now, the different, Agile mindset says, “Why don’t you create what you need iteratively but make sure it integrates together rapidly?”

Can you imagine trying to write a symphony by having 20 different people develop their own parts? There are separate trombone, timpani, and violin parts. And then you just say, “Okay, play it together once, and we will start debugging when it doesn’t sound right.” Well, of course that would be a disaster. If you don’t think about it upfront, do you really want to develop it as you go?

The same thing needs to go into how we develop IT -- with both the infrastructure and applications. That’s where the Agile and the DevOps mindsets have evolved to. It’s also very much the mindset we have in how we develop composability within HPE.

Gardner: At HPE, you began bringing composability to servers and the data center stack, trying to make hardware behave more like software, essentially. But it’s grown past that. Could you give us a level-set of where we are right now when it comes to the capability to compose the support for doing digital business?

Intelligent, rapid, template-driven assembly 

Thome: Within the general category of composability, we have this new thing called Composable Infrastructure, and we have a product called HPE Synergy. Rather than treat the physical resources in the data center as discrete servers, storage arrays, and switches, it looks at them as pools of compute capacity, storage capacity, fabric capacity, and even software capacity or images of what you want to use.

Each of those things can be assembled rapidly through what we call software-defined intelligence. It knows how to assemble the building blocks -- compute, storage, and networking -- into something interesting. And that is template-driven. You have a template, which is a description of what you want the end-state to look like -- what you want your infrastructure to look like when you are done.

And the template says, “Well, I need compute of this block size, this much storage, or this kind of network.” Whatever you want. “And then, by the way, I want this software loaded on it.” And so forth. You describe the whole thing as a template, and then we can assemble it based on that description.

That approach is one we’ve innovated on in the lab from the infrastructure standpoint. But what’s very interesting about it is, if you look at a modern cloud application, it uses a very similar philosophical approach to assembly. In fact, just as with modern applications, you say, “Well, I’m assembling a group of services or elements. I am going to create it all via APIs.” Well, guess what? Our hardware is driven by APIs also. It’s an API-level assembly of the hardware, to compose the hardware into whatever you want. It’s the same idea of composing that applies everywhere.
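
As a schematic illustration of that template-driven, API-level assembly, the sketch below posts a simple profile description to a composition endpoint. The URL, JSON fields, and authentication shown are placeholders for the idea, not the actual HPE OneView or HPE Synergy API schema.

```python
# Schematic sketch only: describe the desired infrastructure as a template and
# submit it through an API. Endpoint, fields, and auth are placeholders, not
# the real HPE OneView/Synergy schema.
import requests

template = {
    "name": "analytics-cluster-node",
    "compute": {"cpuCores": 16, "memoryGiB": 128},
    "storage": [{"sizeGiB": 500, "raidLevel": "RAID1"}],
    "networks": [{"name": "datacenter-fabric", "bandwidthGbps": 10}],
    "image": "esxi-baseline",  # software to load on the composed node
}

resp = requests.post(
    "https://composer.example.internal/rest/profiles",  # hypothetical endpoint
    json=template,
    headers={"X-Auth-Token": "session-token"},           # placeholder auth token
    timeout=30,
)
resp.raise_for_status()
print("Composed profile at:", resp.json().get("uri"))
```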

Millennials lead the way

Gardner: The timing for this is auspicious on many levels. Just as you’re making this crafting of hardware solutions possible, we’re dealing with an IT labor shortage. If, like many Millennials, you are of a cloud-first mentality, you will find kinship with composability -- even though you’re not necessarily composing a cloud. Is that right?

Thome: Absolutely. That cloud mindset, or services mindset, or as-a-service mindset -- whatever you want to call it -- is one where this is a natural way of thinking. The younger people may have grown up with this mindset; it wouldn’t occur to them to think any differently. Others may have to shift to a new way of thinking.

This is one of the challenges for organizations. How do they shift -- not just the technologies or the tools -- but the mindset within the culture in a different direction?

You have to start with changing the way you think. It’s a mindset change to ask, “How do I think about this problem differently?” That’s the key first thing that needs to happen, and then everything else follows from that mindset.

It’s a challenge for any company doing transformation, but it’s also true for innovation -- shifting the mindset.

Gardner: The wide applicability of composability is impressive. You could take this composable mindset, use these methods and tools, and you could compose a bare-metal, traditional, on-premises data center. You could compose a highly virtualized on-premises data center. You could compose a hybrid cloud, where you take advantage of private cloud and public cloud resources. You can compose across multiple types of private and public clouds.

Cross-cloud composability

Thome: We think composability is a very broad, useful idea. When we talk to customers they are like, “Okay, well, I’ll have my own kind of legacy estate, my legacy applications. Then I have my new applications, and new way of thinking that are being developed. How do I apply principles and technologies that are universal across them?”

The idea of being able to say, “Well, I can compose the infrastructure for my legacy apps and also compose my new cloud-native apps, and I get the right infrastructure underneath.” That is a very appealing idea.

But we also take the same ideas of composability and say, “Well, I would ultimately even want to compose across multiple clouds.” More and more enterprises are leveraging clouds in various shapes and forms. They are increasing the number of clouds they use. We are trending toward hybrid cloud, where people use different clouds for different reasons. They may actually have a single application that spans multiple clouds, including on-premises clouds.

When you get to that level, you start thinking, “Well, how do I compose my environment or my applications across all of those areas?” Not everybody is necessarily thinking about it that way yet, but we certainly are. It’s definitely something that’s coming.

Gardner: Providers are telling people that they can find automation and simplicity, but the quid pro quo is that you have to do it all within a single stack, or you have to line up behind one particular technology or framework. Or, you have to put it all into one particular public cloud.

It seems to me that you may want to keep all of your options open and be future-proof in terms of what might be coming in a couple of years. What is it about composability that helps keep one’s options open?

Thome: With automation, there are two extremes that people wind up with. One is a great automation framework that promises you can automate anything. The key word is can; meaning, the vendor doesn’t do it for you, but you can, if you are willing to invest all of the hard work. That’s one approach. The good news is that there are multiple vendors supplying pieces of the total automation technology. But it can be a very large amount of work to develop and maintain systems across that kind of environment.

On the other hand, there are automation environments where, “Hey, it works great. It’s really simple. Oh, by the way, you have to completely stay within our environment.” And so you are stuck within the confines of their rules for doing things.

Both of these approaches, obviously, have a very significant downside because any one particular environment is not going to be the sum of everything that you do as a business. We see both of them as wrong.

Real composability shines when it combines the best of both of those extremes. On the one hand, composability makes it very easy to automate the composable infrastructure, and it also automates everything within it.

In the case of HPE Synergy, composable management (HPE OneView) makes it easy to automate the compute, storage, and networking -- and even the software stacks that run on it -- through a trivial interface. And at the same time, you want to integrate into the broader, multivendor automation environments so you can automate across all things.


You need that because, guaranteed, no one vendor is going to provide everything you want, which is the failing of the second approach I mentioned. Instead, what you want is to have a very easy way to integrate into all of those automation environments and automation frameworks without throwing a whole lot of work to the customer to do.

We see composability’s strength in being API-driven. It makes it easy to integrate into automation frameworks, and secondly, it completely automates the things underneath that composable environment. You don't have to do a lot of work to get things operating.

So we see that as the best of those two extremes that have historically been pushed on the market by various vendors.

Gardner: Gary, you have innovated and created broad composability. In a market full of other innovators, have there been surprises in what people have done with composability? Has there been follow-on innovation in how people use composability that is worth mentioning and was impressive to you? 

Success stories 

Thome: One of my goals for composability was that, in the end, people would use it in ways I never imagined. I figured, “If you do it right, if you create a great idea and a great toolset, then people can do things with it you can't imagine.” That was the exciting thing for me.

One customer created an environment where they used the HPE composable API in the Terraform environment. They were able to rapidly spin up a variety of different environments based on self-service mechanisms. Their scientist users actually created the IT environments they needed nearly instantly.

It was cool because it was not something that we set out specifically to do. Yet they were saying it solves business needs and their researchers’ needs in a very rapid manner.

Another customer recently said, “Well, we just need to roll out really large virtualization clusters.” In their case, it's a 36-node cluster. It used to take them 21 days. But when they shifted to HPE composability, they got it down to just six hours.

Obviously it’s very exciting to see such real benefits to customers, to get faster with putting IT resources to use and to minimize the burden on the people associated with getting things done.

When I hear those kinds of stories come back from customers -- directly or through other people -- it's really exciting. It says that we are bringing real value to people to help them solve both their IT needs and their business needs.

Gardner: You know you’re doing composable right when you have non-IT people able to create the environments they need to support their requirements, their apps, and their data. That's really impressive.

Gary, what else did you learn in the field from how people are employing composability? Any insights that you could share?

Thome: It's in varying degrees. Some people get very creative in doing things that we never dreamed of. For others, the mindset shift can be challenging, and they are just not ready to shift to a different way of thinking, for whatever reasons.

Gardner: Is it possible to consume composability in different ways? Can you buy into this at a tactical level and a strategic level?

Thome: That's one of the beautiful things about the HPE composability approach. The answer is absolutely, “Yes.” You can start by saying, “I’m going to use composability to do what I always did before.” And the great news is it's easier than what you had done before. We built it with the idea of assembling things together very easily. That's exactly what you needed.

Then, maybe later, some of the more creative things that you may want to do with composability come to mind. The great news is it's a way to get started, even if you haven’t yet shifted your thinking. It still gives you a platform to grow from should you need to in the future.

Gardner: We have often seen that those proof-points tactically can start the process to change peoples' mindsets, which allows for larger, strategic value to come about.

Thome: Absolutely. Exactly right. Yes.

Gardner: There’s also now at HPE, and with others, a shift in thinking about how to buy and pay for IT. The older ways of IT, with long revision cycles and forklift upgrades, meant paying was capital-intensive.

What is it about the new IT economics, such as HPE GreenLake Flex Capacity purchasing, that align well with composability in terms of making it predictable and able to spread out costs as operating expenses?

Thome: These two approaches are perfect together; they really are. They are hand-in-glove and best buddies. You can move to the new mindset of, “Let me just use what I need and then stop using it when I don't need it.”

That mindset -- being able to do rapid, small changes in capacity or code or whatever you are doing, it doesn’t matter -- also allows a new economic perspective. And that is, “I only pay for what I need, when I need it; and I don't pay for the things I am not using.”

Our HPE GreenLake Flex Capacity service brings that mindset to the economic side as well. We see many customers choose composability technology and then marry it with GreenLake Flex Capacity as the economic model. They can bring together that mindset of making minor changes when needed, and only consuming what is needed, to both the technical and the economic side.

We see this as a very compelling and complementary set of capabilities -- and our customers do as well.

Gardner: We are also mindful nowadays, Gary, about the edge computing and the Internet of Things (IoT), with more data points and more sensors. We also are thinking about how to make better architectural decisions about edge-to-core relationships. How do we position the right amount of workload in the right place for the right requirements?

How does composability fit into the edge? Can there also be an intelligent fabric network impact here? Unpack for us how the edge and the intelligent network foster more composability.

Composability on the fly, give it a try 

Thome: I will start with the fabric. So the fabric wants to be composable. From a technology side, you want a fabric that allows you to say, “Okay, I want to very dynamically and easily assemble the network connections I want and the bandwidth I want between two endpoints -- when I want them. And then I want to reconfigure or compose, if you will, on the fly.”

We have put this technology together, and we call it Composable Fabric. I find this super exciting because you can create a mesh simply by connecting the endpoints together. After that, you can reconfigure it on the fly, and the network meets the needs of the applications the instant you need them.

This is the ultimate of composability, brought to the network. It also simplifies the management operation of the network because it is completely driven by the need from the application. That is what directly drives and controls the behavior of the network, rather than having a long list of complex changes that need to be implemented in the network. That tends to be cumbersome and winds up being unresponsive to the real needs of the business. Those changes take too long. This is completely driven from the needs of the application down into the needs of the fabric. It’s a super exciting idea, and we are really big on it, obviously.

Now, the edge is also interesting because we have been talking about conserving resources. There are even fewer resources at the edge, so conservation can be even more important. You only want to use what you need, when you need it. Being able to make those changes incrementally, when you need them, is the same idea as the composability we have been talking about. It applies to the edge as well. We see the edge as ultimately an important part of what we do from a composable standpoint.


Gardner: For those folks interested in exploring more about composability -- methodologies, technologies, and getting some APIs to experiment with -- what advice do you have for them? What are some good ways to unpack this and move into a proof-of-concept project?

Thome: We have a lot of information on our website, obviously, about composability. There is a lot you can read up on, and we encourage anybody to learn about composability through those materials.

They can also try composability because it is completely software-defined and API-driven. You can go in and play with the composable concepts through software. We suggest people try it directly. But they can also connect it to the automation tools they might already be using for other purposes and see how they can compose things that way. It can then extend into all things composable as well.

I definitely encourage people to learn more, but especially to move into the “doing phase.” Just try it out and see how easy it is to get things done.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.
