Wednesday, January 9, 2019

How global HCM provider ADP mines an ocean of employee data for improved talent management

The next BriefingsDirect big data analytics and artificial intelligence (AI) strategies discussion explores how human capital management (HCM) services provider ADP unlocks new business insights from vast data resources.

With more than 40 million employee records to both protect and mine, ADP is in a unique position to leverage its business data network for unprecedented intelligence on employee trends, risks, and productivity. ADP is entering a bold new era in talent management by deploying advanced infrastructure to support the assimilation and refinement of a vast, secure data lake as the foundation for machine learning (ML).

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Unpack how advances in infrastructure, data access, and AI combine to produce a step-change in human capital analytics with panelists Marc Rind, Vice President of Product Development and Chief Data Scientist at ADP Analytics and Big Data, and Dr. Eng Lim Goh, Vice President and Chief Technology Officer for High Performance Computing and Artificial Intelligence at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.



Here are some excerpts:

Gardner: Marc, what's unique about this point in time that allows organizations such as ADP to begin to do entirely new and powerful things with its vast data?

Rind: What’s changed today is the capability to take data -- and not just data that you originally collect for a certain purpose, I am talking about the “data exhaust” -- and to start using that data for purposes that are not the original intention you had when you started collecting it.

We pay one in six full-time employees in the US, so you can imagine the data that we have around the country, and around the world of work. But it's not just data about how they get paid -- it's how they are structured, what kind of teams they are in, advances, bonuses, the types of hours that they work, and everything across the talent landscape. It's data that we have been able to collect, curate, normalize, and then aggregate and anonymize so that we can build some truly fascinating insights that our clients are able to leverage.

Gardner: It's been astonishing to me that companies like yours are now saying they want all of the data they can get their hands on -- not just structured data, but all kinds of content, and bringing in third-party data. It's really “the more, the merrier” when it comes to the capability to gather entirely new insights.

The vision of data insight

Rind: Yes, absolutely. Also there have been advances in methodologies to handle this data -- like you said, unstructured data, non-normalized data, taking data from across hundreds of thousands of our clients, all having their own way that they define, categorize, and classify their workforces.


Now we are able to make sense of all of that by using various approaches to normalize, so that we can start building insights across the board. That's something extremely exciting for us to be able to leverage.

Gardner: Dr. Goh, it's only been recently that we have been able to handle such vast amounts of data in a simplified way and at a manageable cost. What are partners like HPE bringing to the table to support these data platforms and approaches that enable organizations like ADP to make analytics actionable?

Goh: As Marc mentioned, these are massive amounts of data, not just the data you intend to keep, but also the data exhaust. He also mentioned the need to curate it. So the idea for us in terms of data strategy with our partners and customers is, one, to retain data as much as you can.

Secondly, we ensure that you have the tools to curate it, because there is no point in having massive amounts of data over decades if, when you need them to train a machine, you don't know where all of the data is. You need to curate it from the beginning, and if you have not, start curating your data now.

The third area is to federate. So retain, curate, and federate. Why is the third part, to federate, important? As many huge enterprises evolve and grow, a lot of the data starts to get siloed. Marc mentioned a data lake. This is one way to federate, whereby you can cut across the silos so that you can train the machine more intelligently.

We at HPE build the tools to provide for the retention, curation, and federation of all of that data.
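[Editor's note: Dr. Goh's retain-curate-federate framing maps naturally onto how a data lake is typically queried across former silos. The sketch below is a minimal, hypothetical PySpark illustration, not a description of HPE's or ADP's actual tooling. It assumes payroll records already sit as curated parquet in the lake and time-and-attendance data still lives in a relational store, and shows how the two silos can be federated into one view for downstream training.]

```python
# Hypothetical sketch: federating siloed HR data into one view for ML.
# Paths, table names, and columns are illustrative, not ADP's or HPE's.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("hcm-federation-sketch").getOrCreate()

# Silo 1: curated payroll history retained as parquet in the data lake.
payroll = spark.read.parquet("s3a://hcm-lake/curated/payroll/")

# Silo 2: time-and-attendance records still living in a relational database.
timekeeping = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://timekeeping-db:5432/hr")
    .option("dbtable", "public.time_entries")
    .option("user", "reader")
    .option("password", "example")  # use a secrets manager in practice
    .load()
)

# Federate: join the silos on an anonymized employee key so downstream
# training jobs see one coherent, cross-silo view.
federated = (
    payroll.join(timekeeping, on="employee_key", how="inner")
    .groupBy("employee_key")
    .agg(
        F.avg("weekly_hours").alias("avg_weekly_hours"),
        F.max("last_bonus_amount").alias("last_bonus_amount"),
    )
)

federated.write.mode("overwrite").parquet("s3a://hcm-lake/features/engagement/")
```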

Gardner: Is this something you are seeing in many different industries? Where are people leveraging ML, AI, and this new powerful infrastructure? 

Goh: It all begins with what I call the shift. The use of these technologies emerged as industries shifted away from making predictions and decisions using rules and scientific, law-based models.

Then came a recent reemergence of ML, where instead of being based on laws and rules, you evolve your model from historical data. So data becomes important here, because the intelligence of your model depends on the quantity and quality of the data you have. And with this approach you are seeing many new use cases emerge for applying ML to historical data.

One example would be farming. Instead of spraying an entire crop field, a trained system squirts herbicide specifically at the weeds and avoids the crops.

Gardner: This powerful ML example is specific to a vertical industry, but talent management insights can be used by almost any business. Marc, what has been the challenge to generate talent management insights based on historical data?

Rind: It’s fascinating because Dr. Goh’s example pertains to talent management, too. Everyone that we work with in the HCM space is looking to gain an advantage when it comes to finding, keeping, and retaining their best talent.

We look at a vast amount of employment data. From that, we can identify people who ended up leaving an organization voluntarily versus those who stayed and grew, why they were able to grow, based on new opportunities, promotions, different methods of work, and by being on different teams. Similar to the agriculture example, we have been able to use the historical data to find patterns, and then identify those who are the “crops” and determine what to do to keep them happier for longer retention.
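[Editor's note: Spotting who left voluntarily versus who stayed and grew is, at its core, a supervised classification problem on historical employment records. The sketch below uses scikit-learn with invented column names; it illustrates the shape of such a model under those assumptions and is not ADP's actual attrition model.]

```python
# Illustrative attrition model on hypothetical, anonymized employee records.
# Column names and the modeling choice are assumptions, not ADP's method.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

records = pd.read_parquet("hcm_history.parquet")  # hypothetical extract

features = records[[
    "tenure_months",
    "months_since_promotion",
    "pay_vs_market_ratio",
    "avg_weekly_overtime",
    "team_size",
]]
left_voluntarily = records["left_voluntarily"]  # 1 = voluntary departure

X_train, X_test, y_train, y_test = train_test_split(
    features, left_voluntarily, test_size=0.2, stratify=left_voluntarily, random_state=7
)

model = GradientBoostingClassifier(random_state=7)
model.fit(X_train, y_train)

# Rank employees by estimated flight risk so managers can be pointed
# at a short list rather than a wall of raw data.
risk = model.predict_proba(X_test)[:, 1]
print("Holdout AUC:", round(roc_auc_score(y_test, risk), 3))
```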

This is a big shift in the talent management space. We are leveraging vast data -- but not presenting too much data to an HCM professional. We spend a lot of time handling it on their behalf so the HCM professional and even managers can have the insights pushed to them, rather than be bombarded with too much data.

At the end of the day, we are using AI to say, “Hey, here are the people you should go speak with. Or this manager has a lot of high-risk employees. Or this is a critical job role that you might see higher than expected turnover with.” We can point the managers in that direction and allow them to figure out what to do about it. And that's a big shift in simplifying analysis, and at the same time keeping the data secure.

Data that directs, doesn’t distract 

Goh: What Marc described is very similar to what our customers are doing by converting their call center voice recordings into text. They then anonymize it but gain the ability to figure out the sentiment of their customers.

The sentiment analysis of the text -- after converting from a voice recording -- helps them better understand churn. In the telco industry, for example, they are very concerned about churn, which means a customer leaving you for another vendor.

Yes, it's very similar. First you go through a massive amount of historical data, then use smart tools to convert the data to make it usable, and then a different set of tools analyzes it all -- to gain such insights as the sentiment of your customers.
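[Editor's note: As a rough illustration of the voice-to-text-to-sentiment flow described here, the sketch below scores already-transcribed, anonymized call text with an off-the-shelf sentiment model and aggregates the result per customer as a crude churn signal. The library choice, field names, and sample text are assumptions; the actual tooling these customers use is not specified in the conversation.]

```python
# Sketch: sentiment on (already transcribed, anonymized) call-center text
# as one input to churn analysis. Tool and schema choices are illustrative.
import pandas as pd
from transformers import pipeline

# Pretrained sentiment model; speech-to-text is assumed to have run upstream.
sentiment = pipeline("sentiment-analysis")

calls = pd.DataFrame({
    "customer_id": ["c1", "c1", "c2"],
    "transcript": [
        "I have been waiting three weeks for a fix and nobody calls back.",
        "Still no resolution, I am considering another provider.",
        "Thanks, the upgrade went smoothly and works great.",
    ],
})

scores = sentiment(calls["transcript"].tolist())
# Map to a signed score: negative sentiment counts against the relationship.
calls["sentiment"] = [
    s["score"] if s["label"] == "POSITIVE" else -s["score"] for s in scores
]

# Aggregate per customer: persistently negative sentiment is a churn warning sign.
churn_signal = calls.groupby("customer_id")["sentiment"].mean().sort_values()
print(churn_signal)
```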

Gardner: When I began recording use case discussions around big data, AI, and ML, I would talk to organizations like refineries or chemical plants. They were delighted if they could gain a half-percent or a full percent of improvement. That alone meant billions of dollars to them.

But you all are talking about the high-impact improvement for employees and talent. It seems to me that this isn’t just shaving off a rounding number of improvement. Marc, this type of analysis can make or break a company's future.

So let's look at the stakes here. When we talk about improving talent management, this isn’t trivial. This could mean major improvement for any company.


Rind: Every company. Any leader of an organization will tell you that their most important resource is the people that work for the company. And that value is not an easy thing to measure.

We are not talking about how much more we can save on our materials, or how to be smarter in electricity savings. You are talking about people. At the end of the day, they are not a resource as much as they are human beings. You want to figure out what makes them tick, gain insight into where people need to be growing, and where you should spend the human time with them.

Where the AI comes in is to provide that direction and offer suggestions and recommendations on how to keep those people there, happy and productive.

Another part of keeping people productive is in automating the processes necessary for managers. We still have a lot of users punching clocks, managing time, approving pay cards, and processing payroll. There are a lot of manual steps that go on, and there is still a lot of paperwork.

We are using AI to simplify and make recommendations to handle a lot of those pieces, so the HR professional can be focused on the human part -- to help grow careers rather than be stuck processing paperwork and running reports.

Cost-effective AI, ML has arrived 

Gardner: We’re now seeing AI and ML have a major impact on one of the most important resources and assets a company can have, human capital. At the same time, we’re seeing the cost and complexity of the IT infrastructure that supports AI go down thanks to things like hyperconverged infrastructure (HCI), lower cost of storage, the capability to create whole data centers that can be mirrored, backed up, and protected -- as well as ongoing improvements in composable infrastructure.

Are we at the point where the benefits of ML and AI are going up while the cost and composability of the underlying infrastructure are going down?

Goh: Absolutely. That’s the reason we have a reemergence of AI through machine learning of historical data. These methods were already available decades ago, but the infrastructure was just too costly to amass enough data, and you couldn’t get enough compute power to go through that data, for the machine to become intelligent. It wasn’t until now that the required infrastructure came down in cost, and therefore you see this reemergence of ML.

If one were to ask why there has been a surge in AI in the last few years, it would be the lower cost of compute capability. We have reached a point where it is cost-effective enough to amass the data. Also, because of the Internet, the data has become more easily accessible in the last few years.

Gardner: Marc, please tell us about ADP. People might be familiar with your brand through payroll processing, but there's a lot more to it.

Find, manage, and keep talent 

Rind: At ADP, or Automatic Data Processing, data is our middle name. We’ve been working at a global scale for 70 years, now with $12 billion in revenue and supporting over 600,000 businesses -- ranging from multinational corporations to three-person small businesses. We process $2 trillion in payroll and taxes, running about 40 million employee records per month. The amount of data we have been collecting is across the board, not just payroll.

Talent management is a huge thing now in the world of work -- to find and keep the best resources. Moving forward, there is a need to understand innovative engagement of that workforce, to understand the new world of pay and micro-pay, and new models where people are paid almost immediately.

The contingent workforce means a labor market where people are moving away from traditional jobs. So there are lots of different areas within the world of payroll processing and talent management. It has really gotten exciting.

All of this -- optimizing your workforce -- also brings a better understanding of where to save the organization from lost dollars. Because of the amount of data, we can inform a client on more than just, “Okay, this is what your cost of turnover is based on who is leaving, how long it takes them to get productive again, and the cost of recruiting.”
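[Editor's note: The turnover-cost insight can be grounded with simple arithmetic. The sketch below is a back-of-the-envelope formula with made-up figures, not ADP's benchmarking model: it combines recruiting cost with lost productivity during ramp-up for each voluntary departure.]

```python
# Back-of-the-envelope turnover cost; every figure here is illustrative.
def turnover_cost(
    leavers: int,
    recruiting_cost: float,
    annual_salary: float,
    months_to_productive: float,
    productivity_loss: float = 0.5,  # assumed average shortfall while ramping up
) -> float:
    """Estimate annual cost of voluntary turnover for one job role."""
    ramp_up_cost = annual_salary * (months_to_productive / 12) * productivity_loss
    return leavers * (recruiting_cost + ramp_up_cost)

# Example: 25 voluntary departures in a role paying $60,000,
# $8,000 to recruit a replacement, 4 months to full productivity.
print(f"${turnover_cost(25, 8_000, 60_000, 4):,.0f}")  # -> $450,000
```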

We can also show how your HCM compares against others in your field. It's one thing to share some information. It’s another to give an insight on how others have figured this out or are handling this better. You gain the potential to save more by learning about other methods out there that you should explore to improve talent retention.

Once you begin generating cost savings for an organization -- be it in identifying people who are leaving, getting them on-boarded better, or reducing cost from overtime -- it shows the power of the insights and of having that kind of data. And that’s not just about your own organization, but it’s in how you compare to your peers.

So that’s very exciting for us.

All-access data analytics

Goh: Yes, we are very keen to get such reports on intelligence with regards to our talent. It’s become very difficult to hire and retain data scientists focused on ML and AI. These reports can be helpful in hiring and to understand if they are satisfied in their jobs.

Rind: That’s where we see the future of work, and the future of pay, going. We have the organization, the clients, and the managers -- but in the end, it’s also about data insights for the employees. We are in a new world of transparency around data. People understand more, and they are more accepting of information as long as they are not bombarded with it.

As an employee, your partner in your career growth and your happiness at work is your employer. That’s the best partnership, where the employer understands how to put you into the right place to be more productive and knows what makes you tick. There should be understanding of the employees’ strengths, to make sure they use those strengths every day, and anticipate what makes them happier and more productive employees.

Those conversations start to happen because of the data transparency. It’s really very exciting. We think this data is going to help guide the employees, managers, and human resources (HR) professionals across the organizations.


Gardner: ADP is now in a position where your value-added analysis services are at the level where boards of directors and C-suite executives will be getting the insights. Did that require a rethinking of ADP’s role and philosophy?

Rind: Through our journey we discovered that providing insights to the HR professional is one thing. But we realized that to fully unleash and unlock the value in the data, we needed to get it into the hands of the managers and executives in the C-suite.

And the best way to do that was to build ADP’s mobile app. It’s been in the top three most-downloaded applications in the business section of the iTunes Store. People initially got this application to check their pay stub and manage their deductions, et cetera. But now that application is starting to push insights about their organization and what's going on up to the managers and executives.

A key part was to understand the management persona. They are busy running their organizations, and they don’t have the time to pore through the data like a data scientist might to find the insights.

So we built our engine to find and highlight the most important critical data points based on their statistical significance. Do you have an outlier? Are you in the bottom 10 percent as an organization in such areas as new hire attrition? Finding those insights and pushing them to the manager and executive gets them these headlines.
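[Editor's note: The "bottom 10 percent" framing can be illustrated with a simple percentile comparison against peer benchmarks. The sketch below is a generic pandas illustration with invented metric names and data, showing how a client's new-hire attrition could be surfaced as a headline rather than buried in a report. It does not represent ADP's insight engine.]

```python
# Sketch: flag a client metric that sits in the worst decile of its peer group.
# Metric names, thresholds, and data are invented for illustration.
import pandas as pd

peers = pd.DataFrame({
    "client_id": range(1, 101),
    "new_hire_attrition_rate": [0.08 + 0.002 * i for i in range(100)],
})

client_rate = 0.27  # the client being analyzed

# Percentile rank of the client against peers (higher attrition = worse).
percentile = (peers["new_hire_attrition_rate"] < client_rate).mean()

if percentile >= 0.90:
    print(
        f"Headline: new-hire attrition of {client_rate:.0%} is worse than "
        f"{percentile:.0%} of comparable organizations."
    )
```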

Next, as they interact with the application, we gain intelligence about what's important to that manager and executive. We can then push out the insights related to what's most important to them. And that's where we see these value-added services going. An executive is going to care about some things differently than a supervisor or a line manager might.

We can generate the insights based on their own data when they need it through the application, versus them having to go in and get it. I think that push model is a big win for us, and we are seeing a lot of excitement from our clients as they start using the app.
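[Editor's note: The push model described here, learning which insight categories a given manager actually engages with, can be sketched as a simple interaction tally. The example below uses invented event data and only illustrates the ranking idea; it is not ADP's recommendation engine.]

```python
# Sketch: rank insight categories per manager by how often they open them,
# so the most relevant headlines are pushed first. Data is invented.
import pandas as pd

events = pd.DataFrame({
    "manager_id": ["m1", "m1", "m1", "m2", "m2"],
    "insight_category": ["overtime", "attrition", "overtime", "pay_equity", "attrition"],
})

preference = (
    events.groupby(["manager_id", "insight_category"])
    .size()
    .rename("opens")
    .reset_index()
    .sort_values(["manager_id", "opens"], ascending=[True, False])
)

# Top category per manager becomes the first insight pushed to their app.
top_pick = preference.groupby("manager_id").head(1)
print(top_pick)
```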

Gardner: Dr. Goh, are you seeing other companies extend their business models and rethinking who and what they are due to these new analytics opportunities?

Data makes all the difference

Goh: Yes, yes, absolutely. The industry has shifted from one where your differentiated assets were your methods and filed patents, to one where your differentiated asset is the data. Data becomes your defensible asset, because from that data you can build intelligent systems to make better decisions and better predictions. So you see that trend.

In order for this trend to continue, the infrastructure must be there to continually reduce cost, so you can handle the growing amounts of data and not have the cost become unmanageable. This is why HPE has gone with the edge-to-cloud hybrid approach, where the customer can implement this amassing of data in a curated and federated way. They can handle it in the most cost-effective way, depending on their operating or capital budgets.

Gardner: Marc, you have elevated your brand and value through trends analysis around pay equity or turnover trends, and gaining more executive insights around talent management. But that wouldn't have been possible unless you were able to gain the right technology.

What do you have under the hood? And what choices have you made to support this at the best cost?

Rind: We build everything in our own development shop. We collect all the data on our Cloudera [big data lake] platform. We use various frameworks to build the insights and then push those applications out through our ADP Data Cloud.

We have everything open via a RESTful API, so those insights can permeate throughout the entire ADP ecosystem -- everyone from a practitioner getting insights as they on-board a new employee and on out to the recruiting process. So having that open API is a critical part of all of this.
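[Editor's note: To make the "everything open via a RESTful API" idea concrete, here is a minimal Flask sketch that exposes a mocked insight lookup. The endpoint shape, payload, and fields are hypothetical; ADP's actual API is not documented in this conversation.]

```python
# Minimal, hypothetical REST endpoint serving pre-computed talent insights.
# Route and payload shape are illustrative only.
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for insights already computed in the data lake.
INSIGHTS = {
    "org-42": [
        {"type": "retention_risk", "headline": "3 high performers at elevated flight risk"},
        {"type": "overtime", "headline": "Overtime spend up 18% versus peer benchmark"},
    ]
}

@app.route("/v1/organizations/<org_id>/insights", methods=["GET"])
def get_insights(org_id):
    return jsonify(INSIGHTS.get(org_id, []))

if __name__ == "__main__":
    app.run(port=8080)
```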

Gardner: Dr. Goh, one of the things I have seen in the market is that the investments that companies like ADP make in the infrastructure to support big data analytics and AI set in motion a virtuous adoption cycle. The investment to process the data leads to an improvement in analytics, which then brings in more interest in consuming those analytics, which leads to the need for more data and more analytics.

It seems to me like it’s a gift that keeps giving and it grows in value over time.

Steps in the data journey 

Goh: We group our customers on this AI journey into three different groups: Early, started, and advanced. About 70 percent of our customers are in the early phase, about 20 percent in the started phase, where they have already started on the project, and about 10 percent are in the advanced phase.

The advanced-phase customers are like the automotive customers who are already working on autonomous vehicles but would like us to come in and help them with infrastructure to deal with the massive amounts of data.

But the majority of our customers are in the early phase. When we engage with them, the immediate discussion is about how to get started. For example, “Let’s pick a low-hanging fruit that has an outcome that’s measurable; that would be interesting.”

We work with the customer to decide on an outcome to aim for, for the ML project. Then we talk about gaining access to the data. Do they have sufficient data? If so, does it take a long time to clean it out and normalize it, so you can consume it?

After that phase, we start a proof of concept (POC) for that low-hanging fruit outcome -- and hopefully it turns out well. From there the early customer can approach their management for solid funding to get them started on an operational project.

That’s typically how we do it. It always starts with the outcome, and what we are aiming to train the machine to do. Once it has gone through the learning phase, what is it they are trying to achieve, and would that achievement be meaningful for the company? A low-hanging fruit POC doesn’t have to be that complex.
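[Editor's note: Much of the "get access to the data, clean it out, and normalize it" step is mundane but decisive for a first POC. The sketch below shows the kind of normalization pass that might precede training, with invented column names and mapping rules; it is a generic illustration, not an HPE methodology.]

```python
# Sketch: a typical clean-and-normalize pass before a first ML POC.
# Column names and rules are invented for illustration.
import pandas as pd

raw = pd.read_csv("poc_extract.csv")  # hypothetical historical extract

clean = (
    raw.drop_duplicates()
    .dropna(subset=["employee_key", "outcome"])  # rows unusable for training
    .assign(
        # Clients label the same thing differently; map to one vocabulary.
        job_family=lambda df: df["job_title"].str.lower().str.strip().map(
            {"sw engineer": "engineering", "software engineer": "engineering",
             "acct mgr": "sales", "account manager": "sales"}
        ).fillna("other"),
        # Scale a numeric field so magnitudes are comparable across clients.
        hours_scaled=lambda df: (df["weekly_hours"] - df["weekly_hours"].mean())
        / df["weekly_hours"].std(),
    )
)

clean.to_parquet("poc_training_set.parquet", index=False)
```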

Gardner: Marc, any words of wisdom looking back with 20/20 hindsight? When it comes to the investments around big data lakes, AI, and analytics, what would you tell those just getting started?

Rind: Much to Dr. Goh’s point, picking a manageable project is a very important idea. Go for something that is tangible, and that you have the data for. It's always important to get a win instead of boiling the ocean, to prove value upfront.

A lot of large organizations, instead of building data lakes, end up with a bunch of data puddles. Large companies can suffer from different groups building their own.

We have committed to consolidating all of the data into a single data lake. The reason is that you can quickly connect data that you would never have thought to connect before. So understanding what the sales and the service process is, and how that might impact or inform the product, or vice versa, is only possible if you start putting all of your data together. Once you get it together, just work on connecting it up. That's key to opening up the value across your organization.

Connecting the data dots 

Goh: It helps you connect more dots.

Gardner: The common denominator here is that there is going to be more and more data. We’re starting to see the Internet of Things (IoT) and the Industrial Internet of Things (IIoT) bring in even more data.

Even in talent management, there are more ways of gathering even more data about what people are doing, how they are working, and what their efficacy is in the field -- especially across organizational boundaries like contingent workforces -- so you can measure their work and then pay them accordingly.

Marc, do you see ever more data coming online to then need to be measured about how people work?

Rind: Absolutely! There is no way around it. There are still a lot of disconnected points of data, for sure. The connection points are going to continue to be made possible, so you get a 360-degree view of the world at work. From that you can better understand how people are working, how to make them more productive and engaged, and how to bring the flexibility that allows them to work the way they want. But that is only possible by connecting up data across the board and pulling it all together.


Gardner: We haven’t even scratched the surface of incentivization trends. How more data allows you to incentivize people on a micro basis in near-real time is such an interesting new chapter. We will have to wait for another day, another podcast, to get into all of that.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Wednesday, December 12, 2018

Inside story: How HP Inc. moved from a rigid legacy to data center transformation

The next BriefingsDirect data center architecture modernization journey interview explores how HP Inc. (HPI) has rapidly separated and modernized a set of data centers as part of its split from what has become Hewlett Packard Enterprise (HPE).

We will now learn how HP Inc. has taken four shared data centers and transitioned to two agile ones, with higher performance, lower costs, and an obsolescence-resistant and strategic infrastructure design.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to help us define the data center of the future are Sharon Bottome, Vice President and Head of Infrastructure Services at HPI, and Piyush Agarwal, Senior Director of Infrastructure Services, also at HPI. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. 

Here are some excerpts:

Gardner: We know the story of HP Inc. splitting off into a separate company from HPE in 2015. Yet, it remains unusual. Most IT modernization efforts combine -- or at the least replicate -- data centers. You had to split off and modernize your massive infrastructures at the same time, and you are still in the process of doing that.

Sharon, what have been the guiding principles as you created new IT choices from a combined corporate legacy? 

Bottome: When the split happened, leadership had to make a lot of decisions around speed and agility just to get the split done. A new underlying IT infrastructure wasn’t necessarily the key driver for how the split went.

We therefore ended up on shared infrastructure in four data centers, which then ended up being shared again as HPE split off assets to Micro Focus and DXC Technology in 2017. We ended up in a situation of having four data centers with shared infrastructure across four newly separated companies.

As you can imagine, we have a different imperative now that we are a new and separate company. HPI is very aggressive and wants to be very fast and agile. So we really need to continue and finish what was an initial separation of all of the infrastructure.

Gardner: Is it fair to say, Piyush, that this has been an unprecedented affair at such scale and complexity?

Agarwal: Yes, that is true. If you look at what some other organizations and companies have done, there have been $5 billion and $10 billion companies that have undertaken such data center transformations. But the old Hewlett-Packard as a joint company was a $100 billion company, so separating the data centers for a $100 billion company is a huge effort.

So, yes, companies have done this in the past, but the amount of time they had -- versus the amount of time in which we are seeking to do the separation -- makes this almost unthinkable. We are still on that journey.


Gardner: What is new in 2018 IT that allows you to more aggressively go at something like this? What has helped you to do this that was not available just a few years ago?

Bottome: First, the driver for us is we really want to be independent. We want to truly transform our services. That means it's much more about the experiences -- and not just the technology.

We have standardized predominantly on HPE gear. We architected the new data centers using the newest technologies, whether it’s HPE 3PAR, HPE Synergy, or some of the other hardware. That allows us to take about 800 applications and 22,000 operating system instances and migrate those. It's just a huge undertaking.
But by using a lot of the new technology and hardware, we have to then transform our own processes and all the services to go along with that.

Gardner: Piyush, what have you learned in terms of the underlying architecture? One of my favorite sayings is, “Architecture is destiny.” If you make the right architecture decisions, many other things then fall into place.

What have you done on an architectural level that's allowed this to go more smoothly?

Simpler separation solutions

Agarwal: It’s more about a philosophy than just an architecture, in my view. It goes to the previous question you asked. Why is it simpler now? Just after the separation, there was a philosophy around going to public cloud. Everybody thought that we would save a lot of money by just going to the public cloud.

But in the last two or three years, we realized that the total cost of ownership (TCO) in a public cloud -- especially if the applications are not architected for public cloud -- means we are not going to save much. So based on that epiphany, we said, “Hey, is it the right time to look at our enterprise data center and architect it in such a way that it provides cloud-like functionality and still offers flexibility in terms of how much we pay?”

Having HPE Synergy as the underlying composable infrastructure really helps with all of that. Obviously, the newer software-defined data center (SDDC) architectures are also playing a major role. So now, where the application is hosted is less of a concern, because -- thanks to the software-defined architecture and best-fit model -- we may be able to move the workloads around over time.

Gardner: Where are you on this journey? How does that extend around the world?

Multicloud, multinational

Bottome: We are going from four data centers in Texas -- two in Austin and two in Houston -- down to two, one each in Houston and Plano. We are deploying those two with full resiliency, redundancy, and disaster recovery.

Gardner: And how does that play into your global reach? How are you using hybrid IT to bring these applications to your global workforce?

Bottome: Anyone who says they are not in a multicloud environment is certainly fooling themselves. We basically are already in a multicloud environment. We have many, many platforms in other people’s clouds in addition to our core data centers. We also have, obviously, our customer relationship management (CRM) as a cloud service, and we are moving our enterprise resource planning (ERP) into another cloud.

So it's a multicloud environment and managing that and changing operations to be able to support that is one of the things we are doing with this transformation. How do we support all of these cloud environments? We have partners along with us. We are using managed service providers (MSPs). We are very much outsourced, too. So it's a journey with them on learning how to have them all supported across all of these multiple clouds.

Ticketing transformed

Gardner: You mentioned management as being so important. Piyush, when it comes to some of the newer management capabilities we are hearing about – such as HPE OneSphere -- what have you learned along the journey so far? Do both HPE OneView and HPE OneSphere play a role as a continuum?

Agarwal: It’s difficult to get into the technology of OneView versus OneSphere. But the predictive analytics that every provider uses to support us are remarkably different from even just five years ago.

When we were going through this request for proposal (RFP) process for MSPs for our new data center transformation and services, every provider was showing us the software and intelligence on how tickets can be closed -- even before the tickets are generated.

So that’s a huge leap from what we saw four or five years ago. Back then the cost of play was about being in a low-cost location because employee costs were 80 percent of the total. But new automation and intelligence into the ticketing systems is a way to move forward. That’s what will drive the service efficiencies and cost reductions.
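[Editor's note: The "close tickets before they are generated" capability is, in essence, incident prediction on infrastructure telemetry. The sketch below trains a simple classifier on hypothetical historical telemetry labeled with whether a ticket followed; it illustrates the concept only and does not represent any specific provider's tooling.]

```python
# Sketch: predict likely incidents from telemetry so remediation can start
# before a ticket is ever raised. Features and labels are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

telemetry = pd.read_parquet("telemetry_history.parquet")  # hypothetical extract

X = telemetry[["cpu_util_p95", "disk_latency_ms", "error_log_rate", "memory_pressure"]]
y = telemetry["ticket_within_24h"]  # 1 if an incident ticket followed

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Score live systems; anything above the threshold is queued for proactive fixes.
at_risk = X_test[model.predict_proba(X_test)[:, 1] > 0.8]
print(f"{len(at_risk)} systems flagged for proactive remediation")
```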

Gardner: Sharon, as you continue on your transformation journey, are you able to do more for less?

Bottome: This is actually a great success story for us. In the new data center transformation and the services transformation RFP that Piyush mentioned, we are actually getting $50 million in savings every year over five years. That has allowed us, obviously, to reinvest that money in other areas. So, yes, it's been a great success story.


We are transforming a lot of the services -- not just in the data center. It's also about how our user base will experience interacting with IT as we move to more of these management platforms with this transformation.

Gardner: How will this all help your IT operations people to be more efficient?

IT our way, with our employees 

Agarwal: When we talk about IT services, there is always a pendulum. If you go back 15 or 20 years, there used to be articles about how Solectron moved all of their IT to IBM. In 2001, there were so many of those kinds of deals.

But within one to two years people realized how difficult it was. The success of the businesses depended not just on IT outsourcing, but in keeping the critical talent to manage the business expectations and manage the service providers.

Where we are now with HPI, over the period of the last three years, we have learned how to work in a managed services environment. What that means is how to get the best out of a supplier but still maintain the critical knowledge of the environment within our own IT.
Our own employees can therefore run the IT tomorrow on some other service provider, if we so choose. It maintains the healthy mix of relationships between the suppliers and our employees. So, we haven’t gone too far right or too far left in terms of how the IT should be run from a service provider perspective.

With this transformation, that thought process was reinforced. We realized when we began this transformation process that we didn’t yet have critical mass to run our IT services internally. Over the period of the last one-and-a-half years, we have gained that critical mass back.

From an HPI IT operations team’s perspective, it brings confidence back -- versus having a victim mentality of, “Oh, it’s a supplier and the suppliers are going to do it” -- and gives us the confidence to deliver on that accountability with our own IT employees. They are the ones driving our supplier to do the transformation, and to do the operations afterward.

Gardner: We have also seen an increase in automation, orchestration, and some very powerful tools, many of them data-driven. How have automation techniques helped you in this process of regaining and keeping control?

Automation advantages 

Agarwal: DevOps provides the overall infrastructure orchestration and the agility to provision. Being part of the previous Hewlett-Packard Company, we always had the latest and greatest of those tools. We were a testing ground for those tools. We always relied on automated ways of provisioning, and on quick provisioning.

If I look at that from a transformation perspective, we will continue to use those orchestration and provisioning tools. Our internal cloud is heavily reliant on such cloud service automation (CSA). For other technologies, we rely on server automation for all of the Linux and Unix platforms. We always have that mix of quick provisioning.
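[Editor's note: As a generic illustration of the kind of automated, quick provisioning referred to here, the sketch below posts a build request to a hypothetical internal provisioning API and polls for completion. The endpoint, payload, and status fields are all invented; this is not the HPE CSA or server automation interface.]

```python
# Sketch: request a server build from a hypothetical internal provisioning API
# and wait for it to finish. All URLs and fields are invented for illustration.
import time
import requests

PROVISIONING_API = "https://provisioning.example.internal/api/v1"

def provision_server(profile: str, environment: str) -> str:
    """Submit a build request and return the resulting server ID."""
    resp = requests.post(
        f"{PROVISIONING_API}/servers",
        json={"profile": profile, "environment": environment},
        timeout=30,
    )
    resp.raise_for_status()
    job_id = resp.json()["job_id"]

    # Poll until the orchestration layer reports the build as complete.
    while True:
        status = requests.get(f"{PROVISIONING_API}/jobs/{job_id}", timeout=30).json()
        if status["state"] in ("completed", "failed"):
            break
        time.sleep(15)

    if status["state"] == "failed":
        raise RuntimeError(f"Provisioning job {job_id} failed")
    return status["server_id"]

# Example: a standard Linux web-tier profile for the Plano data center.
# server_id = provision_server("linux-web-standard", "plano-prod")
```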

At the same time, we will continue to encourage our developers to encompass these infrastructure technologies in their DevOps models. We are not there yet, where the application tier integrates with the infrastructure tier to provide a true DevOps model, but I think we are going to see it in the next one to two years.

Gardner: Is there a rationalization process for your data? What’s the underlying data transformation story that’s a subset of the general data center modernization story?

Agarwal: Our CIO was considered one of the most transformative in 2015. There is a Forbes article on it. As part of the 2015 separation, we undertook a couple of transformation journeys. The data center transformation was one, but the other was the application transformation. Sharon mentioned that for our CRM application, we moved to Microsoft Dynamics. We are consolidating our ERP.

Application rationalization (AR) remains an ongoing exercise for us. In a true sense, we had 1,200 to 1,300 applications. We are trying to bring that down to 800. Then, there is a further reduction plan over the next two to three years. Certainly the application and data center transformations are going in parallel.

But from a data perspective -- looking at data in general or of having data totally segregated from the applications layer -- I don’t think we are doing that yet.

Given where we are in the overall journey of application transformation, and the number of applications we have, segregating the data from the applications is, in my view, a much higher level of efficiency. Once we have completed the data center transformation, consolidated the applications, and reduced them by as many as possible, then we will take a look at segregating the data layer from the applications layer.

Gardner: When you do this all properly, what other paybacks do you get? What have been some of the unexpected benefits?

Getting engaged 

Bottome: We received great financial benefits, as I mentioned. But some of the other areas include the end-user experience. Whether it’s faster time-to-fix or an improved experience for our employees interacting with IT support, we’re seeing efficiencies there with automation. And we are going to bring a lot more efficiency to our own teams.

And one of the measurements that we have internally is an employee satisfaction measure. I found this to be very interesting. For the infrastructure organization, the IT internal personnel, their engagement score went up 40 points from before we started this transformation. You could see that not only are they getting reskilled or retooled -- and we make sure we have enough of that expertise in-house -- their engagement scores went up right along with that. It helped us keep our employees very motivated and engaged.

Gardner: People like to work with modern technology more than the old stuff, is that not true?


Agarwal: Yes, for sure. I want to work with the iPhone X, not the iPhone 7.

Gardner: What have you learned that you could impart to others? Now, not many others are going to be doing this reverse separation, modernization, consolidation, and application rationalization process at the same time -- while keeping the companies operating.

But what would you tell other people who are going about application and data center modernization?

Prioritize your partners

Bottome: Pick your partner carefully. Picking the right partner is very, very important, not only the technology partner but any of the other partners along the journey with you, be it application migration or your services partners. Our services partner is DXC. And the majority of the data center is built on HPE gear, along with Arista and Brocade.

Also, make sure that you truly understand all of the other transformations that get impacted by the transformation you’re on. In all honesty, I’ve had some bumps along the way because there was so much transformation going on at once. Make sure those dependencies are fully understood.

Gardner: Piyush, what have you learned that you would impart to others?

Agarwal: It goes back to one of the earlier questions. Understand the business drivers in addition to picking your partners. Know your own level of strength at that point in time.
If we had done this a year and a half ago, the confidence level and our own capability to do it would have been different. So, picking your partner and having confidence in your own abilities are both very important.

Bottome: Thank you, Dana. It was exciting to talk about something that has been a lot of work but also a lot of satisfaction and an exciting journey.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.
