Thursday, January 14, 2016

Learn how SKYPAD and HPE Vertica enable luxury brands to gain rapid insight into consumer trends

The next BriefingsDirect big-data use case leadership discussion explores how retail luxury goods market analysis provider Sky I.T. Group has upped its game to provide more buyer behavior analysis faster -- and with more user depth.

Learn how Sky I.T. changed its data analysis platform infrastructure to Hewlett Packard Enterprise (HPE) Vertica -- and why that has helped solve its challenges around data variety, velocity, and volume and make better insights available across the luxury retail marketplace.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To share how retail intelligence just got a whole lot smarter, we welcome Jay Hakami, President; Dane Adcock, Vice President of Business Development, and Stephen Czetty, Vice President and Chief Technology Officer, all at Sky I.T. Group in New York. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What's driving the need for greater and better big-data analysis for luxury retailers? Why do they need to know more, better, faster?

Adcock: Well, customers have more choices. As a result, businesses need to be more agile and responsive and fill the customer's needs more completely or lose the business. That's driving the entire industry into practices that mean shorter times from design to shelf in order to be more responsive.

It has created a great deal of gross margin pressure, because there's simply more competition and more selections that a consumer can make with their dollar today.

Gardner: Is there anything specific to the retail process around luxury goods that is even more pressing when it comes to this additional speed?
Adcock: Yes. The downside to making mistakes in terms of designing a product and allocating it in the right amounts to locations at the store level carries a much greater penalty, because it has to be liquidated. There's not a chance to simply cut back on the supply chain side, and so margins are more at risk in terms of making the mistake.
Ten years ago, from a fashion perspective, it was about optimizing the return and focusing on winners. Today, you also have to plan to manage and optimize the margins on your losers as well. So, it's a total package.

Gardner: So, clearly, the more you know about what those users are doing, or what they have done, the better. It seems to me, though, that we're talking about a market-wide look rather than just one store, one retailer, or one brand.

How does that work, Jay? How do we get to the point where we've been able to gather information at a fairly comprehensive level, rather than cherry-picking or maybe getting a non-representative look based on only one organization’s view into the market?

Hakami: With SKYPAD, what we're doing is collecting data from the supplier, from the wholesaler, as well as from their retail stores, their wholesale business, and their dot-com -- the whole omnichannel. When we collect that data, we cleanse it to make sure it's meaningful to the user.

Now, we're dealing with a connected world where the retailer, wholesalers, and suppliers have to talk to one another and plan together for the buying season. So the partnerships and the insight that they get into product performance are extremely important, as Dane mentioned, in terms of the gross margin and in terms of the sell-through information. SKYPAD basically provides that intelligence, that insight, into this retail/wholesale world.

Gardner: Isn’t this also a case where people are opening up their information and making it available for the benefit of a community or recognizing that the more data and the more analysis that’s available, the better it is for all the participants, even if there's an element of competition at some point?

Hakami: That's correct. The retail business likes to share the information with their suppliers, but they're not sharing it across all the suppliers. They're sharing it with each individual supplier. Then, you have the market research companies who come in and give you aggregation of trends and so on. But the retailers are interested in sell-through. They're interested in telling X supplier, "This is how your products are performing in my stores."

If they're not performing, then there's going to be a mark down. There's going to be less of a margin for you and for us. So, there's a very strong interest between the retailer and a specific supplier to improve the performance of the product and the sell-through of those products on the floor.

Gardner: Before we learn more about the data science and dealing with the technology and business case issues, tell us a little bit more about Sky I.T. Group, how you came about, and what you're doing with SKYPAD to solve some of these issues across this entire supply chain and retail market.

Complex history

Hakami: I'll take the beginning. I'll give you a little bit of the history, Dana, and then maybe Dane and Stephen can jump in and tell you what we are doing today, which is extremely complex and interesting at the same time.

We started with SKYPAD about eight years ago. We found a pain point within our customers where they were dealing with so many retailers, as well as their own retail stores, and not getting the information that they needed to make sound business decisions on a timely basis.

We started with one customer, which was Theory. We came to them and we said, "We can give you a solution where we're going to take some data from your retailers, from your retail stores, from your dot-com, and bring it all into one dashboard, so you can actually see what’s selling and what’s not selling."

Fast forward, we've been able to take not only EDI transactions, but also retail portals. We're taking information from any format you can imagine -- from Excel, PDF, merchant spreadsheets -- bringing that wealth of data into our data warehouse, cleansing it, and then populating the dashboard.

So today, SKYPAD is giving a wealth of information to the users by the sheer fact that they don’t have to go out by retailer and get the information. That’s what we do, and we give them, on a Monday morning, the information they need to make decisions.

Dane, can you elaborate more on this as well?

Adcock: This process has evolved from a time when EDI was easy, because it was structured, but it was also limited in the number of metrics that were provided by the mainstream. As these business intelligence (BI) tools have become more popular, the distribution of data coming from the retailers has gotten more ubiquitous and broader in terms of the metrics.

But the challenge has moved from reporting to identification of all these data sources and communication methodologies and different formats. These can change from week to week, because they're being launched by individuals, rather than systems, in terms of Excel spreadsheets and PDF files. Sometimes, they come from multiple sources from the same retailer.

One of our accounts would like to see all of their data together, so they can see trends across categories and different geographies and markets. The challenge is to bring all those data sources together and align them to their own item master file, rather than the retailer’s item master file, and then be able to understand trends, which accounts are generating the most profits, and what strategies are the most profitable.
It's been a shifting model from the challenge of reporting all this data together, to data collection. And there's a lot more of it today, because more retailers report at the UPC level, size level, and the store level. They're broadcasting some of this data by day. The data pours in, and the quicker they can make a decision, the more money they can make. So, there's a lot of pressure to turn it around.

Gardner: When you're putting out those reports on Monday morning, do you get queries back? Is this a sort of a conversation, if you will, where not only are you presenting your findings, but people have specific questions about specific things? Do you allow for them to do that, and is the data therefore something that’s subject to query?

Subject to queries

Adcock: It’s subject to queries in the sense that they're able to do their own discovery within the data. In other words, we put it in a BI tool, it’s on the web, and they're doing their own analysis. They're probing to see what their best styles are. They're trying to understand how colors are moving, and they're looking to see where they're low on stock, where they may be able to backfill in the marketplace, and trying to understand what attributes are really driving sales.

But of course, they always have questions about the completeness of the data. When things don't look correct, they have questions about it. That drives us to be able to do analysis on the fly, on-demand, and deliver responses: "All your stores are there, all of your locations, everything looks normal." Or perhaps there seem to be some flaws or things in the data that don't actually look correct.

Not only do we need to organize it and provide it to them so that they can do their own broad, flexible analysis, but they're coming back to us with questions about how their data was audited. And they're looking for us to do the analysis on the spot and provide them with satisfactory answers.

Gardner: Stephen Czetty, we've heard about the use case, the business case, and how this data challenge has grown in terms of variety as well as volume. What do you need to bring to the table from the data architecture to sustain this growth and provide for the agility that these market decision-makers are demanding?

Czetty: We started out with an abacus, in a sense, but today we collect information from thousands of sources literally every single week. Close to 9,000 files will come across to us, and we'll process them correctly and sort them out -- what client they belong to and so forth -- but the challenge is forever growing.

We needed to go from older technology to newer technology, because our volumes of data are increasing while the amount of time we have to bring the data in is static.

So we're quite aware that we have a time limit. We found HPE Vertica as a platform for us to be able to collect the data into a coherent structure in a very rapid time as opposed to our legacy systems.

It allows us to treat the data in a truly vertical way, although that has nothing to do with the application or the database itself. In the past we had to deal with each client separately. Now we can deal with each retailer separately and just collect their data for every single client that we have. That makes our processes much more pipelined and far faster in performance.

The secret sauce behind that is the ability in our Vertica environment to rapidly sort out the data -- where it belongs, who it belongs to -- calculate it out correctly, put it into the database tables that we need to, and then serve it back to the front end that we're using to represent it.

That's why we've shifted from a traditional database model to a Vertica-type model. It's 100 percent SQL for us, so it looks the same for everybody who is querying it, but under the covers we get tremendous performance and compression and lots of cost savings.
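
To make that shift concrete, here is a minimal sketch of the kind of retailer-centric load Czetty describes. The connection details, schema, and table names are hypothetical, and vertica_python is just one common way to reach Vertica from Python -- a sketch, not Sky I.T.'s actual pipeline.

```python
# Illustrative only: load one retailer's weekly feed once, for all clients,
# rather than running a separate load per client. All names are hypothetical.
import vertica_python

CONN_INFO = {
    "host": "vertica.example.com",  # placeholder host
    "port": 5433,
    "user": "etl",
    "password": "***",
    "database": "skypad_dw",        # hypothetical database name
}

def load_retailer_feed(retailer: str, csv_path: str) -> None:
    conn = vertica_python.connect(**CONN_INFO)
    try:
        cur = conn.cursor()
        # Bulk-load the delimited feed into a per-retailer staging table;
        # rejected rows are diverted to a file for later inspection.
        cur.execute(
            f"COPY staging.sales_{retailer} FROM LOCAL '{csv_path}' "
            f"DELIMITER ',' REJECTED DATA '/tmp/{retailer}_rejects.txt'"
        )
        # Route the staged rows to every client's slice of the fact table
        # in one SQL statement, keyed off a UPC-to-client mapping.
        cur.execute(
            f"INSERT INTO warehouse.sales_fact "
            f"SELECT s.*, m.client_id "
            f"FROM staging.sales_{retailer} s "
            f"JOIN warehouse.client_item_map m ON s.upc = m.upc"
        )
        conn.commit()
    finally:
        conn.close()
```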

Gardner: For some organizations that are dealing with the different sources and different types of data, cleansing is one problem. Then, the ability to warehouse that and make it available for queries is a separate problem. You've been able to tackle those both at the same time with the same platform. Is that right?

Proprietary parsers

Czetty: That's correct. We get the data, and we have proprietary parsers for every single data type that we get. There are a couple of hundred of them at this point. But all of that data, after parsing, goes into Vertica. From there, we can very rapidly figure out what is going where and what is not going anywhere, because it’s incomplete or it’s not ours, which happens, or it’s not relevant to our processes, which happens.

We can sort out what we've collected very rapidly and then integrate it with the information we already have, or insert new information if it's brand-new. Prior to this, we'd been doing this largely by hand, and that's no longer effective with our number of clients growing.
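
As a rough illustration of that parser-dispatch step -- the registry, format keys, and field names below are hypothetical, not Sky I.T.'s actual parsers:

```python
# Hypothetical sketch: route each incoming file to the parser registered for
# its format, and reject anything unrecognized or incomplete instead of
# loading it.
import csv
from pathlib import Path
from typing import Callable, Dict, Iterator, List

Row = dict  # one cleansed sales record

PARSERS: Dict[str, Callable[[Path], Iterator[Row]]] = {}

def register(fmt: str):
    """Decorator that adds a parser to the registry, keyed by format name."""
    def wrap(fn):
        PARSERS[fmt] = fn
        return fn
    return wrap

@register("nordstrom_csv")  # illustrative format key
def parse_nordstrom_csv(path: Path) -> Iterator[Row]:
    with path.open(newline="") as fh:
        for raw in csv.DictReader(fh):
            yield {
                "upc": raw.get("UPC", "").strip(),
                "store": raw.get("Store Nbr", ""),
                "units_sold": int(raw.get("Sales Units") or 0),
            }

def ingest(path: Path, fmt: str) -> List[Row]:
    parser = PARSERS.get(fmt)
    if parser is None:
        # Unknown source: quarantine it rather than guess.
        raise ValueError(f"No parser registered for format {fmt!r}")
    # Drop rows with no UPC; they can't be matched to a client's item master.
    return [row for row in parser(path) if row["upc"]]
```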

Gardner: I'd like to hear more about what your actual deployment is, but before we do that, let’s go back to the business case. Dane and Jay, when HPE Vertica came online, when Steve was able to give you some of these more pronounced capabilities, how did that translate into a benefit for your business? How did you bring that out to the market, and what's been the response?

Hakami: I think the first response was "wow." And I think the second response was, "Wow, how can we do this fast and move quickly to this platform?"

Let me give you some examples. When Steve did the proof of concept (POC) with the folks from HPE, we were very impressed with the statistics we had seen. In other words, going from a processing time of eight or nine hours to minutes was a huge advantage that we saw from the business side, showing our customers that we can load data much faster.

The ability to use less hardware and infrastructure as a result of the architecture of Vertica allowed us to reduce, and to continue to reduce, the cost of infrastructure. These two are the major benefits that I've seen in the evolution of us moving from our legacy to Vertica.

From the business perspective, if we're able to deliver faster and more reliably to the customer, we accomplished one of the major goals that we set for ourselves with SKYPAD.

Adcock: Let me add something there. Jay is exactly right. The real impact, as it translates into the business, is that we have to stop collecting data at a certain point in the morning and start processing it in order to make our service-level agreements (SLAs) on reporting for our clients, because they start their analysis. The retail data comes in staggered over the morning, and it may not all be in by the time we need to shut that processing off.

One of the things that moving to Vertica has allowed us to do is to cut that time off later, and when we cut it off later, we have more data, as a rule, for a customer earlier in the morning to do their analysis. They don’t have to wait until the afternoon. That’s a big benefit. They get a much better view of their business.

Driving more metrics

The other thing that it has enabled us to do is drive more metrics into the database and do some processing in the database, rather than in the user tool, which makes the user tool faster and it provides more value.

For example, maybe for age on the floor, we can do the calculation in the background, in the database, and it doesn't impede the response in the front-end engine. We get more metrics in the database calculated rather than in our user tool, and it becomes more flexible and more valuable.
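
For instance, a metric like age on the floor can be pre-computed in SQL and stored in a summary table the dashboard just reads. The schema below is a hypothetical illustration, not SKYPAD's actual tables:

```python
# Illustrative only: compute "age on the floor" (days since a style first hit
# a store) inside the database instead of in the BI front end.
# Table and column names are hypothetical.
AGE_ON_FLOOR_SQL = """
CREATE TABLE warehouse.age_on_floor AS
SELECT
    f.client_id,
    f.style_id,
    f.store_id,
    DATEDIFF('day', MIN(f.first_receipt_date), CURRENT_DATE) AS age_on_floor_days
FROM warehouse.sales_fact f
GROUP BY f.client_id, f.style_id, f.store_id;
"""
```

Run once per weekly load (for example with the cursor from the earlier sketch), the result lands in a table the front-end tool can query without recomputing anything.
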
Gardner: So not only are you doing what you used to do faster, better, cheaper, but you're able to now do things you couldn't have done before in terms of your quality of data and analysis. Is there anything else that is of a business nature that you're able to do vis-à-vis analytics that just wasn't possible before, and might, in fact, be equivalent of a new product line or a new service for you?

Czetty: In the old model, when we got a new client, we had to essentially recreate the processes that we'd built for other clients to match that new client, because we were collecting that data just for that client, just at that moment.

So 99 percent of it is the same as any other client, but one percent is always different, and it had to be built out. On-boarding a client, as we call it, took us a considerable amount of time -- we are talking weeks.

In the current model, where we're centered on retailers, the only thing that will take us a long time to do in this particular situation is if there's a new retailer that we've never collected data from. We have to understand their methodology of delivery, how it comes, how complex it is and so forth, and then create the logic to load that into the database correctly to match up with what we are collecting for others.

In this scenario, since we’ve got so many clients, very few new stores or new retailers show up, and typically it’s just our clients on retail chain, and therefore our on-boarding is just simplified, because if we are getting Nordstrom’s data from client A, we're getting the same exact data for client B, C, D, E, and F.

Now, it comes through a single funnel and it's the Nordstrom funnel. It’s just a lot easier to deal with, and on-boarding comes naturally.

Hakami: In addition to that, since we're adding more significant clients, the ability to increase variety, velocity, and volume is very important to us. We couldn't scale without having Vertica as a foundation for us. We'd be standing still, rather than moving forward and being innovative, if we stayed where we were. So this is a monumental change and a very instrumental change for us going forward.

Gardner: Steve, tell us about your actual deployment. Is this a single tenant environment? Are you on a single database? What’s your server or data center environment? What's been the impact of that on your storage and compression and costs associated with some of the ancillary issues?

Multi-tenant environment

Czetty: To begin with, we're coming from a multi-tenant environment. Every client had its own private database in the past, because in IBM DB2, we couldn't add all these clients into one database and get the job done. There was not enough horsepower to do the queries and the loads.

We ran a number of databases on a farm of servers, on Rackspace as our hosting system. When we brought in Vertica, we put up a minimal configuration with three nodes, and we're still living with that minimal configuration with three nodes.

We haven't exhausted our capacity on the license by any means whatsoever in loading up this data. The compression is obscenely high for us, because at the end of the day, our data absolutely lends itself to being compressed.

Everything repeats over and over again every single week. In the world of Vertica, that means it only appears once in wherever it lives in the database, and the rest of it is magic. Not to get into the technology underneath it at this point, from our perspective, it's just very effective in that scenario.
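
The effect being described is essentially run-length encoding on sorted columns; a toy Python illustration (not Vertica's actual storage engine) shows why weekly repeats compress so well:

```python
from itertools import groupby

# Toy illustration of run-length encoding: a sorted column of repeating
# retailer names collapses to one (value, count) pair per distinct value.
column = ["Nordstrom"] * 50_000 + ["Saks"] * 40_000 + ["Bloomingdale's"] * 30_000

rle = [(value, sum(1 for _ in run)) for value, run in groupby(column)]
print(rle)                                  # three (value, count) pairs
print(len(column), "values stored as", len(rle), "runs")
```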

Also in our IBM DB2 world, we're using quite costly large SAN configurations with lots of spindles, so that we can have the data distributed all across the spindles for performance on DB2, and that does improve the performance of that product.

However, in HPE Vertica, we have 600 GB drives and we can just pop more in if we need to expand our capacity. With the three nodes, we've had zero problems with performance. It hasn't been an issue at all. We're just looking back and saying that we wish we had this a little sooner.

Vertica came in and did the install for us initially. Then, we ended up taking those servers down and reinstalling it ourselves. With a little information from the guide, we were able to do it. We wanted to learn it for ourselves. That took us probably a day and a half to two days, as opposed to Vertica doing it in two hours. But other than that, everything is just fine. We’ve had a little training, we’ve gone to the Vertica event to learn how other people are dealing with things, and it's been quite a bit of fun.

Now there is a lot of work we have to do at the back end to transform our processes to this new methodology. There are some restrictions on how we can do things, updates and so forth. So, we had to reengineer that into this new technology, but other than that, no changes. The biggest change is that we went vertical on the retail silos. That's just a big win for us.

Gardner: As you know, HPE Vertica is cloud-ready. Is there any benefit to that further down the road where maybe it’s around issues of a spike demand in holiday season, for example, or for backup recovery or business continuity? Any thoughts about where you might leverage that cloud readiness in the future?

Dedicated servers

Czetty: We're already sort of in the cloud with the use of dedicated servers, but in our business, the volume increase in the stores around the holidays doesn't double the volume. It adds 10 percent, 15 percent, maybe 20 percent of the volume for the holiday season. It hasn't been that big a problem in DB2, so it's certainly not going to be a problem in Vertica.

We've looked at virtualization in the cloud, but with the size of the hardware that we actually want to run, we want to take advantage of the speed and the memory and everything else. We put up pretty robust servers ourselves, and it turns out that in secure cloud environments like we're using right now at Rackspace, it's simply less expensive to do it as dedicated equipment. Spinning up another dedicated node at Rackspace takes about the same time it would take to set up and configure a virtual system -- a day or so -- and they can give us another node just like this on our rack.

We looked at the cloud financially every single time that somebody came around and said there was a better cloud deal, but so far, owning it seems to be a better financial approach.

Gardner: Before we close out, looking to the future, I suppose the retailers are only going to face more competition. They're going to be getting more demand from their end users or customers for user experience for information.

We're going to see more mobile devices that will be used in a dot-com world or even a retail world. We are going to start to see geolocation data brought to bear. We're going to expect the Internet of Things (IoT) to kick in at some point where there might be more sensors involved either in a retail environment or across the supply chain.

Clearly, there's going to be more demand for more data doing more things faster. Do you feel like you're in a good position to do that? Where do you see your next challenges from the data-architecture perspective?

Czetty: Not to disparage the luxury industry too much, but at this point, they're not on the bleeding edge on the data collection and analysis side, whereas they are on the bleeding edge on social media and so forth. We've anticipated that. We've got some clients who are collecting information about their web activities, and we have done analysis to identify customers who present different personas through the different channels they use to contact the company.

We're dabbling in that area, and it's going to grow as the interfaces become more tablet- and phone-oriented. A lot of sales are potentially going to go through social media, and not just the official websites, in the future.

We'll be capturing that information as well. We've got some experience with that kind of data from work we've done in the past. So, this is something I'm looking forward to getting more of, but as of today, we're only doing it for a few clients.

Well positioned

Hakami: In terms of planning, we're very well-positioned as a hub between the wholesaler and the retailer, the wholesaler and their own retail stores, as well as the wholesaler and their dot-coms. One of the things that we are looking into, and this is going to probably get more oxygen next year, is also taking a look at the relationships and the data between the retailer and the consumer.

As you mentioned, this is a growing area, and the retailers are looking to capture more of the consumer information so they can target-market to them, not based on segment but based on individual preferences. This is again a huge amount of data that needs to be cleansed, populated, and then presented to the CMOs of companies to be able to sell more, market more, and be in front of their customers much more than ever before.
Gardner: That's a big trend that we're seeing in many different sectors of the economy -- that drive for personalization -- and it really is these data technologies that allow that to happen.
Any other thoughts about where the intersection of computer science capabilities and market intelligence demands are coming together in new and interesting ways?

Adcock: I'm excited about the whole approach to leveraging some predictive capabilities alongside the great inventory of data that we've put together for our clients. It's not just about creating better forecasts of demand, but optimizing different metrics, using this data to understand when product should be marked down, what types of attributes of products seem to be favored by different locations of stores that are obviously alike in terms of their shopper profiles, and bringing together better allocations and quantities in breadth and depth of products to individual locations to drive better, higher percentage of full-price selling and fewer markdowns for our clients.

So it’s a predictive side, rather than discovery using a BI tool.

Czetty: Just to add to that, there's the margin. When we talked to CEOs and CFOs five or six years ago and told them we could improve business by two, three, or four percent, they were laughing at us, saying it was meaningless to them. Now, three, four, or five percent, even in the luxury market, is a huge improvement to business. The companies like Michael Kors, Tory Burch, Marc Jacobs, Giorgio Armani, and Prada are all looking for those margins.

So, how do we become more efficient with product assortment, how do we become more efficient with distributing all of these products to different sales channels, and then how do we increase our margins? How do we not over-manufacture and not create those blue shirts in Florida, where they're not selling, and create them instead for Detroit, where they're selling like hotcakes?

These are the things that customers are looking at and they must have that tool or tools in place to be able to manage their merchandising and by doing so become a lot more agile and a lot more profitable.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Monday, January 11, 2016

Is 2016 the year that accounts payable becomes strategic?

The next BriefingsDirect business innovation thought leadership discussion focuses on the changing role and impact of accounts payable (AP) as a strategic business force.

We’ll explore how intelligent AP is rapidly transforming by better managing exceptions, adopting fuller automation, and implementing end-to-end processes that leverage connected business networks.

As the so-called digital enterprise adapts to a world of increased collaboration, digital transactions, and e-payables management -- AP needs to adapt in 2016.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about the future of AP as a focal point of automated business services we are joined by Andrew Bartolini, Chief Research Officer at Ardent Partners in Boston, and Drew Hofler, Senior Director of Marketing at SAP Ariba. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Drew, let’s look at the arrival of 2016. We have more things going on digitally, we have a need to improve efficiency, and AP has been shifting -- but how will 2016 make a difference? What should we expect in terms of AP elevating its role in the enterprise?

Hofler: AP is one of those areas that everybody looks at, first and foremost, as a cost center. So when AP looks at what they can do better, they've typically thought about efficiency and cost savings first. That's the upside of being a cost center: saving money by spending less.

But what we've been seeing happen over the last year or so, and what will accelerate in 2016, is that AP is moving from just a cost-saving and efficiency focus to value creation. And this is because they sit at the hub of the three critical elements of working capital -- inventory, receivables, and payables -- and AP sits squarely on that last one.

And they have influence over something that affects the company's working capital. AP has become very important for companies: by creating efficiencies in the invoice process, it opens up opportunities, and they're going to be able to affect a company's working capital for the positive going forward. That's going to grow as they move beyond the automation that is the foundation and start seeing the opportunities that come out of it.

Gardner: Andrew, do you see AP also as a digital hub, growing in its role and influence and being able to increase its value beyond cost efficiency into these other higher innovation levels or strategic levels of benefit?

Tracking trends

Bartolini: Yes, absolutely. I've been researching and working in this space for 17 years, doing significant market research over the last 11 years. So I've been tracking the trends and the ebbs and flows of relative interest and investment in AP.

What we've seen in 2015 in some of our most recent research is that there has been a broader focus or a shift away from viewing the AP opportunity as an efficiency one or solely an efficiency one. Let’s automate. Let’s reduce our costs in processing invoices. Let’s reduce our costs in payments.

But what we saw this year for the first time in our research was that the top area of focus, the top business pressure that’s driving investments in AP transformation was the need to get better visibility into all the valuable information that comes across the AP departments or through the AP operation, both on the invoice and the payment side.

That begins to change the conversation. We talked about the evolution of AP moving from a strictly back-office siloed department to an increasing point of collaboration with procurement at the purchase-to-pay (P2P) process, with treasury, from a cash-management perspective. Now, we see it starting to move and becoming a true intelligence hub, and that’s where we've seen some momentum. There’s a lot of wind in the sails for AP, really pushing that forward in 2016 and beyond.

Gardner: Andrew, what's driving this? Is it the technology that's now making that data available?

Bartolini: There are a couple of factors underlying this movement. The first is taking the broader perspective within business as a whole. Businesses can no longer allow distinct business functions to operate within silos. They need everybody on the same team, rowing in the same direction. That has forced greater collaboration.

That’s something that we've seen more broadly between procurement and finance over the past couple of years, specifically with the role of the CPO and the CFO. A majority of organizations see a very strong level of collaboration within those two job roles and within their departments as a whole.

That has opened up larger opportunities for AP, which is a more tactical function as it relates to procurement, but by bringing the two groups together, you now have shared resources and shared focus on improving the entire source-to-settle process.

That relationship has driven greater interest, because the opportunities are fantastic for procurement to leverage the value of a more efficient AP process and to be able to see the information that’s there.

As Drew mentioned, by becoming more efficient on the front end of the AP process, organizations are doing a better job in reducing the amount of paper that’s coming in through the front door. They're processing their invoices faster. That's opening up opportunities on the back-end, on the payment side.

So, you have a confluence of those factors and you see newer solutions in the marketplace as well that are really changing the view that AP departments have of what defines a transformation. They're thinking more holistically across the entirety of the AP process, from invoice receipt, all the way through payment and settlement.

Allowing for variables

Gardner: Drew, it seems that over history, once a contract is closed the terms remain fairly rigid, and then there is a simple fulfillment aspect to it. But it sounds like -- as we get more visibility, as we get digitized, and we can automate -- we can handle exceptions better and allow for more variables.

I've heard of instances where the terms can be adjusted, where market forces can provide ways in which a deal gets amended on an ongoing basis, whether it's in the payment terms or perhaps in other ancillary issues. Is that what we're seeing -- that the digital transformation is giving us more opportunity to be flexible, and is that then elevating the role of the AP organization?

Hofler: You make a couple of good points there, and it really springs from what Andrew just said about not having to silo or not staying in that siloed place where AP and procurement are separate or the processes are separate, because what companies have realized, particularly as the digital age has made it possible, is that the procure-to-pay process, the source-to-settle process, is a fundamentally connected one.

Over the years they've operated very disconnectedly, with hand-offs, where procurement does its thing, writes a contract and then hands it off once the purchase order (PO) goes out the door, and then AP takes up the process from there. But in that, there are a lot of disconnects.

When you're able to bring networked systems together to bring visibility across that entire process, now you have the AP group acting in a more strategic manner to deliver value by acting as the value-capture group.

For example, prior to this age that we live in now, a contract would be written, it would have specific terms for specific items and specific prices for specific SKUs, and maybe some volume discounts. AP had no idea about that, because these contracts would get signed and they get put in a file cabinet or stuck in a PDF file somewhere, and the AP had no idea. So they went off of the invoice that came in.

This is how an entire industry around post-audit recovery came about -- going in after the fact and trying to claw back overpayments -- because AP had no visibility into what procurement did.

By bringing these together in a system, on a network, you're able to automatically capture those savings, because AP now has visibility into what's happening inside of that contract, and can ensure, on an automated basis, that they are paying the right amount. So, it becomes not just a buy-right thing from the procurement side, but a pay-right thing as well -- buy-right and pay-right tied together.
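
A minimal sketch of that kind of automated pay-right check -- the invoice and contract structures here are hypothetical, not SAP Ariba's API:

```python
# Hypothetical sketch: flag invoice lines whose unit price exceeds the
# contracted price for that SKU, instead of discovering overpayment in a
# post-audit recovery exercise later.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class InvoiceLine:
    sku: str
    qty: int
    unit_price: float

def price_exceptions(lines: List[InvoiceLine],
                     contract_prices: Dict[str, float],
                     tolerance: float = 0.005) -> List[str]:
    """Return human-readable exceptions for lines priced above contract."""
    problems = []
    for line in lines:
        contracted = contract_prices.get(line.sku)
        if contracted is None:
            problems.append(f"{line.sku}: no contract price on file")
        elif line.unit_price > contracted * (1 + tolerance):
            overpay = (line.unit_price - contracted) * line.qty
            problems.append(f"{line.sku}: billed {line.unit_price:.2f} vs "
                            f"contract {contracted:.2f} (exposure {overpay:.2f})")
    return problems

# Example: one compliant line, one overbilled line that gets flagged.
print(price_exceptions(
    [InvoiceLine("SKU-1", 10, 25.00), InvoiceLine("SKU-2", 4, 31.50)],
    {"SKU-1": 25.00, "SKU-2": 30.00},
))
```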

But that's your point about terms. Yes, you have certain terms tied into that contract, but again, that's set at the beginning of a relationship with a supplier. There are lots of opportunities that come up when everybody has visibility into what's going on, into an early-approved invoice for example.

Opportunities for collaboration

There are lots of opportunities that arise for collaboration where maybe the situation has changed a little bit. Maybe a supplier, instead of being paid in 45 days, now would very much like to be paid in five days, because they have payroll ahead or they have an equipment purchase to make, and they want to accelerate their cash flow.

In a disconnected world, you can't account for that. But in a networked world, where there is visibility, I like to say that it's the confluence of visibility, opportunity, and capability where all parties have visibility into the opportunity created by efficiencies with that earlier approved invoice. Then, there's the capability inside the system to simply click a button and accelerate that cash flow or modify those terms on that contract, in those payment terms.

So P2P is a linked value chain, and the digital technology of today can bring those pieces together so that there are no barriers to that information flow, and that creates all sorts of opportunities for all parties involved.

Gardner: Andrew, the common denominator here is visibility; visibility is what allows a lot of these efficiencies and innovations to occur. Where does that visibility come from, where does the data get generated, how is it shared, and how do we further reduce the silos through the free flow of data, analysis, and information?

Bartolini: Visibility at the core starts with automation tools that automate processes. If we're looking at the P2P process, you're looking at an eProcurement system. You can go back to where it starts, from sourcing and contracting. If you have contract visibility or at least visibility into your header-level information, you begin to have an understanding of what, in fact, the relationship is and what relationships you have as an organization, who are your preferred suppliers, who are your strategic suppliers.

As you start to drill down, if you have the capability to capture things like payment terms and service-level agreements (SLAs), that information begins to provide a more robust view of the relationship, which can then be managed more strategically from a procurement perspective, and it really sets up the operational procurement side.

If you have an eProcurement system, you're able to generate purchase orders against those contracts and you're ensuring that before the purchase order is even sent to the supplier, the pricing and the terms are correct.

That cascades over onto the AP automation side. We use the term "ePayables" very broadly to describe AP automation solutions. When you have an eProcurement and an ePayables solution connecting, you begin to have greater visibility within the enterprise for the entirety of the relationship and the entirety of the transaction.

On the flip side, which we haven't gotten to yet, is the value proposition for suppliers, who really view their customer relationship as a single one. What often happens is that they have multiple relationships within that customer that really aren't needed. They negotiate a contract, they have their internal customer, and then they're dealing with maybe a procurement department, and then trying to figure out who they're dealing with on the AP side.

When you’ve got visibility that can be shared with trading partners, you get extraordinarily greater value out of the entire thing, and you streamline relationships and you're able to focus on the more important aspects of those relationships. But to the original question, visibility starts and ends with technology.

Centralizing procurement

Gardner: We're also seeing the trend of larger organizations centralizing procurement, sometimes placing it, if it's a global organization, in another country instead of having it in multiple countries or multiple markets. It becomes consolidated and automated. How does that fit in, Drew?

Hofler: We see definitely a move toward a shared service or a global process ownership type of thing, where they want to take the variability out of the different geographies or different business units doing what is essentially a standardized process, or they want to make that standardized.

We definitely see movement in that, and it's both a business desire and goal to remove the variability, but it's enabled by the technology that we have today in business networks, in centralized systems, that can tie all of this together. Now you have business units operating across the world, but tapping into all of that information and getting all the invoices to come into one place through a network. Those business units can see that, and they have access, on a controlled basis, to the information that they need inside of those systems as well.

Business networks really help to drive the ability to connect the data to everybody, to turn that data not just into information but into intelligence, and to get it in front of the right people at the right time and in the right process. Having that centralized network hub, where everybody can connect at the point of the process that they need, really helps drive and enable the movement toward shared services and centralized AP and procurement.

Bartolini: Anyone would be hard-pressed to make a case that you should have a decentralized AP operation. That doesn't mean that you can't have staff that are geographically dispersed, but there's no reason why that should exist.

On the procurement side, if you're sourcing globally, you can have different centers of excellence. Again, you want to have a more centralized view into visibility and to be utilizing the same systems and processes. On the AP side, centralization also helps from the standpoint that you begin to get a better sense of what resources are being applied in the AP process today. It also becomes easier to centralize or to gain budgets for investment in tools that can drive efficiency, visibility, and all the things we've just been talking about.

Gardner: Another thread that I’m hearing in our conversation is that technology needs to be exploited, visibility gained, and automation made possible. Then, centralization can become a huge benefit from all of that. But none of this is possible if we don't go all digital. If we don't get off of manual processes and get off of paper. What do you think is going to be the ratio, if you will, of a paper approach that's left? Are we finally going to pull the last paper invoice out, or the last payment that's manual? Where are we, Drew, when it comes to making that full transition to digital? It seems to me an overwhelmingly beneficial direction.

Still using paper

Hofler: I've been in the payment space for about 20 years and the payable space for the last 10, and in payment, there have been predictions in that space that we would get rid of the paper check completely. Gosh, for the last 20 years everybody is saying it's going to happen, but it hasn't. It's still about 50 percent paper checks.

So I'm not going to make a prediction that paper is going to go away, but most definitely, companies need to deal with and move toward electronic data. Even if it's paper based, a lot of companies are moving toward getting the data in electronically, but a lot of them say, "Well, I get my paper scanned, I've sent it to a scanning service or whatever, and I get it in PDF or electronic data form."

That's fine, and that's one step along the process, but companies are realizing that there's a limitation in that. When you do that, you're simply getting the data that was on that paper source document faster. If that paper source document data is garbage -- and that's what creates exceptions -- then you're just getting the exceptions quicker. That doesn't really help the process; it doesn't solve the true issue of making sure you're not only getting the data faster, but that you're getting it in clean and getting it in better.

This is where companies need to move toward full electronic invoicing, where the invoice starts its life as an electronic invoice, so that a supplier can submit it and have it run through business rules electronically before it even gets to AP. They can identify the exceptions and turn it around to the supplier to have them correct it, all in a very quick and automated fashion, so that by the time AP gets it, it's 98 percent exception-free, or straight-through processing.
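
A hedged sketch of what those up-front business rules might look like; the field names and rules are illustrative, not any particular e-invoicing product's rule set:

```python
# Illustrative sketch: validate an electronic invoice against simple business
# rules at submission time, so the supplier can correct it before AP ever
# sees it. Field names and rules are hypothetical.
from typing import Dict, List

def validate_invoice(invoice: dict, open_pos: Dict[str, dict]) -> List[str]:
    errors = []
    po = open_pos.get(invoice.get("po_number", ""))
    if po is None:
        errors.append("Invoice does not reference an open purchase order")
        return errors
    if invoice["currency"] != po["currency"]:
        errors.append(f"Currency {invoice['currency']} differs from PO currency {po['currency']}")
    if invoice["total"] > po["remaining_amount"]:
        errors.append("Invoice total exceeds remaining amount on the PO")
    if not invoice.get("tax_id"):
        errors.append("Supplier tax ID is missing")
    return errors

# Example: the errors go straight back to the supplier, not to AP weeks later.
print(validate_invoice(
    {"po_number": "PO-1001", "currency": "USD", "total": 12_500.0, "tax_id": ""},
    {"PO-1001": {"currency": "USD", "remaining_amount": 10_000.0}},
))
```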

Companies are going to realize that just transforming a paper source document into an electronic form has had value in the past, but its value is quickly running out, and they need to move toward true electronic.

How far are we going to get along that path? Well, that's a big prediction to make, but I think we'll move a long way down that path. Companies definitely need to recognize, and are starting to recognize, that they need to deal with native electronic data in order to truly gain value, efficiency, and intelligence, and be able to leverage that into other opportunities.

Gardner: We mentioned exception handling, exception management, making that easier, better, faster. It strikes me that exception management is really a means to a greater end, and the greater end is general flexibility -- even looking at things as markets, as auctions, where there's variability and a fit-for-purpose kind of mentality can come in.

So am I off in some pie-in-the-sky direction, Andrew? Or when we think about the ability to do exception management, are we really opening up the opportunity to do even more interesting, innovative things with business transactions?

Reduction of exceptions

Bartolini: No, I don't think it's pie in the sky. In one of our recent surveys of about 200 AP, finance, and P2P professionals, we asked: what's the number-one game changer that will get your AP operation to the next level of performance? The answer that came in loud and clear was the reduction of exceptions and the ability to perform root-cause analysis in a much more significant way.

So it's a fundamental problem, and the opportunity is there for a majority of organizations. About two-thirds of organizations feel that if they could handle this issue better, if they could reduce that number, they would be operating at a significantly higher level.

We haven't really talked too much about the suppliers in this equation, but a lot of the business focus and a lot of the themes in our research this year and into 2016 have been focused on agility and the need for organizations to become more adept and responsive to market shifts and changes.

Part of that is getting better alignment with the strategic suppliers that are going to drive more value and that are having a greater impact on the company's own products and services and ultimately their results.

So, look at something like exceptions, which are problematic for both sides of the trading-partner equation. When you start to reduce those, you start to eliminate a lot of the friction that is built in -- certainly around the manual P2P process, but it can exist even in an automated environment. When that noise in the relationship is reduced, it allows organizations to focus on goals and objectives and to invest more in the strategic elements of the relationship.

Gardner: Drew, anything to add to that, particularly when you consider that the pool of suppliers is, in a sense, growing when we look at contingent workers, when we look at different types of suppliers as smaller firms, perhaps located at a much greater geographic distance than in the past. We have more open markets as a result of connected business networks. How do you see that panning out in 2016?

Hofler: Yes, there's definitely growth in that. There's a pretty good stat showing that a much larger portion of a company's workforce is not bound to that company -- it's a temporary, contingent workforce, and services from contractors that aren't necessarily tied to them.

The need to handle that, particularly the churn that happens with that, the broader number of contractors that you might have with that, the variability in the services that are asked for, that are needed, all of this adds layers of complexity, potentially, to AP, and to procurement as well. We're focused more on AP here, but it adds layers of complexity in managing that and approving that, and as a result, can add a significant number of exceptions.

So, while you're operating your business in a way that is a little more fitting in today’s world, you're also adding a lot of complexity and exceptions to the process, unless you’ve got a way to automatically build in the ability to define the invoice and to identify the exceptions so that these various suppliers who are much smaller and geographically dispersed can submit online or can submit electronically and can do so in a way that's standardized, even across this large group.

Catching exceptions

The exceptions can be caught right away -- take field services, for example. If there's a service sheet that was put out by procurement to hire somebody to go fix an oil well, and they get out to the oil well and there's more to be done than what was on it, they have to get approval for that. Being able to get that approval online, automatically, through a mobile device, have it tied directly into the invoice, and have the invoice close against it eliminates all those potential breakpoints of finally getting that invoice in and getting the exceptions dealt with and approved.

Exceptions to me aren't just a matter of, "Gosh, they're hard to do." They're something we want to get rid of. But exceptions are simply the barrier to the opportunity that comes when you can get that invoice moved through and approved right away, not necessarily a matter of paying the invoice faster from the payer’s perspective, but the ability to have it approved and ready to go right away, so that you have options, and so that the supplier has options potentially for cash flow and things like that.

Exceptions become something that we have to eliminate in order to get to that opportunity, but without the platform to do that, then -- to your point -- the dispersed workforce and the increasing number of contractors can make it even harder than it is or has been.

Gardner: When we look at the payoffs from doing things better using AP intelligence and technology, we are not just looking at efficiency for its own sake. I think you're opening up more opportunity, as you put it, to the larger business.

If procurement and accounts payable can adjust and react rapidly to complexity, to exceptions, to new ways of doing business -- this is a powerful tool to the business at large. They can go at markets differently. They can acquire goods and services across a wider portfolio of choices, a wider marketplace, and therefore be able to perhaps get things easier, faster, cheaper.

Let's look at this idea of non-tangible payoffs that elevate the value of AP to being a sophisticated, intelligent operation. Let's start with Andrew. What are some of the intangibles -- if we do everything we've mentioned above well, how does this empower the organization in ways that we haven't seen before?

Bartolini: That's a great question, and it gets back to the point I was just making about agility. We're operating in an age of innovation, where globalization, the level of competition, and the speed of business in general have really compressed the time frames in which organizations must react -- and I think this is happening at a much faster pace.

You can see that in areas like the consumer electronics market, and in all industries, product lifecycles are shortening, and so the windows of opportunity to maximize sales and revenues in the marketplace are much shorter as well.

Things are happening at a much faster clip and in tighter time frames. This has created a much greater reliance upon your suppliers and upon your supply chain. And so having visibility across the P2P process, across the source-to-settle process, and having much tighter relationships with your strategic suppliers ultimately positions the organization to become much more agile and much more competitive. And that's the value dividend that's created from a more streamlined P2P process.

It’s being able to more fully optimize the relationships that you have with your suppliers, and it's being able to make decisions and shifts in a much faster way than in the past, and that's not just from the sourcing side, that carries all the way through to the payment side as well.

Business agility

Gardner: Drew, when we think about the strategic role of AP -- of providing business agility -- you can’t get more strategic than that.

Hofler: No, that's right. AP in particular can become the source of much of that strategic intelligence that companies need. They can't just see themselves as processing paper or as a back-office cost center, but as the ones that can, through their use of and investment in systems and networks, capture the data in invoices, for example, and feed that data into the sourcing cycle at the beginning, so it becomes a virtuous circle.

They can create the opportunity for the company to meet some of its very strategic goals around working capital. AP's ability to tie into what procurement has done before them, automate the process, and get things done very nimbly creates opportunity for treasury as well, so now you have a third party in there.

The treasurer is very concerned about what his liabilities are out there, what the payments liabilities are. Does he know? Often, in today’s world, treasurers can’t see their payable liabilities until they run through their payment cycle and they're ready to be paid the next day. So they have to move cash around to make sure that they have enough cash to manage those liabilities going out.

With visibility into what’s going to be paid out 30 days from now, having that 30 days in advance offers the treasurer all sorts of options on how to manage their cash among various different bank accounts.

Plus, it gives them the option to do things around their days payable outstanding (DPO), to bring third parties into a business network, to bring in third-party supply chain finance that allow a supplier who might need early payment liquidity and early cash flow to access that from a third party while the buying organization is able to hold on to their cash, and so extend their DPO and improve their working-capital management.

Or it gives the treasurer the opportunity to pay that supplier early, using excess cash that’s sitting in a bank account. Even though the Fed just raised rates in the last day or two, they only raised it a quarter of a percent. So it’s still not earning very much. But now, a treasurer can take that and pay a supplier early in exchange for a discount that earns them something along the lines of 8-12 percent annually.
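
The arithmetic behind that comparison is simple; a small sketch (the figures are illustrative, not Hofler's) shows how a modest discount annualizes:

```python
# Annualized return of an early-payment discount: the discount earned,
# scaled by how many times that acceleration could be repeated in a year.
def annualized_discount_yield(discount_pct: float, days_accelerated: int) -> float:
    return discount_pct * (365 / days_accelerated)

# Paying 30 days early for a 1% discount annualizes to roughly 12%,
# far above what idle cash earns in a near-zero-rate environment.
print(f"{annualized_discount_yield(0.01, 30):.1%}")  # ~12.2%
```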

It opens up options, but right at the nexus of all of that opportunity, information, and intelligence sits AP. That’s a very strategic place for AP to be if they can get their hands around that data, create those opportunities, and make it visible to the rest of the business.

Gardner: One last area to get into for 2016 ... One of the top concerns for companies and organizations, in addition to business agility, is risk, security, and dealing with compliance and regulatory issues. Is there something that AP brings to the table when it has elevated itself to the strategic level, with that visibility, with that data, and with the ability to act quickly, take on exceptions, and work through them?

Andrew, we've heard how, on the procurement side, examining the supply chain, knowing that supply chain, being able to head off interruptions or other issues, and having a business-continuity mindset are important. Does that translate over to AP? Why and how does AP have a larger role in issues around continuity?

Risk mitigation

Bartolini: From a risk-mitigation standpoint, when you have greater assurance that the invoices are matched to the PO, to the orders that have been generated, and to what has been delivered, and when you have a clear view into how that payment is made, across and into the supplier’s account, you're reducing the opportunities for fraud, which can exist in any type of environment, manual or fully automated. One of the largest risk-mitigation opportunities for AP is really at the transaction level.
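As a rough sketch of the transaction-level matching described here -- not any particular system's logic; the field names and price tolerance are hypothetical -- a three-way match compares the purchase order, the goods receipt, and the invoice and flags anything that does not line up:

```python
# Illustrative three-way match: flag invoices that don't line up with the
# purchase order and the goods receipt. Field names and the price tolerance
# are assumptions for this sketch, not any specific system's schema.

def three_way_match(po, receipt, invoice, price_tolerance=0.01):
    """Return a list of discrepancies; an empty list means the invoice clears."""
    issues = []
    if invoice["po_number"] != po["po_number"]:
        issues.append("invoice references a different PO")
    if invoice["quantity"] > receipt["quantity_received"]:
        issues.append("billed quantity exceeds quantity received")
    if abs(invoice["unit_price"] - po["unit_price"]) > price_tolerance * po["unit_price"]:
        issues.append("unit price differs from the PO beyond tolerance")
    return issues

po = {"po_number": "PO-1001", "unit_price": 25.00}
receipt = {"po_number": "PO-1001", "quantity_received": 90}
invoice = {"po_number": "PO-1001", "quantity": 100, "unit_price": 25.00}

print(three_way_match(po, receipt, invoice))
# ['billed quantity exceeds quantity received'] -> route to exception handling
```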

When you start to cascade the visibility that AP generates out into the larger organization, you can start to do some predictive analysis from the procurement side to better understand potential issues that suppliers may be facing.

Also from a treasury standpoint, when you have visibility into the huge amount of money that is being paid out by AP, you have a better sense of your company’s liquidity, your cash positions, and what you need to do to ensure that you maintain that liquidity.

Looking on the supplier’s side, when you're processing invoices more quickly and you have the opportunities to make payments early, there are those opportunities for the larger companies to step in and help out some of their struggling suppliers, whether that’s paying their invoices early or some other mechanism. It starts with visibility, and from that visibility you start to have a better ability to make smarter decisions and to anticipate potential issues.

Gardner: Last word to you, Drew, on this issue of risk reduction, continuity, and using intelligence to head off disruption or fraud. How do you see that panning out in 2016?

Hofler: I think AP does play a large role in that. Andrew touched on some of that.

One of the key areas, if you think about the supply chain from the procurement side, is that the financial supply chain is pretty much just as important as the physical supply chain when it comes to risk. As we learned, people have had that deep in their bones since 2008 and 2009, when liquidity became a very big issue. There was liquidity risk in supply chains from suppliers who couldn’t access cash flow or didn’t have sufficient cash flow. They may have had an otherwise healthy business, but not sufficient cash flow to maintain operations, and that hurt the buying organizations that depended on them.

By being able to approve invoices very quickly and to offer your suppliers, through a single portal and a single network, access to cash -- either from the buying company using its own cash or from third-party financing -- you essentially are able to eliminate or greatly mitigate liquidity risk in your supply chain.

But there are other areas of risk, too. Anytime you're talking about AP, Andrew said it the right way, where he talked about the massive amounts of money that AP is paying out. That’s their job.

In order to do that, they have to actually capture, manage, and maintain bank account information from their suppliers in order to pay electronically. We're always trying to get away from paper checks, because paper checks, we know, are rife with fraud, horribly opaque, and very slow. But electronic payments require AP to capture bank account information, and that’s not a core competency of most AP departments.

Network power

But AP departments can tap into the power of network ecosystems that bring in third parties whose core competency that very much is, eliminating the need to ever even see a supplier’s bank account information.

Some forward-looking AP departments are looking at how they can divest themselves of what is not their core competency, and in terms of risk mitigation around payment, one of those things is getting rid of having to touch bank account information.

Beyond that, when we talk about compliance, AP sits right in the middle of it, whether that's VAT compliance in Europe, archival compliance, or Sarbanes-Oxley (SOX) compliance here in the US. Having all of the data electronic, having an auditable trail, and being able to know exactly where every piece of data and every dollar or euro spent has been and where it went along the way -- with a trail of that automatically captured and archived -- goes a long way toward compliance.

AP is the one that sits right there to be able to capture that and provide that.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.

You may also be interested in:

Friday, January 8, 2016

Redmonk analysts on best navigating the tricky path to DevOps adoption

The next BriefingsDirect analyst thought leadership discussion explores the pitfalls and payoffs of DevOps adoption -- with an emphasis on the developer perspective.

We're joined by two prominent IT industry analysts, the founders of RedMonk, to unpack the often tricky path to DevOps and to explore how enterprises can find ways to make pan-IT collaboration a rule, not an exception.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

With that, please join me in welcoming James Governor, Founder and Principal Analyst at RedMonk, and he is based in London, and Stephen O'Grady, also Founder and Principal Analyst at RedMonk, and he is based in Portland, Maine. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Gentleman, let’s look at DevOps through a little bit of a different lens. Often, it’s thought of as a philosophy. It’s talked about as a way of improving speed and performance of applications and quality, but ultimately, this is a behavior and a culture discussion -- and the behavior and culture of developers is an important part of making DevOps successful.

What do developers think of DevOps? Is this seen as a positive thing or a threat? Do they have a singular sense of it, or is it perhaps all over the map?

O'Grady
O’Grady: The overwhelming assessment from developers is positive, simply because -- if you look at the tasks facing a lot of developers today -- their work is going to involve operational tasks.

In other words, if you're working, for example, on public-cloud platforms, some degree of what you're doing as a developer is operational in nature, and vice versa, once you get to the operational side. A lot of the operational side has now been automated in ways that look very much like what we used to expect from development. 

So there is very naturally a convergence between development and operations that developers are embracing.

Driven by developers

Governor: I think developers have driven the change. We've seen this in a number of areas, whether it’s data management or databases, where the developers said, "We're not going to wait for the DBA anymore. We're going to do NoSQL. We're just going to choose a different store. And we're not going to just use Oracle." We've seen this in different parts of IT.

Governor
The bottom line is that waterfall wasn’t working. It wasn’t leading to the results it should have, and developers were taking some of the heat for that. So engineers and developers have begun to build out what has now become DevOps. A lot of them were cloud natives and thought they knew best, and in some cases, they actually did some really good work.

Partly enabled by cloud computing, DevOps has made a lot of sense, because you're able to touch everything in a way that you weren’t able to on your own premises. It has been a developer-led phenomenon. It would be surprising if developers were feeling threatened by it.

Gardner: Enterprises, the traditional Global 2000 variety, see what happens at startups and recognize that they need to get on that same speed or agility, and oftentimes those startups are developer-centric and culturally driven by developers.
Learn More About DevOps
Solutions That Unify Development and Operations
To Accelerate Business Innovation
If the developers are, in fact, the tip of the arrow for DevOps, what is it that the operations people should keep in mind? What advice would you give to the operations side of the house for them to be better partners with their developer core?
Governor: The bottom line is that it’s coming. This is not an option. An organization could say we have this way of doing ops and we will continue doing that. That’s fine, but to your point about business disruption, we don’t have time to wait. We do need to be delivering more products to market faster, particularly in the digital sphere, and the threat posture and the opportunity posture have changed very dramatically in the past three years.

It's the idea that Hilton International or Marriott would be worrying about Airbnb. They weren’t thinking like that. Or transport companies around the world asking what the impact of Uber is.

We've all heard that software is eating the world, but what that basically says is that the threats are real. We used to be in an environment where, if you were a bank, you just looked at your four peer banks and thought that as long as they don’t have too much of an advantage, we're okay. Now they're saying that we're a bank and we're competing with Google and Facebook.

Actually, the premium placed on stability is a little bit lower than it was. I had a very interesting conversation with a retailer recently. He talked about the different goals that organizations have. And it was very interesting to me that he said that, on the first day they launched a new mobile app, it fell over. And they were all high-fiving and fist-pumping, because it meant they had driven so much traffic that the app fell over -- and it was something they needed to remediate.

That is not how IT normally thinks. Frankly, the business has not told IT they want it to be either, but it has sort of changed. I think the concern for new experiences, new digital products is now higher than the obsession with stability. So it is a different world. It is a cultural shift.

Differentiator

Gardner: Whether you're a bank or you're making farm equipment, your software is the biggest differentiator you can bring to the market. Stephen, any thoughts about what operations should keep in mind as they become more intertwined with the developer organization?

O'Grady: The biggest thing for me is a variety of macro shifts in the market, things like the availability of open-source software and public cloud. It used to be that IT could control the developer population. In other words, they were essentially the arbiter of what went to production and what actually got produced. If you're a developer and you have a good idea, but you don’t have any hardware or infrastructure, then you're out of luck.

These days, that’s changed. We see this organizationally, where developers go to operations and say they need infrastructure, and operations says six months. The developers say, "To hell with six months. I'm going to go to Amazon and have something up in 90 seconds." The notion that's most important for operations is that they're going to have to work with developer populations, because, one way or another, developers are going to get what they want.

Gardner: When we think about the supplier, or vendor, side of things, almost every vendor I've talked to in the last two or three months has raised the question of DevOps. It has become top of mind for them. Yet, if you were to ask an organization how you install DevOps, how you buy DevOps, or what shape box it comes in, none of those questions is relevant, because it’s not a product.

How do the vendors grease the skids toward adoption, if you will? What do you think needs to happen from those tools, platforms and technologies?

Governor: It’s very easy to say that DevOps is not a product, and that’s true. On the other hand, there are some underlying technologies that have driven this, particularly around automation and the notion of configuration as code.

If you're in ops and you are not currently looking at tools like Chef, Puppet, Ansible, or SaltStack, you're doing yourself a disservice. They're powerful tools in the arsenal. 

One of the things to understand is that in the world of open source, it's perhaps going to be packaged by a more traditional vendor. Certainly, one of the things is rethinking how you do automation. I would counsel anyone in IT ops to at least have a team starting to look at that, perhaps for some new development that you're doing.
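To give a flavor of configuration as code -- this is a generic, hypothetical sketch, not the syntax of Chef, Puppet, Ansible, or SaltStack -- the core idea is declaring desired state and converging to it idempotently, so that running the same definition twice changes nothing:

```python
# Generic illustration of declarative, idempotent configuration management:
# declare the desired state, then converge to it; a second run is a no-op.
# Paths and file contents are hypothetical.

import os

DESIRED_STATE = {
    "directories": ["myapp_config", "myapp_config/releases"],
    "files": {"myapp_config/app.conf": "listen_port = 8080\n"},
}

def converge(state):
    for d in state["directories"]:
        if not os.path.isdir(d):
            os.makedirs(d)                 # create only if missing
            print(f"created {d}")
    for path, content in state["files"].items():
        current = None
        if os.path.exists(path):
            with open(path) as f:
                current = f.read()
        if current != content:             # write only if the file has drifted
            with open(path, "w") as f:
                f.write(content)
            print(f"converged {path}")

if __name__ == "__main__":
    converge(DESIRED_STATE)  # safe to run repeatedly
```

Because the definition lives in version control like any other code, changes to infrastructure become reviewable, repeatable, and auditable -- which is the point Governor is making about automation.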

It’s easy to say that everything is a massive transformation, because then it’s just a big services opportunity and there's lots of hand waving. But at the end of the day, DevOps has been a tools-driven phenomenon. It’s about being closer to the metal, having better observability, and having a better sense of how the application is working.

One of the key things is the change in responsibility. We've lived in an environment where we remember the blame game and lots of finger pointing. If you look at Netflix, that doesn’t happen. The developer who breaks the build fixes it.

There are some significant changes in culture, but there are some tools that organizations should be looking at.

What can they do?

O’Grady: If we're talking from a vendor perspective, they can talk to their customers about the cultural and organizational change that’s necessary to achieve the results they want, but they can't actually effect that change. In other words, what we're talking about, rather, is what they can do.

The most important thing that vendors who play in this and related spaces can do is understand that it’s a continuum. From the first code that gets written, to check-in, to build, to being deployed on to infrastructure that’s configured using automated software, it’s a very, very long chain, a very long sequence of events.

Understanding, from a vendor perspective, where you fit into that lifecycle and which other pieces you have to integrate with -- and, from a customer perspective, all the different pieces they are going to be using -- is critical.

In other words, if you're focused on a particular functional capability and that's the only thing that you are aware of and that’s the only thing that you tackle, you're doing your customer a disservice. There are too many moving pieces for any one vendor to tackle them all. So it’s going to be critically important that you're partner-friendly, project-friendly and so on and integrate well and play nicely with others.
Learn More About DevOps
Solutions That Unify Development and Operations
To Accelerate Business Innovation
Governor: But also, don’t let a crisis go to waste. IT ops has budget, but they're also always getting a kick in the teeth. Anything that goes wrong is their fault, even if it’s someone else's. The simple fact is that we're in an environment where organizations, as I've said, are thinking that the threat and opportunity posture has changed. It's time to invest in this.

A good example of this would be that we always talk about standardization, but then nobody wants to give us the budget to do that. One of the things that we've tended to see in these web-native companies and how they manage operations and so on is that they've done an awful lot of standardization on what the infrastructure looks like. So there is an opportunity here. It’s a threat and an opportunity as well.

Gardner: I've been speaking with a few users, and there are a couple of rationales from them on what accelerates DevOps adoption. One of them is security and compliance, because the operations people can get more information back to the developers. Developers can insist that security gets baked in early and often.

The other one is user experience. The operations side of the house now has the data from the user, especially when we talk about mobile apps and smaller customer-facing applications and web apps. All that data is now being gathered. What happens with the application can be given back to development very quickly. So there is a feedback loop that's compressed.

What do you think needs to happen in order for the incentives to quicken the adoption of DevOps from the perspective of security, user experience, and feedback loops of shared data?

Ongoing challenge

Governor: That’s such a good question. It’s going to remain an ongoing challenge. The simple fact is that, as I said about the retail and the mobile app, different parts of the business have different goals. Finance doesn't have the same goals as sales, and sales does not have the same goals as marketing in fact.

Within IT, there are different groups that have had very different key performance indicators (KPIs), and that’s part of the discussion. You're absolutely right to bring that up: understanding what metrics we should be using, and what the new metrics for success are. Is it the number of new products, or the changes to our application code that we can roll out?

We're all incredibly impressed by Etsy and Netflix because they can make all of these changes per day to their production environment. Not everybody wants to do that, but I think it comes down to what those KPIs are.

It might be, as Stephen mentioned, that if previously we were waiting six months to get access to a server and storage, and we get that down to a minute or so, it’s pretty obvious that that’s a substantive step forward.

You're absolutely right to say that it is about the data. When we began on this transition around agile and so on, there was a notion that those guys don’t care about data, they don’t care about compliance. The opposite is true, and there has been a real focus on data to enable the developer to do better work.

In terms of this shift that we're seeing, there's an interesting model that, funnily enough, HPE has begun talking about, which is "shifting left." What they mean by that is taking that testing earlier into the process.

We had been in an environment where a developer would hand off to someone else, who would hand off to someone else, at every step of the way. The notion that testing happens early and happens often is super important in this regard.
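As a small, hypothetical illustration of what shifting testing left looks like in practice, the function and its tests live side by side and run on every commit, rather than after a hand-off to a separate test team:

```python
# Hypothetical example of testing shifted left: the tests sit next to the
# code and run in CI on every commit, instead of after a later hand-off.

def apply_discount(price, discount_rate):
    """Return the price after a fractional discount, rounded to cents."""
    if not 0 <= discount_rate < 1:
        raise ValueError("discount_rate must be in [0, 1)")
    return round(price * (1 - discount_rate), 2)

def test_apply_discount():
    assert apply_discount(100.0, 0.2) == 80.0

def test_apply_discount_rejects_bad_rate():
    try:
        apply_discount(100.0, 1.5)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")

if __name__ == "__main__":
    test_apply_discount()
    test_apply_discount_rejects_bad_rate()
    print("all tests passed")
```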

Gardner: Continuous delivery and service virtualization are really taking off now. I just want to give Stephen an opportunity to address this alignment of interests, security, user experience, and shared data, and thoughts about how organizations should encourage adoption using these aligned interests.

User experience

O’Grady: I can’t speak to the security angle as much. In other words, there are aspects to that, particularly when we think about configuration management and the things that you can do via automation and so on.

The big one for me is user experience, and that to me is where a lot of the DevOps movement has come from. What we've found out is that if you want to deliver an ideal experience via an application to 100 people or 1,000 people, that’s not terribly difficult, and what you are using infrastructure-wise to address that is also not a sort of huge challenge.

On the flip side, when you start talking about millions, tens of millions, or potentially hundreds of millions of users, you have a completely different set of challenges. What we've seen is that the infrastructure necessary to deliver quality experiences at that scale -- whether you're Netflix, Facebook, or Google, or even just a large bank -- is a brand-new challenge.

Then, when you get into delivering a quality experience not just through a browser but through a mobile application, that encourages -- and, in fact, necessitates -- a series of architectural changes: scale-out and all these other wonderful things that we talk about.

Then, if we're dealing with tens of thousands or hundreds of thousands of machines, instead of a handful of very, very large ones, we need a different approach, and that different approach in many respects is going to be DevOps. It’s going to be taking a very hands-on developer approach to traditional operational tasks.

Governor: But security is definitely an elephant stomping around the room. There's no question. The feedback loop around DevOps has not been as fixated on security as it might be.

Quite frankly, developers are about getting things done, and this is the constant challenge with ops, security, and so on. Look at Docker. Developers absolutely love it, but it didn’t start from a position of how do we make this the most secure model you could ever have for application deployment.

There are some weird people who started to use the word DevOps(Sec), but there are a lot of unicorns and rainbows and there is going to be a mess that needs clearing up. Security is something that we generally don’t do that well.

On the other hand, as I said, we're less concerned with stability, and on the security side it does seem like we're headed the same way. Look at privacy. We all gave up on that, right?

Gardner: I suppose. Let’s not give up on security though.

Governor: Well, those things go together.

Gardner: They do.

Need to step up

Governor: Certainly, the organizations that would claim to be really good at security are the ones that have been leaving all of their customers' details on a USB stick or on a laptop. The security industry has not done itself many favors. They need to step up as much as developers do.

Gardner: As we close out, maybe we can develop some suggestions for organizations that want to create a culture for DevOps or put in place the means for DevOps. Again, speaking with a number of users recently, automation and orchestration come to mind. Having those in place means being able to scale, to provide data back through monitoring, to take a big-data perspective across systems -- pan-IT data -- and to measure that user experience. Any other thoughts about what you as an organization should put in place that will foster a DevOps mentality?
Learn More About DevOps
Solutions That Unify Development and Operations
To Accelerate Business Innovation
Governor: There are a couple of things. One thing you didn’t mention is pager duty. It's a fact that somebody is going to get called out to fix the thing, and it’s about individuals taking responsibility. With that responsibility, give them a higher salary. That’s an interesting challenge for IT, because they're always told, here are a bunch of tools that enable the Type As to get stuff done.

As to your point about whether this is a cultural shift or a product shift, the functional areas you mentioned are absolutely right, but as to the culture, what’s important is to get out and start spending time reading the stuff that the web companies are doing and sharing.

If you look at Etsy or Netflix, they're not keeping this close to their chests. Netflix, in fact, has shared the tools it uses to improve stability, such as Chaos Monkey. So there's much more sharing out there, and the natural thing would be to go to developer events. They're the people building out this new culture. Embed yourself in this developer aesthetic, where GitHub talks about “optimizing for developer joy" and Etsy is about “Engineering Happiness.”
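For readers unfamiliar with the approach, here is a toy sketch of the chaos-testing idea behind tools like Chaos Monkey -- a generic illustration, not Netflix's actual tool or API -- in which an instance is terminated at random to verify the service still has enough healthy capacity:

```python
# Toy illustration of chaos testing: randomly terminate one instance from a
# pool and confirm the remaining capacity still meets demand. Names are
# hypothetical; this is not Netflix's Chaos Monkey implementation.

import random

class Instance:
    def __init__(self, name):
        self.name = name
        self.healthy = True

def chaos_round(instances, min_healthy):
    victim = random.choice([i for i in instances if i.healthy])
    victim.healthy = False
    print(f"terminated {victim.name}")
    healthy = sum(1 for i in instances if i.healthy)
    assert healthy >= min_healthy, "service would have degraded -- fix resilience first"
    return healthy

if __name__ == "__main__":
    pool = [Instance(f"web-{n}") for n in range(5)]
    remaining = chaos_round(pool, min_healthy=3)
    print(f"{remaining} healthy instances remain; traffic should be unaffected")
```

The design point is cultural as much as technical: deliberately injecting failure in production forces the team that owns the service to build, and continually prove, its resilience.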

Gardner: Stephen, what should be in place in organizations to foster better DevOps adoption?

O’Grady: It’s an interesting question. The thing that comes to mind for me is a great story from Adrian Cockcroft, who used to be with Netflix. We've talked about him a couple of times. He's now with Battery Ventures, and he gives a very interesting talk, where he goes out and speaks with senior technology executives from all of these Fortune 500 companies.

One of the things he gets asked over and over and over is, "Where do you find engineers like the ones that work at Netflix? Where do we find these people who can do this sort of miraculous DevOps work?" And his response is, "We hired them from you."

The singular lesson that I would tell all these organizations is that somewhere in your organization, there probably are people who know how this stuff works and want to get it done. A lot of times, it’s basically just empowering them, getting out of the way, and letting the stuff happen, as opposed to trying to put the brakes on all the time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in: