Tuesday, February 16, 2016

SAP Ariba's chief strategy officer on the digitization of business and future of technology

The next BriefingsDirect technology innovation thought leadership discussion focuses on advancements in business applications and the modern infrastructure that supports them, and what that combination portends for the future.

As we enter 2016, the power of business networks is combining with advanced platforms and mobile synergy to change the very nature of business and commerce. We’ll now explore the innovations that companies can expect -- and how that creates new abilities and instant insights -- and how companies can, in turn, create new business value and better ways to reach and serve their customers.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about the future of technology and business networks, we’re joined by Chris Haydon, Chief Strategy Officer at SAP Ariba. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Now that we have cloud, big data, and mobile architectures aligned, how does that support where we can go with new business applications and processes -- to get entirely new levels of productivity?

Haydon: It's an exciting new age. The new platforms, as you say, and the applications coming together are a kind of creative destruction. We can start all over again. The value chain is changing because of digitization, and technology needs to respond.

So what you hear are buzzwords like adaptivity, configurability, or whatever, but these are table stakes now for business applications and business networks. This digitization of value chains forces us to think about how we bring multiple constituents within the organization along, in terms of adoption, and then couple that with the agility they need to deal with this constant and increasing rate of change.

Gardner: People are talking more about “digital business.” It means looking not just at new technologies, but at how you do business: taking advantage of the ability to have insight into your business and sharing that insight across ecosystems with partners. Where do you see the real advantage in action now in a business-to-business (B2B) environment?

Outcome-based conversations

Haydon: We hear about the technology and it’s important, but what we really hear about is the outcomes. We have very outcome-based conversations with customers. So how does the platform with the business network give you these differential outcomes?

What's pretty evident is that you have to be closer to your end user. And it's also about adoption of the cloud paradigm. You're only as good as your last transaction, your last logon, your last order, or your last report -- whatever business process you're running.

It's this merger of adoption and outcome, and how you string these two things together to be able to deliver the customer benefit.

From a technology perspective, it's no longer acceptable just to think about the four walls or your firewall; it's really about that extended value chain. So this is where we're seeing this merger of this business network concept, this commerce network concept in the context of these business applications.

This is really starting to emerge in B2B, and it has grown out of the business-to-consumer (B2C) world. With the Facebooks, the LinkedIns, and the Ubers, you now see leading-practice companies needing to embrace these larger value chains or commerce chains to get those outcomes and also to help drive differential adoption.

Gardner: For organizations that are really attracted to this and recognize that they have to compete with upstarts, if they get this right, it could be very disruptive.

When we think about having all of your data accessible, and processes being automated, at some point you're able to gather more data, analysis, and process refinement that you can then reapply to your business -- creating, perhaps, algorithms and abilities to add intelligence in ways that you could never do manually.

How do we get companies to understand that feedback loop and get it instituted more rigorously into their organization?

Haydon: One thing we see is that, with the technology we have today, we can hide that complexity from users and embed it in the way end users need to work. Let's take an SAP Ariba example. If you're going to create a new sourcing event, do you really want to have to dig up the business you do with your current suppliers? Absolutely you need that context, but wouldn't it be great if that were all handled for you, with the extra information presented right in front of you?

On top of that, wouldn't it also be great to know about three new suppliers in this category and geography that you haven't thought about before -- and to have them automatically invited, with no extra friction to your process? You get more supplier diversity, and you let suppliers become involved earlier in the process.

We're redistributing this value chain in terms of linking the insight and the community to the point of where work is being done -- and that’s part of that transformation that we're seeing, and that’s how we see it in the Ariba context. But we’re also seeing that in the larger business-network and business application context across SAP.

Knowing your needs

Gardner: So to borrow the B2C example, if Uber is our poster child example, instead of my standing outside of a hotel and having visibility of all of the cars that are driving around that neighborhood, I could be a business and get visibility into all of the suppliers that are available to me. And those suppliers will know what my needs are before they even get to the curb, so to speak.

What's the next step? When we gain that visibility, when we have supply chain and buyer-seller synergy, what comes next? Is there some way to extend that value to the end user?

Haydon: The next step is network resource planning. That means awareness not only of your supply base, but of what the other stakeholders in the process mean, and what that could deliver for the end user. It's not just about the supplier; it's also about the logistics provider. It's about how you might handle working capital and finance.

What if you could dynamically match or even have a conversation about differential service levels from a buyer or supplier? I'm okay to take it tomorrow if I can get it at 8 a.m., but it's $2 cheaper, or I am happy to take it today because of some other dependencies.

This is a type of dynamic “what if” we can now support, because we have the technology platform capability -- real-time, in-memory analytics -- in the context of the business process. This is the next-generation capability that we'll be able to get to. Because the network is external to the application, and because together we can understand the players in the network in the context of the business process, that's where the real next evolution is going to come from.

Gardner: It sounds as if we're really starting to remove the margin of error from business. We're starting to remove excess capacity, become fit for purpose through that dynamic applicability of insight and analysis. How much can we squeeze out? Do we have a sense of how substantial these business network capabilities will become? What sort of payoff are we anticipating when we can remove that margin of error, with tighter integration across ecosystems? What’s the gold piece that we are going after here?

Haydon: Well, it’s a big, big, big number. Even if we go back a couple of years -- and some good work has been done just on the inefficiencies of paper -- the first order of magnitude is simply moving something from a paper format and dematerializing it into an electronic format. When a study was done on that four or five years ago, the estimate was conservatively somewhere between $600 billion and $1 trillion just in the Global 2000 corporations.

There is an order of magnitude more opportunity globally from compression of cycle times, in the broader sense, and from responsiveness and adaptability throughout the world.

At SAP Ariba, we just passed a great threshold in 2015: we ran more than $1 trillion in commerce across our business network. Just start doing a little math on what a one or two percent improvement on that can be -- from better or more flexible working capital management, from pure, straight productivity, and from competition leveling the playing field for the smallest micro-supplier through the largest international supplier. There are stupendous gains on both sides of the balance sheet.

Adoption patterns

Gardner: When it comes to adoption patterns, some organizations have been conservative and held back, perhaps resisting the cloud. What sorts of organizations are you seeing make a bold move, not just piecemeal, and how can they get a lot done in a relatively short amount of time?

Haydon: Even in industries that are traditionally conservative, they really do need to change their value chains, because that's what their customers are demanding. Take financial services, where historically you would think of the old “big iron” approach. Those types of companies are embracing the cloud just to be more adaptive, faster, and more end-user-friendly, and the total-cost-of-ownership case for the cloud is really there.

But we're a long way away from on-premises applications being dead. What the cloud gives enterprises is that they can go largely to the cloud -- and we see companies doing that -- but the legacy-hybrid, on-premises model is really important. That's what's great about the cloud model: you can consume as you go. It doesn't all have to be one big bang.

For pragmatic CEOs, CFOs, or CIOs, that hybrid blend is a legitimate strategy, where they can have the best of both worlds. With that said, the inexorable pull of the cloud is there, but it can be a little more on their own terms, based on what makes sense for their businesses.

Gardner: We have been at the 70,000- to 80,000-foot level in this discussion. Let's bring it down a bit lower. Help our readers understand SAP Ariba as an entity now. What does it consist of, in terms of the software-as-a-service (SaaS) offerings that have been acquired and built, and how does that fit into a hybrid portfolio?

Haydon: Number one, we fundamentally believe at Ariba that we have to deliver differential outcomes to our customers -- that linking cloud applications with the business-network construct will give you better outcomes for the things we spoke about earlier in the conversation: visibility, supply-chain adaptability, compliance. Building on networks of networks to deliver different outcomes -- linking to payment networks, as we have done with AribaPay and Discover, or linking to content networks, as we have done with eBay -- and bringing them into the context of the business process can only really be enabled through networks and applications together.

From an Ariba perspective, we like to think of it in three pillars. First, our cloud applications: leading-practice, broadly adopted source-to-pay cloud applications in a fully integrated suite.

From a cloud perspective as well, you can have the Lego-block approach, where we can take any one of our modules, from spend visibility all the way through the invoicing, and start your journey there, if that's your line-of-business requirement, or take the full suite approach.

Intrinsic to that offering, of course, is our business network. Why I bring that up is that our business network and our cloud applications are agnostic. We don't actually care, from a cloud perspective, which back-end system of record you wish to use.

Of course, we love and we believe that the best out there is S/4HANA from SAP, but there is also a larger market, whether it's the mid-market or whether there are other customers who are on other journeys on the enterprise resource planning (ERP) for legacy reasons. We can connect our cloud applications and our network to any one of those.

Three levels

So, there are three levels: the network, our end-to-end cloud applications, and, last but not least -- and really relevant to the technology journey -- a rock-solid platform. That platform runs our cloud apps and our network in conjunction with SAP, with the security, the privacy, and the availability -- all of the really important things that enterprise customers need.

Also, you have to have the security to run these business processes, because they're entrusting those to us, and that's really what cloud means. It means entrusting your business processes to us to get a differential outcome for your business.

Gardner: As organizations try to become more of a digital business, they'll be looking to bring these benefits to their ERP-level business applications, their supply chain, and procurement. But increasingly, they're also looking to better manage their human resources, recognizing that that's a more dynamic marketplace than ever.

Haydon: Yes.

Gardner: So let's talk about how the business network effect and some of these synergistic benefits come into play on the human resources side of running a digital business.

Haydon: That's also one of the great parts of the SAP portfolio. I like to think about it two ways: there's internal human capital management, and there's external human capital management. Leading companies today want agility in how many full-time employees they hire and in how they manage contingent or temporary labor.

From an SAP perspective, what's great is that we have the leading cloud Human Resource Management and Talent Management solutions with SuccessFactors, and we also have the market-leading contingent labor management solution with Fieldglass.

Together with Ariba, you're able, first, to have one unified view into your workforce, internal and external, and also, if you like, to orchestrate the procurement process -- sourcing, requisitioning, ordering, and payment -- throughout.

From a company perspective, when you think about your spend profile, 30 percent to 70 percent of spend is services, as we move to a service-based economy. And in conjunction with SAP Ariba and SAP Fieldglass, we have this broad, deep, end-to-end process in a single context -- and, by the way, nicely integrated with the ERP system to again give those best outcomes.

Gardner: When people think about public clouds that are available to them for business, they often couple that with platform-as-a-service (PaaS), and one of the things that other clouds are very competitive about is portraying themselves as having a very good developer environment. But increasingly, development means mobile apps.

Haydon: Yes.

Mobile development

Gardner: What can we gain from your cloud vision as being hybrid, while also taking advantage of mobile development?

Haydon: From a platform perspective, you need to be “API first,” because if you're able to expose important aspects of a business process through an API layer, you give your customers the extensibility and optionality to do more things.

Let’s talk about concrete examples. An end-to-end process could be as simple as taking an invoice from any third-party provider. Right now, Ariba has an open invoice format. If a customer chooses to scan and digitize invoices themselves, we can take that feed straight in.

If you want to talk about a mobile API, it could be as simple as exposing a workflow. There's often a corporate mandate to have a workflow. If you travel, there's a workflow for your expenses, a workflow for your leave requests, and a workflow for your purchase orders. If the end user has five systems and would rather come to one, you can have that API level there.

There is this whole balance of how you “moleculize” your offerings to give customers the level of configuration they need for their individual business requirements, while still getting the leverage of not having to rebuild it all themselves.
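To make that “one inbox across five systems” idea concrete, here is a minimal sketch in Python. The endpoints, payload fields, and auth scheme are hypothetical, invented for illustration -- not SAP Ariba's actual API surface:

```python
import requests

# Hypothetical workflow endpoints -- illustrative only, not actual Ariba APIs.
SYSTEMS = {
    "expenses":   "https://api.example.com/expenses/v1/approvals",
    "leave":      "https://api.example.com/hr/v1/leave-approvals",
    "purchasing": "https://api.example.com/procurement/v1/po-approvals",
}

def pending_approvals(user_token):
    """Aggregate pending workflow items from several systems into one inbox."""
    inbox = []
    for system, url in SYSTEMS.items():
        resp = requests.get(
            url,
            headers={"Authorization": "Bearer " + user_token},
            params={"status": "pending"},
        )
        resp.raise_for_status()
        for item in resp.json():  # assume each API returns a JSON list
            inbox.append({"system": system,
                          "id": item["id"],
                          "summary": item.get("summary", "")})
    return inbox
```

The point of the pattern is that each underlying system only has to expose a small, stable API; the aggregation and user experience can then be assembled on top without rebuilding any of the systems themselves.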

That's certainly a fundamental part of our strategy. You'll see that SAP is leading there itself with the HANA Cloud Platform, and SAP Ariba is building on that. I don’t want to flag too much, but you’ll see some interesting developments along the way as we open up our platform, from both an end-to-end perspective and an individual mobile perspective, throughout the course of this year.

Gardner: Now this concept of API first is very interesting, because it doesn't really matter which cloud a service comes from along a hybrid spectrum. It also allows you to look at business services, pull them down as needed, and construct processes, rather than monolithic, complex, hairball applications.

Do you have any examples of organizations that have taken advantage of this API First approach? And how have they been able to customize their business processes using this hybrid cloud and visibility, reducing the complexity?

Haydon: I can certainly give you some examples. These start from simple processes, but they can add a lot of value. Take a straightforward advanced-shipping process. We know of an example where a customer took 90 percent of the time out of receiving and improved the matching in their receiving and receipting process by almost 95 percent, because they could leverage an API to support their custom bar-coding standard.

They leveraged the standard business-network bus, because the type of barcode they need in their warehouse, in their part of the world, was there. Wind the clock back three or four years: if they had asked for that specific feature, to be very candid, we wouldn't have built it. But once you start opening up the platform at that micro level, you let customers get the job done.
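As a purely hypothetical sketch of that kind of micro-level extension, the snippet below parses an invented custom barcode layout and posts a goods receipt through a generic network API; the endpoint and payload are assumptions, not the actual Ariba network interface:

```python
import requests

def parse_custom_barcode(code):
    """Hypothetical layout: PART-LOT-QTY, e.g. 'X1042-B7-24'."""
    part, lot, qty = code.split("-")
    return {"part": part, "lot": lot, "quantity": int(qty)}

def post_goods_receipt(order_id, barcode, token):
    """Post a receipt line to an invented network endpoint for PO matching."""
    receipt = parse_custom_barcode(barcode)
    resp = requests.post(
        "https://network.example.com/v1/orders/%s/receipts" % order_id,
        json=receipt,
        headers={"Authorization": "Bearer " + token},
    )
    resp.raise_for_status()
    return resp.json()  # server matches the receipt against PO line items
```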

But they can still leverage that larger framework, that platform, that business process, that cloud we give them. And when you extend that out to what it could mean -- for payment, for risk, or for any of the other dimensions that are typically organizational processes today, whether it's procurement or HR recruiting -- it gets pretty exciting.

Big data

Gardner: One of the other hallmarks of a digital business is having parts of a business work together in new ways, in a closeness they may not have had in the past. One thing that's been instrumental to business applications over the past decades is the notion of systems of record, and we also now have this burgeoning business intelligence (BI) capability, loosely called big data.

And they haven't always been that close. But it seems to me that with a platform like SAP HANA, combined with business networks, bringing together systems of record and the data in them, big-data capabilities, and access to other data sets outside the organization makes a tremendous amount of sense. How do you see the notion of more data, regardless of its origin, becoming a common value stream for an organization?

Haydon: This becomes the fundamental competency that an organization needs to harness. This notion of the data, then the data in the context of the business process, and then, to your point, how that's augmented in the right way -- that is really the true differentiation going forward.

Historically, we laid down business processes like old railway tracks, but there is no such thing as railway tracks anymore; you rebuild them every single day. Inside that data, with its timeliness, is the insight -- sentiment analysis, for example -- that, in a business-network context, enables you to make different and dynamic decisions.

Within SAP Ariba, we're fundamentally rethinking how we take the data that's in our environment and get it out -- not just to our account managers, not just to our product managers, but, more importantly, out to our end users. They can then start to see patterns, play with it, and create some interesting innovations. We're working with our customers to identify and remove forced labor in the supply chain, to advance global risk management, and even to expedite delivery and logistics.

Gardner: Okay. We've talked about business networks in the context of applications working together for efficiency, about the role of hybrid cloud models in accelerating that, and about the data issues and some of the development and customization on the mobile side. What have we missed? What is the greater-than-the-sum-of-the-parts component that we should be talking about?

Haydon: There are probably two or three. There's certainly the notion of the user experience; that's a function of mobile, but not mobile only. The old traditional flows and thinking that were prevalent even five years ago about what constituted one type of work channel no longer exist.

There's a new discipline around what a user experience is, and that's not just the user interface; it's also things like the tone or the content that's presented to you, and what it means on different devices and in the different ways you work. That's an evolving piece, but it cannot be left behind.

That's where the B2C world is blazing the trail, and that's now the expectation all of us bring when we go to work and put our corporate hats on -- as simple as that. The second area is security and privacy. That's top of mind for a number of reasons, and it's fair to say it's in a massive state of flux and change, here in the United States and certainly in Europe. It doesn't matter which region you're in -- APJ or Latin America as well.

Competitive advantage

That's another competitive advantage that enterprises and providers in this space, like SAP and SAP Ariba, can, will, and should lead on. The last point, maybe a trend, is that you're very quickly seeing the transition between the traditional service and material flows and the financial flows.

We're seeing the digitalization of payments just exploding, and banks and financial institutions are having to rethink what they're doing. With the technology and platforms we have, we can link the physical flows, whether for services or materials, to the payment flows and the working capital around them -- because, at the end of it all, commerce follows money.

It’s all about the commerce. So user experience, security and privacy, and the digitalization of payments and working-capital management are the three large trends.

Gardner: And these are areas where scale really helps -- global scale like SAP's. When issues of data sovereignty come up, you need to think about hybrid cloud not just in terms of performance and technical capabilities, but in terms of the actual location of data -- certain data, for certain periods of time, in certain markets -- which is very difficult if you're not in those markets and don't understand them. It's the same on the financial side. Because banking and finance are dynamic, having that breadth and scope is a key component of making this possible as well.

There's one last area we can close out on, looking a bit to the future. Some competitors of yours are out there talking about artificial intelligence (AI) more than others. And when you have network effects, as we've described -- a big-data mesh across the organization, thinking of data as a life cycle for a digital business, not just data in different parts of your organization --

we can think about expertise in vertical industries being brought to bear on insights into markets and ecosystems. When and how might we expect some sort of AI value -- an API, or a set of APIs -- to come forward and start thinking things through in ways people haven't been able to do up until now?

Haydon: The full notion of something like [2001: A Space Odyssey's] HAL 9000 is probably a little way away. But what you will see within the next 12 to 18 months is specific -- maybe call them smart apps rather than intelligent or smart agents.

They already exist today in some areas. You will see them augmented by feedback from systems that are not your own -- say, the moving average price of an inventory item. Something will bring in the context of an updated price or updated inventory, that will trigger an action, and the smart agent will go do all that work for you, ready for you to make the release.

There's still a notion of competency within the organization as well -- not so much a technology thing, but a competency around what master data governance means, what the quality of that data means, and having a methodology to manage it so you can let these systems do the work.

You will probably see it first in lower-risk spend categories -- indirect, from a procurement perspective, maybe some travel, maybe a little non-inventory maintenance, repair, and operating supplies. We're probably a fair way away from fully releasing the direct-material supply chain in some of the very important value chains we manage.

Self-driving business process

Gardner: So maybe we should expect to see self-driving business processes before we see self-driving cars?

Haydon: I don't know, I'm lucky enough to live in Palo Alto, and I see a self-driving car three days a week. So we'll back out of that one.

But there is a really important piece, at least from Ariba perspective and an SAP perspective. We fundamentally believe that these business-networks are the rivers of data.

It's not just what's inside your firewall. You will truly get the insight from the largest scale of these rivers of data from these business-networks; whether it be Ariba or our financial partners, or whether it be others. There will be networks of networks.

This notion of having a kind of bookend to the process -- a registry to make sense of the actors in these business networks and the context of the business process -- and then linking that to the financial and payment chain, that's where the real intelligence and some real money could be released, and that's some of the thinking we have out there.

Gardner: So, a very bright and interesting future. But in order to get to that next level of value, you need to start doing the blocking-and-tackling work around those rivers of information, as you say, and the network effects -- putting yourself in a position to really exploit these new capabilities when they come out.

Haydon: It's scale and adoption. From the scale and from the adoption will come that true benefit from the networks, the business process, and the connectivity therein.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.


Thursday, February 11, 2016

How New York Genome Center manages the massive data generated from DNA sequencing

The next BriefingsDirect big-data use case discussion examines how the non-profit New York Genome Center manages and analyzes up to 12 terabytes of data generated each day from its genome sequence appliances. We’ll learn how a swift, cost-efficient, and accessible big-data analytics platform supports the drive to better diagnose disease and develop more effective medical treatments.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To hear how genome analysis pioneers exploit vast data outputs to speedily correlate for time-sensitive research, please join me in welcoming our guest, Toby Bloom, Deputy Scientific Director for Informatics at the New York Genome Center in New York City. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: First, tell us a little bit about your organization. It seems like it’s a unique institute, with a large variety of backers, consortium members. Tell us about it.

Bloom: New York Genome Center is about two-and-a-half years old. It was formed initially as a collaboration among 12 of the large medical institutions in New York: Cornell, Columbia, NewYork-Presbyterian Hospital, Mount Sinai, NYU, Einstein Montefiore, and Stony Brook University. All of the big hospitals in New York decided that it would be better to have one genome center than have to build 12 of them. So we were formed initially to be the center of genomics in New York.

Gardner: And what does one do at a center of genomics?

Bloom: We're a biomedical research facility that has a large capacity to sequence genomes and use the resulting data output to analyze the genomes, find the causes of disease, and hopefully treatments of disease, and have a big impact on healthcare and on how medicine works now.

Gardner: When it comes to doing this well, it sounds like you are generating an awesome amount of data. What sort of data is that and where does it come from?
Bloom: Right now, we have a number of genome sequencing instruments that produce about 12 terabytes of raw data per day. That raw data is basically lots of strings of As, Cs, Ts and Gs -- the DNA data from genomes from patients who we're sequencing. Those can be patients who are sick and we are looking for specific treatment. They can be patients in large research studies, where we're trying to use and correlate a large number of genomes to find the similarities that show us the cause of the disease.

Gardner: When we look at a typical big data environment such as in a corporation, it’s often transactional information. It might also be outputs from sensors or machines. How is this a different data problem when you are dealing with DNA sequences?

Lots of data

Bloom: Some of it’s the same problem, and some of it’s different. We're bringing in lots of data. The raw data, as I said, is probably about 12 terabytes a day right now. That could easily double in the next year. But then we analyze the data, and I probably store three to four times that much data in a day.

In a lot of environments, you start with the raw data, you analyze it, and you cook it down to your answers. In our environment, it just gets bigger and bigger for a long time, before we get the answers and can make it smaller. So we're dealing with very large amounts of data.

We do have one research project now that is taking in streaming data from devices, and we think over time we'll likely be taking in data from things like cardiac monitors, glucose monitors, and other kinds of wearable medical devices. Right now, we are taking in data off apps on smartphones that are tracking movement for some patients in a rheumatoid arthritis study we're doing.

We have to analyze a bunch of different kinds of data together. We’d like to bring in full medical records for those patients and integrate it with the genomic data. So we do have a wide variety of data that we have to integrate, and a lot of it is quite large.

Gardner: When you were looking for the technological platforms and solutions to accommodate your specific needs, how did that pan out? What works? What doesn’t work? And where are you in terms of putting in place the needed infrastructure?

Bloom: The data that comes off the machines is in large files, and a lot of the complex analysis we do, we do initially on those large files. I am talking about files that are from 150 to 500 gigabytes or maybe a terabyte each, and we do a lot of machine-learning analysis on those. We do a bunch of Bayesian statistical analyses. There are a large number of methods we use to try to extract the information from that raw data.
When we've figured out the variants and mutations in the DNA that we think are correlated with the disease we're interested in looking at, we then want to load all of that into a database, with all of the other data we have, to make it easy for researchers to use in a number of different ways. We want to let them find more data like the data they have, so that they can get statistical validation of their hypotheses.

We want them to be able to find more patients for cohorts, so they can sequence more and get enough data. We need to be able to ask questions about how likely it is, if you have a given genomic variant, you get a given disease. Or, if you have the disease, how likely it is that you have this variant. You can only do that if it’s easy to find all of that data together in one place in an organized way.

So we really need to load that data into a database and connect it to the medical records or the symptoms and disease information we have about the patients and connect DNA data with RNA data with epigenetic data with microbiome data. We needed a database to do that.

We looked at a number of different databases, but we had some very hard requirements to meet. We were looking for one that could handle tens of trillions of rows in a table without falling over, and that could answer queries fast across multiple tables with tens of trillions of rows. We also needed to be able to easily change and add new kinds of data, because we're always finding new kinds of data we want to correlate. So there are requirements like that.

Simple answer

We need to be able to load terabytes of data a day. But more than anything, I had a lot of conversations with statisticians about why they don’t like databases, about why they keep asking me for all of the data in comma-delimited files instead of databases. And the answer, when you boiled it down, was pretty simple.

When you have statisticians looking at data with huge numbers of attributes and huge numbers of patients, the kinds of statistical analysis they're doing mean they want to look at much smaller combinations of the attributes for all of the patients, see if they can find correlations, and then change that and look at different subsets. That absolutely requires a column-oriented database. A row-oriented relational database has to read in entire rows to deliver those few columns. It takes forever, and it’s too slow for them.
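As an illustration of that access pattern, here is a minimal sketch assuming a Python DB-API connection to a column store such as Vertica; the table and column names are hypothetical, and placeholder syntax varies by driver:

```python
def correlate_attributes(conn, variant_id):
    """Pull a small set of columns for all patients from a column store.

    `conn` is assumed to be a DB-API connection (e.g., via vertica-python
    or an ODBC driver). A columnar engine reads only the three columns
    named below, instead of every attribute of every patient row.
    """
    cur = conn.cursor()
    cur.execute(
        """
        SELECT p.age, p.smoker, g.genotype
        FROM patients p
        JOIN genotypes g ON g.patient_id = p.patient_id
        WHERE g.variant_id = %s
        """,
        (variant_id,),
    )
    return cur.fetchall()  # a small column subset, across all patients
```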

So, we started from that. We must have looked at four or five different databases. Hewlett Packard Enterprise (HPE) Vertica was the one that could handle the scale and the speed and was robust and reliable enough, and is our platform now. We're still loading in the first round of our data. We're still in the tens of billions of rows, as opposed to trillions of rows, but we'll get there.

Gardner: You’re also in the healthcare field, so there are considerations around privacy, governance, and auditing, and, of course, price sensitivity, because you're a non-profit. How did that factor into your decision? Is the use of off-the-shelf hardware or off-the-shelf storage a consideration? Are you looking at converged infrastructure? How did you manage some of those cost and regulatory issues?

Bloom: Regulatory issues are enormous. There are regulations on clinical data that we have to deal with. There are regulations on research data that overlap and are not fully consistent with the regulations on clinical data. We do have to be very careful about who has access to which sets of data, and we have all of this data in one database, but that doesn’t mean any one person can actually have access to all of that data.

We want it in one place, because over time, scientists integrate more and more data and get permission to integrate larger and larger datasets, and we need that. There are studies we're doing that are going to need over 100,000 patients in them to get statistical validity on the hypotheses. So we want it all in one place.

What we're doing right now is keeping all of the access-control information about who can access which datasets as data in the database, and we basically append clauses to every query to filter the results down to the data any particular user can use. Then we tell them the answers for the datasets they have, how much data is there that they couldn't look at, and, if they need that information, how to go try to get access to it.
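A simplified sketch of that append-a-clause approach, with hypothetical table names; a production system would also validate identifiers rather than trust inputs:

```python
def run_restricted(conn, user_id, base_query):
    """Append an access-control clause so a user sees only permitted datasets.

    `base_query` is assumed to have no WHERE clause of its own; the
    dataset_access table maps users to the datasets they may read.
    """
    access_filter = """
        WHERE dataset_id IN (
            SELECT dataset_id FROM dataset_access WHERE user_id = %s
        )
    """
    cur = conn.cursor()
    cur.execute(base_query + access_filter, (user_id,))
    visible = cur.fetchall()

    # Also report how much data exists that this user cannot see,
    # so they know there is more to request access to.
    cur.execute("SELECT COUNT(*) FROM variants")
    total = cur.fetchone()[0]
    return visible, total - len(visible)
```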

Gardner: So you're able to manage some of those very stringent requirements around access control. How about that infrastructure cost equation?

Bloom: Infrastructure cost is a real issue, but essentially, what we're dealing with is, if we're going to do the work we need to do and deal with the data we have to deal with, there are two options. We spend it on capital equipment or we spend it on operating costs to build it ourselves.

In this case -- not all cases -- it seemed to make much more sense to take advantage of existing equipment and software, rather than trying to reproduce it, and to spend our time and our personnel's time on the things we couldn't as easily get elsewhere.

A lot of work went into HPE Vertica. We're not going to reproduce it very easily. The open-source tools that are out there don’t match it yet. They may eventually, but they don’t now.

Getting it right

Gardner: When we think about the paybacks or determining return on investment (ROI) in a business setting, there’s a fairly simple, straightforward formula. For you, how do you know you’ve got this right? What do you look for -- what we might refer to in the business world as service-level agreements (SLAs) or key performance indicators (KPIs)? How do you know you're getting the job done, based on all of the requirements and all of these different constituencies?

Bloom: There’s a set of different things. The thing I am looking for first is whether the scientists who we work with most closely, who will use this first, will be able to frame the questions they want to ask in terms of the interface and infrastructure we’ve provided.

I want to know that we can answer the scientific questions that people have with the data we have and that we’ve made it accessible in the right way. That we’ve integrated, connected and aggregated the data in the right ways, so they can find what they are looking for. There's no easy metric for that. There’s going to be a lot of beta testing.

The second thing is: are we hitting the performance standards we want? How much data can I load, how fast? How much data can I retrieve from a query? Those statisticians who don’t want to use relational databases still want to pull out all those columns, and they want to do their sophisticated analysis outside the database.

Eventually, I may convince them that they can leave the data in the database and run their R scripts there, but right now they want to pull it out. I need to know that I can pull it out fast for them, and that they won't object to how it's organized, because they can get their data out.

Gardner: Let's step back to the big picture of what we can accomplish in terms of health paybacks. When you’ve got the data managed, the input and output at an acceptable speed, and the ability to manage all these different studies, what sort of paybacks do we get in terms of people’s health? How do we know we're succeeding when it comes to disease, treatment, and understanding more about people and their health?

Bloom: The place where this database is going to be the most useful, not by any means the only way it will be used, is in our investigations of common and complex diseases, and how we find the causes of them and how we can get from causes to treatments.

I'm talking about looking at diseases like Alzheimer’s, asthma, diabetes, Parkinson’s, and ALS, which is not so common, but certainly falls into the complex-disease category. These are diseases caused by some combination of genomic variants, not by a single gene gone wrong. There are a lot of complex questions we need to ask to find those. It takes a lot of patience, and a lot of genomes, to answer those questions.
The payoff comes if we can use this data to collect enough information about enough diseases to ask the questions that say: it looks like this genomic variant is correlated with this disease -- how many people in your database have this variant, and of those, how many actually have the disease? And of the ones who have the disease, how many have this variant? I need to ask both questions, because a lot of these variants confer risk, but they don’t absolutely give you the disease.
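Expressed against a hypothetical schema -- one genotypes row per patient and variant, one diagnoses row per patient and disease -- those two directions of the question might look like this:

```python
# Of the people with the variant, how many also have the disease?
HAVE_VARIANT = """
SELECT COUNT(*) AS with_variant,
       SUM(CASE WHEN EXISTS (SELECT 1 FROM diagnoses d
                             WHERE d.patient_id = g.patient_id
                               AND d.disease = %s)
                THEN 1 ELSE 0 END) AS also_have_disease
FROM genotypes g
WHERE g.variant_id = %s
"""

# Of the people with the disease, how many carry the variant?
HAVE_DISEASE = """
SELECT COUNT(*) AS with_disease,
       SUM(CASE WHEN EXISTS (SELECT 1 FROM genotypes g
                             WHERE g.patient_id = d.patient_id
                               AND g.variant_id = %s)
                THEN 1 ELSE 0 END) AS also_have_variant
FROM diagnoses d
WHERE d.disease = %s
"""
```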

If I am going to find the answers, I need to be able to ask those questions, and those are the things that are really hard to do with the raw data in files. If I can do just that, think about the impact on all of us. If we can find the molecular causes of Alzheimer’s, that could lead to treatments or prevention -- and for all of those other diseases as well.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. Sponsor: Hewlett Packard Enterprise.


Monday, February 1, 2016

Microsoft sets stage for an automated hybrid cloud future with Azure Stack Technical Preview

Last week’s arrival of the Microsoft Azure Stack Technical Preview marks a turning point in the cloud-computing market and forms a leading indicator of how dramatically Microsoft has changed in the past two years.

The cloud turning point comes because the path to hybrid-cloud capabilities and benefits has a powerful new usher, one with the enterprise, developer, and service-provider presence, the R&D budget, and the competitive imperative to succeed in a market still underserved and nebulous.

Over the past five years, public cloud infrastructure-as-a-service (IaaS) value and utility have matured rapidly around three major players: Amazon Web Services, Google Cloud Platform, and Microsoft Azure. But hybrid-cloud infrastructure standards are still under-developed, with no dominant market driver.

OpenStack, Apache CloudStack, Cloud Foundry, Eucalyptus, vCloud Air — none is dominant, none forms an industry standard with critical mass at either the enterprise or service-provider levels. The best path to a hybrid cloud global standard remains a vision only, fragmented in practice, and lacking a viable commercial beachhead.

Right now, it’s hard for enterprise IT architects to place a major bet on their hybrid cloud strategy. Yet placing major bets on the best path to effective hybrid cloud capabilities is exactly what enterprise IT architects should be doing as soon as possible.

Instead of a clear private-to-public cloud synergy strategy, IT organizations are fretting over whether to take a cloud-first or mobile-first approach to their apps, data, and development. They want to simultaneously modernize legacy apps, rationalize their data, give their organizations DevOps efficiency, find comprehensive platform-as-a-service (PaaS) simplicity, and manage it all securely. They know that hybrid cloud is a big part of all of these, yet they have no clear direction.

API first

The right way to approach the problem, says my friend Chris Haydon, Chief Strategy Officer at SAP Ariba, is to resist cloud-first and mobile-first, and instead take the higher-abstraction API-first approach to as many aspects of IT as possible. He’s right, and SAP’s own success with cloud models -- particularly SaaS and big data as a service -- is a firm indicator. [Disclosure: SAP Ariba is a sponsor of my BriefingsDirect podcasts.]

With Microsoft Azure Stack (MAS), the clear direction for the future of cloud is an API-first and highly automated private-cloud platform that has full compatibility with a major public cloud, Microsoft Azure. Like public Azure, private Azure Stack supports workloads from many tools and platforms -- including Linux and Docker -- and, as a cloud should, fires up hypervisors to run any major virtual-machine-supported workload.

Sensing an integrated private cloud platform opportunity big enough to drive a truck through, Microsoft has developed MAS to be highly inclusive, with a unified application model around Azure Resource Manager. Using templates typically found on GitHub, MAS operators can rapidly and simply create powerful private cloud resources to support apps and data. Because it’s Azure-consistent, those workloads can also move easily to the public cloud. This is not a Windows Server or .NET abstraction; it’s a private-cloud abstraction, with an API-first approach to management and assembly of data centers on the fly.
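To make the API-first point concrete, here is a hedged sketch of deploying a deliberately tiny ARM template through the Azure Resource Manager REST API. The resource names, API versions, and token acquisition are assumptions for illustration, not a tested deployment:

```python
import requests

subscription   = "<subscription-id>"  # assumption: your subscription GUID
resource_group = "demo-rg"            # assumption: an existing resource group
token          = "<bearer-token>"     # assumption: obtained via Azure AD

# A minimal ARM template: one storage account, parameterized by name.
template = {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/"
               "deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {"storageName": {"type": "string"}},
    "resources": [{
        "type": "Microsoft.Storage/storageAccounts",
        "apiVersion": "2015-06-15",
        "name": "[parameters('storageName')]",
        "location": "eastus",
        "properties": {"accountType": "Standard_LRS"},
    }],
}

url = ("https://management.azure.com/subscriptions/%s/resourcegroups/%s/"
       "providers/Microsoft.Resources/deployments/demo-deploy"
       "?api-version=2015-11-01" % (subscription, resource_group))

body = {"properties": {"template": template,
                       "parameters": {"storageName": {"value": "demostore01"}},
                       "mode": "Incremental"}}

resp = requests.put(url, json=body,
                    headers={"Authorization": "Bearer " + token})
resp.raise_for_status()
print(resp.json()["properties"]["provisioningState"])
```

The same template, unchanged, can target public Azure or an Azure Stack endpoint, which is the crux of the consistency argument.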

Because MAS is built on software-defined data center (SDDC) principles and technologies, it requires modern and robust, albeit industry-standard, commodity hardware. Converged and hyper-converged infrastructure models then work very well to rapidly deploy MAS private clouds on-premises as appliances, racks, and blocks, and they bring the cost and capacity visibility and planning needed to align the hardware side with the API-first model on the software side. Indeed, the API-first and converged-infrastructure models are highly compatible, even synergistic.

Hewlett Packard Enterprise (HPE) clearly has this hardware opportunity for enterprise private cloud in mind, recognizes the vision for "composable infrastructure," and is already partnering with Microsoft at this level. [Disclosure: HPE is a sponsor of my BriefingsDirect podcasts.]

Incidentally, the MAS model isn’t just good for new and old server-based apps; it’s also a way to combine those with desktop virtualization and deliver the full user experience as a service to any desktop or mobile endpoint. And big-data analytics across all databases, app stores, and unstructured data sources can be integrated well into the server and desktop app clouds.

Dramatic change

And this is why MAS portends the dramatic change that Microsoft has undergone. Certainly MAS and the Azure hybrid cloud roadmap suit Microsoft’s installed base, and therefore its Windows legacy. There is a compelling path from Windows Server, Microsoft SaaS apps and Exchange, .NET, and Visual Studio to Azure and MAS. There is a way to rationalize all Microsoft and standard data across its entire lifecycle. But there is also a customer-focused requirements list that allows for any client endpoint support, and an open mobile-apps development path. There is a path for any enterprise app or database on a hypervisor to and from MAS and Azure. There are attractive markets for ISVs, service providers, and IT integrators and support providers. There is a highly desirable global hardware market around the new on-premises and cloud-provider hardware configurations to support SDDC modernization and MAS.

Clearly, Amazon Web Services and its stunning public-cloud success has clarified Microsoft's thinking around customer-centric market focus and more open IT systems design. But the on-premises data center, when made efficient via SDDC and new hardware, competes well on price against public cloud over time. And private cloud solves issues of data sovereignty, network latency, control, and security.

But to me, what makes Microsoft Azure Stack such a game-changer is the new path toward an automated hybrid-cloud future, one that virtually no other vendor or cloud provider is in a better position than Microsoft to execute. Google, Amazon, even IBM, are public-cloud biased. The legacy software IT vendors are on-premises biased. Pretty much only Microsoft is hybrid biased, and API-first thinking removes the need for enterprise IT to puzzle over the hybrid cloud boundary. It will become an automated boundary -- but only with a truly common hybrid cloud management capability.

Because when you take the step toward API-first IT and find a common hybrid cloud model designed for the IT as a service future, then all the major IT constituencies — from dev to ops to CISO to business process architects to integrators to users — focus on the services. Just the services.

Once IT focuses on IT as a service, the target for deployment can best be managed programmatically, based on rules and policies. Eventually, managing the best dynamic mix of on-premises and public-cloud services can be optimized and automated using algorithms, compliance dictates, cost modeling, and experience. This hybridization can be extended down to the microservices and container levels, but only if the platform and services are under a common foundation.
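As a toy sketch of that idea -- not a Microsoft feature, just an illustration of rules-based targeting -- placement becomes a pure policy function once every workload is described uniformly:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    data_sovereignty: bool   # must the data stay in-country/on-premises?
    peak_burst: bool         # does demand spike beyond on-prem capacity?
    monthly_cost_public: float
    monthly_cost_private: float

def place(w: Workload) -> str:
    """Return a deployment target from simple, auditable policy rules."""
    if w.data_sovereignty:
        return "private-cloud"   # compliance dictates location
    if w.peak_burst:
        return "public-cloud"    # elasticity wins for spiky demand
    # Otherwise, let cost modeling decide.
    return ("public-cloud"
            if w.monthly_cost_public < w.monthly_cost_private
            else "private-cloud")

print(place(Workload("payroll", True, False, 900.0, 1200.0)))       # private-cloud
print(place(Workload("campaign-site", False, True, 300.0, 800.0)))  # public-cloud
```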

In IT, architecture is destiny. And business supported as management of services, with the infrastructure abstracted away, is the agile Digital Business that innovates and analyzes better than the competition.

A common set of API and data services that spans on-premises and public clouds is essential for creating hybrid clouds that support and propel business needs. With the right cloud model in place, IT leaders gain the freedom to acquire, deploy and broker all the services any business needs. IT becomes the source of business value streams, while the hybrid cloud supports that.

API-first private cloud instances on converged infrastructure with automated hybrid cloud services management is the obvious future. Too bad there have been so few clear paths to attainment of this end-game.

It just now looks like Microsoft is poised to get there first and best. Yes, it really is a new Microsoft.

[Disclosure: Microsoft defrayed travel expenses for me to attend a recent Microsoft Azure Stack workshop and industry analyst gathering.]


Monday, January 18, 2016

Procurement in 2016—The supply chain goes digital

The next BriefingsDirect business innovation thought leadership discussion focuses on the heightened role and impact of procurement as a strategic business force.

We'll explore how intelligent procurement is rapidly transforming from an emphasis on cost savings to creating new business value and enabling supplier innovations.

As the so-called digital enterprise adapts to a world of increased collaboration, data access, and business networks, procurement leaders can have a much bigger impact, both inside and outside of their companies.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about the future of procurement as a focal point of integrated business services we’re joined by Kurt Albertson, Principal of Advisory Services at The Hackett Group in Atlanta, and Dr. Marcell Vollmer, Chief Operating Officer at SAP Ariba and former Chief Procurement Officer at SAP. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: We're looking at mobile devices being used more and more for business. We have connected business networks. How are these trends impacting procurement, and why is procurement going to have a bigger impact as time goes on?

Vollmer: I see a couple of disruptive trends, which are very important and are directly impacting procurement.

We see how smartphones and tablets have changed the way we work on a daily basis, not to forget big data, the Internet of Things (IoT), and Industry 4.0. So, there are a lot of technology trends out there that are very important.

On the other side, we also see completely new business models taking off. Uber is the largest taxi company without owning a single cab. Airbnb is basically the same: the largest accommodation provider without owning a single bed. We also see companies like WhatsApp, Skype, and WeChat. They don't own the infrastructure anymore, in the way we knew from the past.

I could mention a couple more, like Alibaba. Everybody knows it was the highest IPO in history, with an initial market capitalization of around $230 billion, and they don't even hold inventory. What we're seeing are fundamental changes: the technology on one side and the new business models on the other.

We now see the impact here for procurement. When business models are changing, procurement also needs to change. Companies intend to simplify the way they do business today.

Complex processes

We see a lot of complex processes. We have a lot of complex business models. Today it needs to be "Apple easy" and "Google fast." This is simply what millennials expect in the market.

But we also see that procurement, as a function, is itself transforming into a service. That is definitely one trend. We see a different strategic impact: what the lines of business ask of procurement is more important and is on the agenda for the procurement function.

Let me add one last topic: the evolution of the Chief Procurement Officer (CPO) role. Given the different trends in the market, and the requirements they indicate for procurement, the role of procurement, as well as the CPO role in the 21st century, will definitely change.

I believe that the CPO role might evolve into a Chief Collaboration Officer role. Or, in the future, as the focus moves more and more to business value, a Chief Value Officer role might be the next big step.

Gardner: Kurt, we're hearing a lot from Marcell about virtual enterprises. When we say that a major retailer doesn’t have an inventory, or that a hotel rooms coordinator doesn’t have any beds, we're really now talking about relationships. We're talking about knowledge rather than physical goods. Does that map in some way to the new role of the CPO? How has the virtual enterprise impacted the procurement process?

Albertson: Marcell brought up some great points. Hackett is a quantitatively driven organization, so let me share with you some insights from a very recent Key Issues Study that we did for 2016. This is a study we do each year, looking forward across the market. We're usually talking with heads of procurement about where the focus is, what the priorities are, what's going to have the biggest impact on success, and what capabilities they're building out.

Albertson
Let me start at a high level. A lot of things that Marcell talked about in terms of elevating procurement’s role, and more collaboration and driving more value, we saw it quite strongly in 2015 -- and we see it quite strongly in 2016.

In 2015, when we did our Key Issues Study, the number one objective of the procurement executive was to elevate the role of procurement to what we called a trusted adviser, and certainly you've heard that term before.

We actually put a very solid definition around it, but achieving the role of a trusted adviser, in itself, is not the end-game. It does allow you to do other things, like reduce costs, tap suppliers for innovation, and become more agile as an organization, which was in the top five procurement objectives as well.

Trusted advisor

So when we look at this concept of the trusted-adviser role of procurement, just as Marcell said, procurement executives across multiple industries are asking, "How do we change the perception of procurement in the eyes of our stakeholders, so that we can do more higher-value activities?"

For example, if you're focusing on cost, we talk a lot about the quantity of spend influence, versus the quality of spend influence. In fact, in our forum in October, we had a very good discussion on that with our client base.

We used to measure the success of the procurement organization by cost savings, but one of the key metrics a lot of our clients look at is percent of spend influenced by procurement. We have a formal definition for it, but ask different people and you'll get different definitions of spend influence.

What we've realized is that world-class organizations are in the 95 percent range, and 90 percent-plus on the indirect side. Non-world-class procurement organizations are lagging, at around 70 percent influence. Where do we go from here? It has to be about the quality of the spend influence.
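To make that metric concrete, here is a minimal sketch in Python of how percent of spend influenced might be computed from line-item spend records. The field names and the influence flag are hypothetical; Hackett's formal definition is not reproduced here.

```python
# Minimal sketch: computing "percent of spend influenced by procurement"
# from line-item spend records. Field names and the influence flag are
# illustrative, not Hackett's formal definition.

line_items = [
    {"category": "IT hardware", "amount": 120_000, "influenced": True},
    {"category": "Marketing services", "amount": 80_000, "influenced": False},
    {"category": "Logistics", "amount": 200_000, "influenced": True},
]

total_spend = sum(item["amount"] for item in line_items)
influenced_spend = sum(item["amount"] for item in line_items if item["influenced"])

pct_influenced = 100 * influenced_spend / total_spend
print(f"Spend influenced by procurement: {pct_influenced:.1f}%")  # 80.0%
```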

And what our data shows very clearly is that world-class organizations are involved during the requirements and planning stages with their internal stakeholders much more often than non-world-class organizations. The latter are usually involved either once the supplier has been identified, or for the most part, once requirements are identified and the stakeholder already knows what they want.

In both cases, you're influencing. But in the world-class case, you're doing a much better job of quality of influence, and you can open up tremendous amounts of value. It changes the discussion with your internal stakeholders from, "We're here to go out and competitively bid and help you get the best price," to, "Let’s have a conversation with what you're trying to achieve and, with the knowledge, relationships, and tool sets that we have around the supply markets and managing those supply markets, let us help you get more value in terms of what you are trying to achieve."

We've worked with organizations on how to become a trusted adviser, and we've built some frameworks around that. One of the key things is exactly what you just talked about. In fact, we published a forward-looking, 10-year-out Procurement 2025 vision piece of research a few months ago, and big data and analytics were key components of it.

When we look at big data, like a lot of the things Marcell already talked about, most procurement groups aren’t very good at doing basic spend analytics, even with all the great solutions and processes that are out there. Still, when we look out in the market, there are a lot of companies that don't have line-item-level detail, or they don't have 90 percent or 95 percent-plus data quality with respect to spend analytics.

We need to move way beyond that for procurement to really elevate its role within the organization. We need to be looking at all of the big data that's out there in and across the supply networks, and across a lot of other sources of information. You have PDAs and all kinds of information.

We need to be constructively pulling that information together in a way that then allows us to marry it up with our internal information, do more analysis with that, synthesize that data, and then turn it over and provide it to our internal stakeholders in a way that's meaningful and insightful for them, so that they can then see how their businesses are going to be impacted by a lot of the trends out in the supply markets.
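As a rough illustration of that marry-up step, here is a sketch, assuming pandas and entirely hypothetical data feeds and column names, that joins an external price-outlook feed to internal spend data and estimates the budget impact per category for stakeholders.

```python
# Illustrative sketch only: marrying an external supply-market signal with
# internal spend data, then summarizing exposure per category. The data
# sources, categories, and column names are hypothetical.
import pandas as pd

internal_spend = pd.DataFrame({
    "category": ["Resins", "Steel", "Freight"],
    "annual_spend": [5_000_000, 12_000_000, 3_000_000],
})

market_outlook = pd.DataFrame({
    "category": ["Resins", "Steel", "Freight"],
    "expected_price_change_pct": [4.5, -2.0, 8.0],  # external forecast feed
})

# Join the two views and estimate the budget impact of expected price moves.
exposure = internal_spend.merge(market_outlook, on="category")
exposure["budget_impact"] = (
    exposure["annual_spend"] * exposure["expected_price_change_pct"] / 100
)
print(exposure.sort_values("budget_impact", ascending=False))
```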

Transformational impact

This year, we asked a question that I thought was interesting: which trends will have the greatest transformational impact on the way procurement performs its job over the next decade? I was shocked. Three of the top five have to do with technology: predictive analytics and forecasting tools, cloud computing, and mobility.

Those sit in the top five along with the global economy and the millennial workforce, two other major topics that were in our forward-looking Procurement 2025 paper.

When we look at the trend that’s going to have the greatest transformational impact, it's predictive analytics and forecasting tools in terms of how procurement performs its job over the next 10 years. That’s big.

Consider the fact that we aren’t very good at doing the basics around spend analytics right now. We're saying that we need to get a lot better to be able to predict what’s going to happen in the future in terms of budgets, based on what we expect to happen in supply markets and economies.

We need to put in the hands of our stakeholders tool sets that they can then use to look at their business objectives and understand what's happening in the supply market and how that might impact them in two to three years. That way, when you look at some of the industries out there, when your revenue gets cut by more than half within a year, you have a plan in place that you can execute to take out cost in a strategic way, as opposed to just taking a broad axe to it.

Vollmer: I couldn't agree more with what Kurt said about the importance of these top priorities today. It's also very important to ask what you want to do with the data. First of all, you need technology. You need to get access to all the different sources of information that you have in a company.

We see today how difficult that is. I could echo what Kurt said about the challenges. A lot of procurement functions aren't even capable of getting the basic data to drive procurement, to do spend analytics, and then to link that to supply-chain data. In the future this will definitely change.

Good time to purchase

Think about what you can do with the data through predictive analytics: you can say, "This is a good time to buy, based on the cycle we've seen over this time frame." That gives you a good moment to make a purchase decision and go to the market.

And what do you need to do that? You need the right tools -- spend visibility tools -- and access to the data, to drive end-to-end transparency across everything you have for the entire source-to-pay process.
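As a toy example of the kind of signal Marcell describes, here is a sketch of a cycle-based buy flag. The price history, window, and threshold are all invented for illustration; a real predictive model would draw on far richer market data.

```python
# Toy "good time to buy" signal: flag a purchase window when the current
# price sits well below its trailing average. Data and thresholds are
# purely illustrative.

def good_time_to_buy(prices, window=6, discount=0.95):
    """Return True if the latest price is below 95% of the trailing mean."""
    if len(prices) < window + 1:
        return False  # not enough history to judge the cycle
    trailing = prices[-(window + 1):-1]  # the `window` prices before the latest
    return prices[-1] < discount * (sum(trailing) / len(trailing))

monthly_prices = [102, 104, 101, 99, 103, 105, 93]  # hypothetical commodity feed
print(good_time_to_buy(monthly_prices))  # True: the latest price dips below the cycle
```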

Gardner: Another thing that we're expecting to see more of in 2016 is collaboration between procurement inside an organization and suppliers -- finding new ideas for how to do things, whether it’s marketing or product design.

Kurt, do you have any data that supports this idea that this is not just a transaction, that there is, in fact, collaboration between partners, and that that can have quite an impact on the role and value that the procurement officer and their charges bring back to their companies?

Albertson: Let me tie it into the conversation we've been having. We just talked about a lot of data and analytics, and about putting that in the hands of procurement folks so that they can go have conversations and really be advisers who help enable business strategies, as opposed to just looking at historical spend-cost analysis, for example. That helps procurement category managers raise their game and really be perceived as adding more value, becoming this trusted adviser.

Hackett Group works with hundreds of Global 1000 organizations, and probably still one of the most common discussions we have, even in the on-site training support we do, is around strategic category management. It's switching the game from strategic sourcing -- which we view as a process that ends with aggregating spend, running a competitive bid, and awarding a contract -- to a more formal category-management framework.

That provides a whole set of broader value levers you can pull to drive value, including supplier relationship management (SRM), which includes working with suppliers to innovate. It impacts a much broader set of the value objectives our stakeholders have -- including spend cost reduction, but not limited to it.

We see a lot of interest in category management today. In our 2016 Key Issues Study, when we look at the capabilities organizations are building out, we see this shift from strategic sourcing to category management.

Strategic sourcing as a capability was always number one. It still is, but now number two is this category management framework. Think of those two as bookends, with category management being a much more mature framework than just strategic sourcing.

Category management

Some 80 percent of companies said category management is a key capability that they need to use to drive procurement’s objectives, and that’s because they're impacting a broader set of value objectives.

Now, the value levers they're pulling are around innovation and SRM. In fact, if you look at our 2016 Key Issues Study again, tapping supplier innovation is actually a little further down the list.

When we look at everything on the list, it's actually ninth, with 55 percent of procurement executives saying it's of critical or major importance.

The interesting thing, though, is that if you compare 2015 with 2016, in 2016 it moves nearly into the top three in terms of where significantly more focus is being placed as a key capability. SRM has been a hot topic for our clients for a long time, but this tells us it's getting more and more important.

We're seeing a lot of organizations still with very informal SRM and supplier-innovation frameworks in place. It's done within the organization, but it's done haphazardly by individuals within the business and by key stakeholders. A lot of times, that activity isn't necessarily aligned with where it can drive the most value.

When we work with a company, it's quite common for them to say, "These are our top five suppliers that we want to innovate with." And you ask, "If innovation is your objective, either to drive cost reduction or to help improve the market effectiveness of your products or services and drive greater revenue, whatever the reason you are doing that, are these suppliers going to get you there?"

Probably 7 out of 10 times, people come back to us and say that they picked these suppliers because they were the largest spend impact suppliers. But when you start talking about supplier innovation, they freely admit that there's no way that supplier is going to engage with them in any kind of innovation.

We have to rethink how we look at our supply base and really understand where those suppliers are that can truly move the needle on supplier innovation and engage them through a category-management framework that pulls the value lever of SRM and then track the benefits associated with that.

And as I said, looking at our 2016 Key Issues Study, supplier innovation was the fastest-growing focus objective we saw when we asked procurement executives.

Gardner: Marcell, back to you. It sounds as if the idea of picking a supplier is not just a cost equation, but that there is a qualitative part to that. How would you automate and scale that in a large organization? It sounds to me like you need a business network of some sort where organizations can lay out much more freely what it is that they're providing as a service, and then making those services actually hook up -- a collaboration function.

Is that something you're seeing at Ariba as well -- the business network helping procurement move from a transaction-cost equation to a much richer set of services?

Key role

Vollmer: Business networks play a key role in our business strategy, but also in how we help companies simplify their complexity.

When you reach out to a marketplace, you're looking for things. You're probably also starting discussions and getting additional information. In the automotive industry, you're not necessarily looking for paint, or a color for a car. Why not get an already-painted car, as a service, in the end?

This is a very simple example, but now think about going to the next level, evolving toward a technology partnership, where you reach out to find new suppliers, gathering more and more information and also asking others who have probably already done similar things.

When you do this on a network, you will probably get responses from suppliers you wouldn't even have thought of as having capabilities like that. This is a process that, in the future, will continue to support the transformation to a more value-focused procurement function, and simplicity is definitely key.

You need to run simple. You need to focus on your business, and you need to get rid of the complexity. You can't have all the information and do everything on your own. You need to focus on your core competencies and help the business get whatever it needs to be successful from the suppliers out in the market -- the best price for the desired quality, with on-time delivery.

The magic triangle of procurement -- cost, quality, and time -- is not a big secret in the procurement world. Everybody knows that it's not possible to optimize all three at once; you need to find the right mix. You also need to be agile and work with suppliers in a different way, not focusing only on price, as a lot of operational, technically minded procurement functions are used to doing. You need to focus on what you really want to achieve as a business outcome.

On a network you can get help from suppliers, from the collaboration side also, in finding the right ones to drive business value for your organization.

Gardner: Another major area where we're expecting significant change in 2016 is the use of procurement as a vehicle for risk reduction. Having visibility through networks -- elevating the use of data analysis, everything we have talked about, in addition to cost efficiencies and to bringing innovation into play between suppliers and consumers at industrial scale -- gives us deep insight into supply chains, and therefore the ability to head off a variety of risks. These risks can be around security, around keeping supply chains healthy and functioning, and even around unknown factors that could damage an entire company's reputation.

Kurt, do you have some data, some findings that would illustrate or reinforce this idea that procurement as a function, and CPOs in particular, can play a much greater role in the ability to detect risk and prevent bad things from happening to companies?

Supply continuity risk

Albertson: Again, I'll go back to the 2016 Key Issues Study and talk about objectives. Reducing supply-continuity risk is actually number six on the list -- and it's a long list -- so that's pretty important.

A little further down, we see things like regulatory noncompliance risk, which is certainly core, though it's more relevant to some industries than others. So from our perspective, risk sits at number six on the list of procurement's 2016 objectives, and the question is what to do about it.

There's another objective that I talked about earlier, which is to improve agility. It's actually number four on the list for procurement 2016 objectives.

I look at risk management and procurement agility going hand in hand. The way data helps support that is by getting access to better information, really understanding where those risks are, and then being able to quickly respond and hopefully mitigate those risks. Ideally, we want to mitigate risks and we want to be able to tap the suppliers themselves and the supply network to do it.

In fact, we attacked this idea of supply risk management in our Procurement 2025 study. It's really about going beyond a particular supplier and looking at all the suppliers out there in the network -- their suppliers, their suppliers' suppliers, and so on.

But then, it's also tapping all the other partners that are participating in those networks, and using them to help support your understanding and proactively identifying where risk might be occurring, so that you can take action against it.

It’s one of the key cornerstones of our 2025 research. It's about tapping supplier networks and pulling information from those networks and other external sources, pulling that information into some type of solution that can help you manage and analyze that information, and then presenting that to your internal stakeholders in a manner that helps them manage risk better.

And certainly, an organization like SAP Ariba is in a good position to do that. That’s obviously one of the major barriers with this big-data equation. How do we manage and analyze all this data? How do we make sense of it? That's where we see a lot of our clients struggling today.

We have had some examples of clients that have built out an SRM group inside their procurement organization as a center-of-excellence capability purely to pull this information that resides out in the market, whether it’s supplier market intelligence or information flowing from networks and other network partners. Marrying that information with their internal objectives and plans, and then synthesizing that information, lets them put that information in the hands of category managers.

Category managers can then sit down with business leaders and offer fact-based opinions about what's going to happen in those markets from a risk perspective. We could be talking about continuity of supply, pricing risks and the impact on profitability, or what have you. Whatever those risks are, you're able to use that information. It goes back to elevating the role of trusted adviser: the more information and insight you can put into their hands, the better.

The indirect side

Obviously, when we look at some of the supply networks, there's a lot of information that can be gleaned out there. Think about different buyers that are working with certain suppliers in getting information to them on supply risk performance. To be frank, a lot of organizations still don’t do a great job on the indirect side.

There are opportunities -- and we're already seeing it in some of these markets -- for supply networks to start with the supplier-performance piece of this, tap the network community to provide insight into it, and deliver a risk perspective that helps identify where risk can be managed better.

But there are a lot of other sources of information, and it's really up to procurement to figure this out across all the sources of big data. Whether it's sensor data, social data, transactional data, operational data, partner data, machine-to-machine (M2M) data, or cloud-services data, there's a lot of information. We have a model that looks at this as three levels of analytics.

The first level is just recording things and generating reports. At the second level, you're understanding, generating information that can then be used for analytics. At the third level, you're anticipating: you have intelligence and you're moving toward more real-time analytics, so that you can respond more quickly to potential risk.
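As a stylized illustration of those three levels, here is a sketch using hypothetical on-time-delivery records for a single supplier; the data set, thresholds, and alert rule are invented for the example.

```python
# Sketch of the three analytics levels on one hypothetical data set
# (on-time-delivery records for a supplier, most recent last).
# Level 1 records and reports, level 2 analyzes a trend, level 3
# anticipates and raises an alert.

deliveries = [1, 1, 1, 0, 1, 0, 0]  # 1 = on time, 0 = late

# Level 1 -- record and report
on_time_rate = sum(deliveries) / len(deliveries)
print(f"Overall on-time rate: {on_time_rate:.0%}")

# Level 2 -- understand: compare recent performance to the baseline
recent_rate = sum(deliveries[-3:]) / 3
print(f"Recent on-time rate: {recent_rate:.0%}")

# Level 3 -- anticipate: alert when the trend points to continuity risk
if recent_rate < on_time_rate - 0.2:
    print("ALERT: deteriorating delivery trend -- trigger the mitigation plan")
```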

I mentioned this idea of agility as being key on the procurement executive's list. Agility can mean many things, but with respect to risk, one thing it means is that you can't avoid every risk event. Some risk events are going to happen, and there's nothing you can do about them, but you can proactively plan for when they occur, so that you have a well-thought-out, analytics-based plan to execute in order to minimize the impact.

Time and time again, in the case studies and research out there, the organizations that are much more agile in responding to the risks they can't avoid minimize the impact of those risks significantly compared to others.

Gardner: As we look ahead to 2016, we're certainly seeing a lot on the plate for the procurement organization. They're facing more technology issues, cultural change, and the prospect of becoming a networked organization. Marcell, how do you recommend that procurement professionals prepare themselves? What would you recommend they do to meet these challenges in 2016? How can they be ready for such a vast amount of change?

Vollmer: Procurement organizations need to ensure that they really help the business as much as possible, and also evolve their own functions to the next level. Number one, procurement functions need to have the right organizational setup in place -- one that fits the company's overall organization and its spectrum of lines of business.

The second component, which I think is very important, is an end-to-end focus on the process side. Source-to-pay is a clearly defined term, but it looks a little different in every company. When you really want to optimize and streamline your process, you want to use business networks and strategic sourcing tools, and to run transactions at a highly automated level to capture the automation potential in purchase orders and invoices, for example.

One defined process

Then you need to ensure that you have one defined process, with systems covering all the different parts of that process. This needs to be highly integrated, and integrated into your entire IT landscape.

Finally, you also need to consider change management. This is the most important component: it's how you help the buyers in your organization transform and evolve to the next level, into a more strategic procurement function.

As Kurt said about the data, if you don't have even the basic data, you're very far from driving predictive analytics and prescriptive guidance. Therefore, you need to ensure that you also invest in your talent and drive the change-management side.

Those are the three components I would name for 2016. It sounds easy, but I've talked to a lot of CPOs, and this journey might take a couple of years. Procurement doesn't have a lot of time, though. We need to define the right measures and the right actions now, to ensure that procurement can help the business and create value.

As was already mentioned, this needs to go beyond just creating procurement savings. I believe this concept is here to stay. In the end, the value you can create is what counts.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.
