Friday, July 8, 2016

ChainLink analyst on how cloud-enabled supply chain networks drive companies to better manage finances, procurement

The next BriefingsDirect business innovation thought leadership discussion focuses on how companies are exploiting advances in procurement and finance services to produce new types of productivity benefits.

We'll now hear from a leading industry analyst on how more data, process integration, and analysis efficiencies of cloud computing are helping companies to better manage their finances in tighter collaboration with procurement and supply-chain networks. This business-process innovation exchange comes in conjunction with the Tradeshift Innovation Day held in New York on June 22, 2016.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about how new trends are driving innovation into invoicing and spend management, we're joined by Bill McBeath, Chief Research Officer at ChainLink Research in Newton, Mass. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What's going on in terms of disruption across organizations that are looking to do better things with their procurement-to-payment processes? What is it that's going on, that's focusing them more on this? Why is the status quo no longer acceptable?

McBeath: There are a couple of things. There's a longer-term trend toward digitization, moving away from paper and manual processes. That's nothing new but, having said that, when we do research we always see a huge percentage of companies that are still either on paper or, even more commonly, on a mix. They have some portion of their stuff on paper and another portion that's automated. That's foundational and still in process.

A big part of that is getting the long tail of suppliers on board. The large suppliers have the internal resources to get hooked up with these networks and systems and get automated. Smaller suppliers -- think of companies with fewer than 100 people -- or even mid-sized suppliers have no dedicated IT resources. They may have a very limited ability to do these things.

That's where the challenge is, and that's where we see some of the innovations in helping lower the barriers for them. It's helping a company that's trying to automate all of its invoices or other documents -- which can be a mix of paper, fax, e-mail, and EDI -- and then gradually moving that customer base over to some sort of automation, whether it's through a portal or starting to directly integrate their systems.

So that ability to get the long tail in, so that ultimately everything comes in digitally, is one of the things we're seeing.

Common denominator

Gardner: In order to get digital, as you put it, it seems like we need a common-denominator environment that all the players -- the suppliers, the buyers, the partners -- can play in. It can't be too confining, but it can't be too loosey-goosey and insecure either. Have we found that balance between the right level of platform that's suitable for these processes but that doesn't stifle innovation and doesn't push people away because of rigid rules?

McBeath: I want to make a couple of points on that. One is about the network approach versus the portal approach. They are distinct approaches. In the portal approach, each buyer sets up their own portal, and that's how they try to get that long tail in. The problem for the suppliers is that if they have dozens or hundreds of customers, they now have dozens or hundreds of portals to deal with.

The network is supposed to solve that problem with a network of buyers and suppliers. If you have a supplier who has multiple buyers on the network, they just have to integrate once to the network. That's the theory, and it helps, but the problem there is that there are also lots of networks.

No one has cracked the nut yet, from the supplier’s point of view, on how to avoid dealing with all these multiple technologies. There are a couple of companies out there that are trying to build this supplier capability to integrate once into one network that then goes out and connects to all the other networks. So, people are trying to solve that problem.

Gardner: And we've seen this before with Salesforce.com, for example: an environment to develop on, providing services that people would use in the customer relationship management (CRM) space. We saw in June that Tradeshift has come out with an app store. Is this what you're getting at? Do you think the app store model, with a development element to it, is an important step in the right direction?

McBeath: I mentioned there were two points. The network point was one point, and the second one is exactly what you're talking about, which is that you may have a network, but it's still constrained to just that solution provider's functionality.

The Salesforce.com or Tradeshift approach is different. It's not just a set of APIs to integrate to their application; it's really a full development kit, so that you can build applications on top of that.

There's a bit of a fuzzy line there, but there are definitely things you can point to. Are there enough APIs that you can write an application from scratch? That's question number one. Does that include UI integration? That would be the second question I would ask, so that when you develop using their UI APIs and UI guidelines, it actually looks as fully integrated as if it were one application.
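
To make that distinction concrete, here is a minimal sketch -- the platform, endpoints, and manifest format are entirely hypothetical assumptions, not Tradeshift's or Salesforce.com's actual APIs -- contrasting plain API integration with building an app on a development kit:

```python
import requests

# Hypothetical endpoints and token -- illustrative assumptions only.
PLATFORM = "https://api.example-network.com/v1"
HEADERS = {"Authorization": "Bearer <app-token>"}

# Question one: plain API integration -- push a document into the network.
invoice = {"supplier_id": "S-42", "amount": 1250.00, "currency": "EUR"}
requests.post(f"{PLATFORM}/invoices", json=invoice, headers=HEADERS).raise_for_status()

# Question two: UI integration -- a development kit typically adds an app
# manifest like this, so the host platform renders your app inside its own
# UI and it looks as integrated as if it were one application.
manifest = {
    "app_id": "spend-insights",
    "embeds_in": ["invoice_detail_view"],  # UI extension point
    "requires_scopes": ["invoices:read"],
}
```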

There's also a philosophy point of view. For more and more large solution providers, the light bulb is going on: they can't necessarily build it all. Everyone has had partners, so there's nothing new about partnering, having ISV partners, and integrating. But it's a wholesale shift to building a whole toolkit, promoting it, making it easy, and then trying to get others to build those pieces. That's a different kind of approach.

Gardner: So clearly, a critical mass is necessary to attract enough suppliers that then attracts the buyers, that then attracts more development, and so on. What's an important element to bring to that critical mass capability? I'm thinking about data analytics as one, mobile enablement, and security. What's the short list of critical factors that you think these network and platform approaches need to have in order to reach critical mass?

Critical mass

McBeath: I would separate it into technology and industry-focused things, and I'll cover the second one first. Supplier communities, especially for direct materials, tend to cluster around industries. What I see for these networks is that they can potentially reach critical mass within a specific industry by focusing on that industry. So, you get more buyers in the industry, more suppliers in the industry, and now it becomes almost the de facto way to do business within that industry.

Related to that, there are sometimes very industry-specific capabilities that are needed on the platform. It could be regulated industries like pharma or chemicals that have to do certain things differently from other industries. Or it could be aerospace and defense, which has super-high security requirements and may look for robust identity-management capabilities.

That would be one aspect of building up a critical mass within an industry. Indirect is a little more of a horizontal play; indirect suppliers tend to go more across industries. In that case, it can be just the aggregate size of the marketplace, but it can also be the capabilities that are built in.

One interesting part of this is the supplier’s perspective. For some of these networks, what they offer to suppliers is basically a platform to get noticed and to transact. But some companies are trying to provide more value to suppliers, not just in terms of how they market themselves, but also outward-facing supply-chain and logistics capabilities. They're building rich capabilities that suppliers might actually be willing to pay for, instead of just paying for the honor of transacting on a platform.

Gardner: Suffice to say, things are changing rapidly in the procure-to-pay space. What advice would you give both buyers and suppliers when it comes to surveying the landscape, making evaluations, and making good decisions about being on the leading edge of disruption -- taking advantage of it rather than being injured or negatively impacted by it?

McBeath: That can be a challenging question. Eventually, the winners become quite obvious in the network space, because certain networks, as I mentioned, will dominate within an industry. Then, it becomes a somewhat easy decision.

Before that happens, you're trying to figure out if you're going to bet on the right horse. Part of that is looking at the kind of capabilities on the platform. One of them that's important, going back to this API extensibility thing, is that it's very difficult for one platform to do it all.

So, you'd look at whether they can do 80 percent of what you need. But then, do they also provide the tools for the other 20 percent? Even though that 20 percent may be a small amount of functionality, it may be functionality that's critical for your business -- something you can't live without or that delivers high value. If the platform gives you the ability to build that yourself, so that you can really get the value, that's always a good thing.

Gardner: It sounds like it would be a good idea to try a lot of things on, see what you can do in terms of that innovation at the platform level, look at the portal approach, and see what works best for you. We've heard many times that each company is, in fact, quite different, and each business grouping and ecosystem is different.

Getting the long tail

McBeath: There's a supplier perspective, and there is a buyer perspective. Besides your trading partners on the platform, from a buyer’s perspective, one of the things we talked about is getting that long tail.

Buyers should be looking at, and interested in, what level of effort it takes to onboard a new supplier, how automated that can be, and how attractive it is to the supplier. You can ask or tell your suppliers to get on board, but if it's really hard, expensive, or time-consuming for them, it's going to be like pulling teeth. Whereas if there are benefits for the suppliers, it's easy to do, and it actually helps them, it becomes much easier to get that long tail of suppliers on board.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Tradeshift.


Thursday, July 7, 2016

How European GDPR compliance enables enterprises to both gain data privacy and improve their bottom lines

The next BriefingsDirect security market transformation discussion focuses on the implications of the European Parliament’s recent approval of the General Data Protection Regulation or GDPR.

This sweeping April 2016 law establishes a fundamental right to personal data protection for European Union (EU) citizens. It gives enterprises that hold personal data on any of these people just two years to reach privacy compliance -- or face stiff financial penalties.

But while organizations must work quickly to comply with GDPR, the strategic benefits of doing so could stretch far beyond data-privacy issues alone. Attaining a far stronger general security posture -- one that also provides a business competitive advantage -- may well be the more impactful implication.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

We've assembled a panel of cybersecurity and legal experts to explore the new EU data privacy regulation and discuss ways that companies can begin to extend these needed compliance measures into essential business benefits.

Here to help us sort through the practical path of working within the requirements of a single digital market for the EU are: Tim Grieveson, Chief Cyber and Security Strategist, Enterprise Security Products EMEA, at Hewlett Packard Enterprise (HPE); David Kemp, EMEA Specialist Business Consultant at HPE; and Stewart Room, Global Head of Cybersecurity and Data Protection at PwC Legal. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, the GDPR could mean significant financial penalties in less than two years if organizations don’t protect all of their targeted data. But how can large organizations look at this under a larger umbrella, perhaps looking at this as a way of improving their own security posture?

Grieveson: It’s a great opportunity for organizations to take a step back and review the handling of personal information and security as a whole. Historically, security has been about locking things down and saying no.

We need to break that mold. But, this is an opportunity, because it’s pan-European, to take a step back, look at the controls that we have in place, look at the people, look at the technology holistically, and look at identifying opportunities where we can help to drive new revenues for the organization, but doing it in a safe and secure manner.

Gardner: David, is there much difference between privacy and security? If one has to comply with a regulation, doesn’t that also give them the ability to better master and control their own internal destiny when it comes to digital assets?

Kemp: Well, that’s precisely what a major European insurance company headquartered in London said to us the other day. They regard GDPR as a catalyst for their own organization to appreciate that the records management at the heart of their organization is chaotic. Furthermore, what they're looking at, hopefully with guidance from PwC Legal, is for us to provide them with an ability to enforce the policy of GDPR, but expand this out further into a major records-management facility.

Gardner: And Stewart, wouldn’t your own legal requirements for any number of reasons be bolstered by having this better management and privacy capability?
Room: The GDPR obviously is a legal regime. So it’s going to make the legal focus much, much greater in organizations. The idea that the GDPR can be a catalyst for wider business-enabling change must be right. There are a lot of people we see on the client side who have been waiting for the big story, to get over the silos, to develop more holistic treatment for data and security. This is just going to be great -- regardless of the legal components -- for businesses that want to approach it with the right kind of mindset.

Kemp: To complement that, there's a recognition I heard the other day from a corporate client, who said: "I get it. If we could install a facility that would help us with this particular regulation, to a certain extent relying once again on external counsel to assist us, we could almost feed any other regulation into the same engine."

That is very material in terms of getting sponsorship, buy-in, and interest from the front of the business, because this isn't a facility simply for this one particular type of regulation. There's so much more that could be engaged on.

Room: The important part, though, is that it’s a cultural shift, a mindset. It’s not a box-ticking exercise. It’s absolutely an opportunity, if you think of it in that mindset, of looking holistically. You can really maximize the opportunities that are out there.

Gardner: And because we have a global audience for our discussion, I think that this might be the point on the arrow for a much larger market than the EU. Let’s learn about what this entails, because not everyone is familiar with it yet. So in a nutshell, what does this new law require large companies to do? Tim, would you like to take that?

Protecting information

Grieveson: It's ultimately about protecting European citizens' private and personal information. The legislation gives some guidance on how to protect data. It talks about encryption and anonymization of the information, should that inevitable breach happen, but it also talks about how to enable a quicker response to a breach.
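
As a concrete illustration of the anonymization idea -- a minimal sketch only; the field names and the keyed-hash approach are assumptions for illustration, not anything prescribed by the GDPR text -- a record might be stripped of direct identifiers before analysis like this:

```python
import hashlib
import hmac

# Hypothetical secret, kept separately from the data store; whoever holds it
# is the only party able to re-link pseudonyms to real identities.
PSEUDONYM_KEY = b"rotate-me-and-keep-in-a-vault"

def pseudonymize(record):
    """Replace direct identifiers with a keyed hash so analytics can still
    group events per person without exposing who that person is."""
    token = hmac.new(PSEUDONYM_KEY,
                     record["email"].lower().encode("utf-8"),
                     hashlib.sha256).hexdigest()
    return {
        "user_token": token,                    # stable pseudonym
        "country": record["country"],           # coarse attribute kept for analysis
        "purchase_total": record["purchase_total"],
        # name, email, and street address are deliberately dropped
    }

print(pseudonymize({"email": "Jane@Example.com", "name": "Jane Doe",
                    "country": "DE", "purchase_total": 42.50}))
```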

To go back to David's point earlier on, the key part of this is really around records management. It's understanding what information you have and where, and classifying that information; what you need to do with it is key to this, ultimately because of the bad guys out there. In my world as an ex-CIO and an ex-CISO, I was always looking to protect myself from the bad guys, who were constantly changing their processes to monetize what they steal.

They're ultimately out to steal something, whether it be credit card information, personal information, or intellectual property (IP). Organizations often don’t understand what information they have where or who owns it, and quite often, they don’t actually value that data. So, this is a great approach to help them do that.

Gardner: And what happens if they don’t comply? This is a fairly stiff penalty.

Grieveson: It is. Up to four percent of the parent company’s annual revenue is exposed as part of a fine, but also there's a mandatory breach notification, where companies need to inform the authorities within 72 hours of a breach.

If we think of the Ponemon report, the average time that the bad guy is inside an organization is 243 days, so clearly that's going to be a challenge for lots of organizations that don't know they've been breached. And the remediation afterwards, once that inevitable breach happens, takes on average anywhere from 40 to 47 days globally.

We're seeing that trend going in the wrong direction. We're seeing it getting more expensive. On average, a breach costs in excess of US$7.7 million, but we are also seeing the time to remediate going up.

This is what I talked about with this cultural change in thinking. We need to get much smarter about understanding the data we have and, when we have that inevitable breach, protecting the data.

Gardner: Stewart, how does this affect companies that aren't based in EU countries -- companies that deal with any customers, supply chain partners, alliances, the whole ecosystem? Give us a sense of the concentric circles of impact, inside the EU and beyond.

Room: Yes, the law has global effect. It's not about just regulating European activities or protecting or controlling European data. The way it works is that any entity or data controller that's outside of Europe and that targets Europe for goods and services will be directly regulated. It doesn't need to have an establishment, a physical presence, in Europe; it just has to target Europe with goods and services. Or, if that entity profiles and tracks the activity of European citizens on the web, it's regulated as well. So, there are regulated entities that are physically not in Europe.

Any entity outside of Europe that receives European data or data from Europe for data processing is regulated as well. Then, any entity that’s outside of Europe that exports data into Europe is going to be regulated as well.

So it has global effect. It’s not about the physical boundaries of Europe or the presence only of data in Europe. It’s whether there is an effect on Europe or an effect on European people’s data.

Fringes of the EU

Kemp: If I could add to that, the other point is about those on the fringes of the EU -- places such as Norway and Switzerland, and even South Africa with its POPI legislation. These countries are not part of the EU but, as Stewart was saying, because a lot of their trade goes through the EU, they're adopting local regulation that mirrors it, to provide a level playing field for their corporations.

Gardner: And this notion of a fundamental right to personal data protection, is that something new? Is that a departure and does that vary greatly from country to country or region to region?

Room: This is not a new concept. The European data-protection law was first promulgated in the late 1960s. So, that’s when it was all invented. And the first European legislative instruments about data privacy were in 1973 and 1974.

We've had international data-protection legislation in place since 1980, with the OECD guidelines, the Council of Europe convention in 1981, and the Data Protection Directive of 1995. So, we're talking about stuff that is almost two generations old in terms of priority and effect.

The idea that there is a fundamental right to data protection has been articulated expressly within the EU treaties for a while now. So, it’s important that entities don’t fall into the trap of feeling that they're dealing with something new. They're actually doing something with a huge amount of history, and because it has a huge amount of history, both the problems and the solutions are well understood.

If, the first time you deal with data protection, you feel that this is new, you're probably misaligned with the sophistication of the people who will scrutinize you and be critical of you. It's been around for a long time.

Grieveson: I think it's fair to say there is other legislation in certain industries that makes some organizations much better prepared for dealing with what's in the new legislation.

For example, in the finance industry, you have payment card industry (PCI) security around credit-card data. So, some companies are going to be better prepared than others, but it still gives us an opportunity as an auditor to go back and look at what you have and where it fits.

Gardner: Let’s look at this through the solution lens. One of the ways that the law apparently makes it possible for this information to leave its protected environment is if it’s properly encrypted. Is there a silver bullet here where if everything is encrypted, that solves your problem, or does that oversimplify things?

No silver bullet

Grieveson: I don’t think there is a silver bullet. Encryption is about disruption, because ultimately, as I said earlier, the bad guys are out to steal data, if I come from a cyber-attack point of view, and even the most sophisticated technologies can at some point be bypassed.

But what it does do is reduce that impact, and potentially the bad guys will go elsewhere. But remember, this isn't just about the bad guys; it’s also about people who may have done something inadvertently in releasing the data.

Encryption has a part to play, but it's only one of the components. On top of the technology, you need the right people and the right processes: having the data-protection officer in place, and training your business users, your customers, and your suppliers.

The encryption part isn't the only component, but it’s one of the tools in your kit bag to help reduce the likelihood of the data actually being commoditized and monetized.
Gardner: And this concept of personally identifiable information (PII) -- how does that play a role, and should companies that haven't been using it as an emphasis rethink the types of data they hold and how they identify it?

Room: The idea of PII is known to US law. It lives inside the US legal environment, and it’s mainly constrained to a number of distinct datasets. My point is that the idea of PII is narrow.

The [EU] data-protection regime is concerned with something else, personal data. Personal data is any information relating to an identifiable living individual. When you look at how the legislation is built, it’s much, much more expansive than the idea of PII, which seems to be around name, address, Social Security number, credit-card information, things like that, into any online identifier that could be connected to an individual.

The human genome is an example of personal data. It’s important that listeners in a global sense understand the expansiveness of the idea or rather understand that the EU definition of personal data is intended to be highly, highly expansive.

Gardner: And, David Kemp, when we're thinking about where we should focus our efforts first, is this primarily about business-to-consumer (B2C) data, is it about business to business (B2B), less so or more so, or even internally for business to employee (B2E)? Is there a way for us to segment and prioritize among these groups as to what is perhaps the most in peril of being in violation of this new regulation?

Commercial view

Kemp: It’s more a commercial view rather than a legal one. The obvious example will be B2C, where you're dealing with a supermarket like Walmart in the US or Coop or Waitrose in Europe, for example. That is very clearly my personal information as I go to the supermarket.

Two weeks ago, I was listening to the head of privacy at Statoil, the major Norwegian energy company, who said: "We have no B2C, but even just the employee information we have is critical to us, and we're taking the way we manage it extremely seriously."

Of course, that means this applies to every single corporation; it covers both internal and external aggregation of information.

Grieveson: The interesting thing is, as digital disruption comes to all organizations and we start to see the proliferation and the tsunami of data being gathered, it becomes more of a challenge or an opportunity, depending on how you look at it. Literally, the new [business] perimeter is on your mobile phone, where people are accessing cloud services.

If I use the British Airways app, for example, I'm literally accessing 18 cloud services through my mobile phone. That then makes the data a target to be gathered. Do I really understand what's being stored where? That's where this really helps: formalizing what information is stored where and how it is being transacted and used.

Gardner: On another level of segmentation, is this very different for a government or public organization versus a private one? There might be some vertical industries, like finance or health, that have become accustomed to protecting data, but does this have implications for the public sector as well?

Room: Yes, the public sector is regulated by this. There's a separate directive that’s been adopted to cover policing and law enforcement, but the public sector has been in scope for a very long time now.

Gardner: How does one go about the solution on a bit more granular level? Someone mentioned the idea of the data-protection officer. Do we have any examples or methodologies that make for a good approach to this, both at the tactical level of compliance but also at the larger strategic level of a better total data and security posture? What do we do, what’s the idea of a data-protection officer or office, and is that a first step -- or how does one begin?

Compliance issue

Room: We're stressing that data [management] view to entities. This is a compliance issue, and there are three legs to the stool. They need to understand the economic goals that they have through the use of data or from data itself. So, economically, what are they trying to do?

The second issue is the question of risk, and where does our risk appetite lie in the context of the economic issues? And then, the third is obligation. So, compliance. It’s really important that these three things be dealt with or considered at the very beginning and at the same time.

Think about the idea simply of risk management. If we were to look at risk management in isolation of an economic goal, you could easily build a technology system that doesn’t actually deliver any gain. A good example would be personalization and customer insights. There is a huge amount of risk associated with that, and if you didn’t have the economic voice within the conversation, you could easily fail to build the right kind of insight or personalization engine. So, bringing this together is really important.

Once you've brought those things together in the conversation, the question is what is your vision, what’s your desired end-state, what is it that you're trying to achieve in light of those three things? Then, you build it out from there. What a lot of entities are doing is making tactical decisions absent the strategic decision. We know that, in a tactical sense, it’s incredibly important to do data mapping and data analysis.

We feel at PwC that that's a really critical step to take, but you want to be doing that data mapping in the context of a strategic view, because it affects the order of priority and how you tackle the work. Some non-obvious matters may turn out to be higher priorities than data mapping if you take the proper strategic view.

A specific example of that would be complaint handling. Not many people have complaint handling on the agenda -- how we operate inside the call center, for instance. If people are cross, handling that is probably a much more important strategic matter at the very beginning than some of the more obvious steps you might take. Bringing those things forward and having a vision for a desired end state will tell you which steps to take and in what order.

Gardner: Tim, this isn't something you buy out of a box. The security implications of being able to establish that a breach has taken place in as little as 72 hours sound to me like they involve an awful lot more than a product or service. How should one approach this from the security-culture perspective, and how should one start?

Grieveson: You're absolutely right. This is not a single product or a point solution. You really have to bake it into the culture of your organization and focus not just on single solutions, but actually the end-to-end interactions between the user, the data, and the application of the data.

If you do that, what you're starting to look at is how to build things in a safe, secure manner, but also how to build them to enable your business to do something. There's no point in building a data lake, for example, and gathering all this data unless you actually derive from it some insight that is actionable and measured against business outcomes.

I actually don't use the word "security" often when I'm talking to customers. I'll talk about "protection," whether that's protection of revenue or growing new markets. I put it into business language rather than technology language. That's the first thing, because technology language puts people off.

What are you protecting?

The second thing is to understand what it is you're going to protect and why, and where it resides, and then start to build the culture from the top down and also from the bottom up. It's not just the data-protection officer's problem or issue to deal with. It's not just the CIO's or the CISO's, but it's building a culture in your organization where it becomes normal, everyday business. Good security is good business.

Once you've done that, remember this is not a project; it's not do-it-once-and-forget-it. It's really an evolving journey. It's not a matter of getting to the point where you have that checkbox to say, yes, you're complying. It's absolutely about continuing to look at how you're doing your business and continuing to look at your data as new markets or new data come on.

You have to reassess where you are in this structure. That’s really important, but the key thing for me is that if you focus on that data and those interactions, you have less of a conversation about the technology. The technology is an enabler, but you do need a good mix of people, process, and technology to deliver good security in a data-driven organization.

Gardner: Given that this cuts across different groups within a large organization that may not have had much interaction in the past -- and given that this is not just technology but process and people, as Tim mentioned -- how does the relationship between HPE and PwC come together to help organizations solve this? Perhaps you can describe the alliance a bit for us.

Kemp: I'm a lawyer by profession. I very much respect our ability to collaborate with PwC, which is a global alliance [partner] of ours. On the basis of that, I regard Stewart and his very considerable department as providing a translation of the regulation into deliverables. What is it that you want me to do, what does the regulation say? It may say that you have to safeguard information. What does that entail? There are three major steps here.

One is external counsel's guidance translating what the regulation means into a set of deliverables.

Secondly, a privacy audit. This has been around as a cultural concept since the 1960s: where are you already in terms of your management of PII? When that is complete, we can introduce the technology that you might need in order to make this work. That is really where HPE comes in. That's the sequence.

Then, if we just look very simply at the IT architecture, what's needed? Well, as we said right at the beginning, my view is that this falls under the records-management coherence strategy of an organization. One of the first things is, can you connect to the sources of data around your organization, given that most entities have grown up by acquisition and not organically? Can you actually connect to and read the information where it is, wherever it is around the world, in whatever silo?

For example, Volkswagen had a little problem in relation to diesel emissions. One of the features there is not so much how they defend themselves, but how they get to the basic information, in many countries, as to whether a particular sales director knew about the issue or not.

Capturing data

So, connectivity is one point. The second thing is being able to capture information without moving it across borders. That's where technology that handles the metadata of the basic components of a particular piece of digital information applies: can the data be captured, whether it is structured or unstructured? Let's bear in mind that when we're talking about data, it could be audio, visual, or alphanumeric. Can we bring that together, and can we capture it?

Then, can we apply rules to it? If you had to say in a nutshell what HPE is doing in collaboration with PwC, we're doing policy enforcement. Whatever Stewart and his professional colleagues advise in relation to the deliverables, we're seeking to effect that and make it work across the organization.

That's an easy way to describe it, even to non-technical people. So General Counsel, or the Head of Compliance or Risk, can appreciate the three steps of the legal interpretation, the privacy audit, and then the architecture. Then comes the building up of the acquisition of information, in order to make sure that the standards set by PwC are actually being complied with.
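
As a rough sketch of what rule-based policy enforcement over captured metadata can look like -- purely illustrative; the record fields and rules are assumptions, not HPE's actual product behavior -- consider:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DocMeta:
    """Metadata captured in place -- the underlying document never moves."""
    doc_id: str
    country: str          # where the data physically resides
    contains_pii: bool
    created: date

# Hypothetical policy: PII older than seven years is flagged for deletion,
# and PII processed outside its country of residence needs safeguards.
RETENTION_YEARS = 7

def evaluate(doc: DocMeta, processing_country: str, today: date) -> list:
    findings = []
    if doc.contains_pii:
        age_years = (today - doc.created).days / 365.25
        if age_years > RETENTION_YEARS:
            findings.append("retention exceeded: schedule deletion")
        if processing_country != doc.country:
            findings.append("cross-border processing: requires safeguards")
    return findings

print(evaluate(DocMeta("inv-001", "DE", True, date(2008, 3, 1)), "US", date(2016, 7, 1)))
```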

Gardner: We're coming up toward the end of our time, but I really wanted to get into some examples to describe what it looks like when an organization does this correctly, what the metrics of success are. How do you measure this state of compliance and attainment? Do any of you have an example of an organization that has gone through many of these paces, has acquired the right process, technology and culture, and what that looks like when you get there?

Room: There are various metrics that people have put in place, and it depends which principles you're talking about. We obviously have security, which we've spoken about quite a lot here, but there are other principles: accuracy, retention, deletion, transfers, and so on.

But one of the metrics that entities are putting in, which is non-security controlled, is about the number of people who are successfully participating in training sessions and passing the little examination at the very end. The reason that key performance indicator (KPI) is important is that during enforcement cases, when things go wrong -- and there are lots and lots of these cases out there -- the same kind of challenges are presented by the regulators and by litigants, and that's an example of one of them.

So, when you're building your metrics and your KPIs, it's important to think not just about the measures that would achieve operational privacy and operational security, but also think about the metrics that people who would be adverse to you would understand: judges, regulators, litigants, etc. There are essentially two kinds of metrics, operational results metrics, but also the judgment metrics that people may apply to you.

Gardner: At HPE, do you have any examples or perhaps you can describe why we think that doing this correctly could get you into a better competitive business position? What is it about doing this that not only allows you to be legally compliant, but also puts you in an advantageous position in a market and in terms of innovation and execution?

Biggest sanction

Kemp: If I could quote some of our clients, especially in the Nordic Region, there are about six major reasons for paying strict and urgent attention to this particular subject. One of them, listening to my clients, has to do with compliance. That is the most obvious one. That is the one that has the biggest sanction.

But there are another five arguments -- I won't go into all of them -- which have to do with advancement of the business. For example, a major media company in Finland said: "If we could only say on our website that we're GDPR-compliant, that would materially increase customer belief in our respect for their information, and it would give us a market advantage." So it's actually advancing the business.

The second aspect, which I anticipated, but I've also heard from corporations, is that in due course, if it's not here already, there might be a case where governments would say that if you're not GDPR compliant, then you can’t bid on our contracts.

The third might be, as Tim was referring to earlier, what if you wanted to make the best use of this information? There's even a possibility of corporations taking the PII, making sure it's fully anonymized or pseudonymized, and then mixing it with other freely available information, such as Facebook data, and actually saying to a customer: "David, we would like to use your PII, fully anonymized. We can prove to you that we have followed the PwC legal guidance. And if we do use this information for analytics, we might even want to pay you for it." What are you doing? You're increasing the bonding and loyalty with your customers.

So, we should think about the upsides of the business advancement, which ironically is coming out of a regulation, which may not be so obvious.

Gardner: Let’s close out with some practical hints as to how to get started, where to find more resources, both on the GDPR, but also how to attain a better data privacy capability. Any thoughts about where we go to begin the process?

Kemp: I would say that in the public domain, the EU is extremely good at promulgating information about the regulation itself coming in and providing some basic interpretation. But then, I would hand it on to Stewart in terms of what PwC Legal is already providing in the public domain.

Room: We have two accelerators that we've built to help entities go forward. The first is our GDPR Readiness Assessment Tool (RAT), and lots of multinationals run the RAT at the very beginning of their GDPR programs.
What does it do? It asks 70 key questions across the two domains of operational and legal privacy. Privacy architecture and privacy principles are mapped into a maturity metric that assesses people's confidence about where they stand. All of that is then mapped into the articles and recitals of the GDPR. Lots of our clients use the RAT.

The second accelerator is the PwC Privacy and Security Enforcement Tracker. We've been tracking the results of regulatory cases and litigation in this area over many years. That gives us a very granular insight into the real priorities of regulators and litigants in general.

Using those two tools at the very beginning gives you a good insight into where you are and what your risk priorities are.

Gardner: Last word to you, Tim. Any thoughts on getting started -- resources, places to go to get on your journey or further along?

The whole organization

Grieveson: You need to involve the whole organization. As I said earlier on, it’s not just about passing it over to the data-protection officer. You need to have the buy-in from every part of the organization. Clearly, working with organizations who understand the GDPR and the legal implications, such as the collaboration between PwC and HPE, is where I would go.

When I was in the seat as a CISO -- I'm not a legal expert -- one of the first things I did was go and get that expertise and bring it in. Probably the first place I would start is getting buy-in from the business and making sure that you have the right people around the table to help you on the journey.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Monday, June 27, 2016

CA streamlines cloud and hybrid IT infrastructure adoption through better holistic services monitoring

New capabilities in CA Unified Infrastructure Management (CA UIM) are designed to help enterprises adopt cloud more rapidly and better manage hybrid IT infrastructure heterogeneity across several major cloud environments.

Enterprises and SMBs are now clamoring for hybrid cloud benefits, due to the ability to focus on apps and to gain speed for new business initiatives, says Stephen Orban, Global Head of Enterprise Strategy at AWS.

"Going cloud-first allows organizations to focus on the apps that make the business run, says Orban. Using hybrid computing, the burden of proof soon shifts to why should we use cloud for more of IT," he says.

As has been the case with legacy IT for decades, the better the overall management, the better the adoption success, productivity, and return on investment (ROI) for IT systems and the apps they support -- no matter their location or IT architecture. This same truth is now being applied to solve the cloud heterogeneity problem, just as it did the legacy platform heterogeneity problem. The total visibility solution may be even more powerful in this new architectural era.

Cloud-first is business-first

The stakes are now even higher. As you migrate to the cloud, one weak link in a complex hybrid cloud deployment can ruin the end-user experience, says Ali Siddiqui, General Manager, Agile Operations, at CA: "By providing insight across the performance of all of an organization's IT resources in a single and unified view, CA UIM gives users the power to choose the right mix of modern cloud enablement technologies."

CA UIM reduces complexity of hybrid infrastructures by providing visibility across on-premises, private-, and public-cloud infrastructures through a single console UI. Such insight enables users to adopt new technologies and expand monitoring configurations across existing and new IT resource elements. CA expects the solution to reduce the need for multiple monitoring tools. [Disclosure: CA is a sponsor of BriefingsDirect.]

"Keep your life simple from a monitoring and management perspective, regardless of your hybrid cloud [topology]," said Michael Morris, Senior Director Product Management, at CA Technologies in a recent webcast.

To grease the skids to hybrid cloud adoption, CA UIM now supports advanced performance monitoring of Docker containers, Pure Storage arrays, Nutanix hyperconverged systems, and OpenStack cloud environments, along with additional capabilities for Amazon Web Services (AWS) cloud infrastructures, CA Technologies announced last week.

CA is putting its IT systems management muscle behind the problem of migrating from data centers to the cloud, and then better supporting hybrid models, says Siddiqui. The "single pane of glass" monitoring approach that CA is delivering allows measurement and enforcement of service-level agreements (SLAs) before and after cloud migration. This way, continuity of service and IT value-add can be preserved and measured, he added.

Managing a cloud ecosystem

"Using advanced monitoring and management can significantly cut costs of moving to cloud," says Siddiqui.

Indeed, CA is working with several prominent cloud and IT infrastructure partners to make the growing diversity of cloud implementations a positive, not a drawback. For example, "Virtualization tools are too constrained to specific hypervisors, so you need total cloud visibility," says Steve Kaplan, Vice President of Client Strategy at Nutanix, of CA's new offerings.

And it's not all performance monitoring. Enhancements to CA UIM's coverage of AWS cloud infrastructures include billing metrics and support for additional services that provide deeper actionable insights on cloud brokering.

CA UIM now also provides:

  • Service-centric and unified analytics capabilities that rapidly identify the root cause of performance issues, resulting in a faster time to repair and better end-user experience
  • Out-of-the-box support for more than 140 on-premises and cloud technologies

  • Templates for easier configuration of monitors that can be applied to groups of disparate systems, as sketched below
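
A hypothetical illustration of the template idea -- this is not CA UIM's actual configuration format; the names and thresholds are invented -- might look like this:

```python
# Hypothetical monitor template -- illustrative only, not CA UIM's real format.
cpu_template = {
    "metric": "cpu_utilization_pct",
    "interval_s": 60,
    "warning_threshold": 80,
    "critical_threshold": 95,
}

# The same template fans out to disparate targets; only the connection
# details differ per system.
targets = [
    {"name": "on-prem-db01", "type": "linux_host"},
    {"name": "aws-web-asg", "type": "ec2_group"},
    {"name": "docker-payments", "type": "container"},
]

monitors = [{**cpu_template, "target": t} for t in targets]
for m in monitors:
    print(f"monitor {m['target']['name']}: warn at {m['warning_threshold']}%")
```
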
What's more, to ensure the reliability of networks such as SDN/NFV that connect and scale hybrid environments, CA has also delivered CA Virtual Network Assurance, which provides a common view of dynamic changes across virtual and physical network stacks.


Friday, June 24, 2016

Here's how two part-time DBAs maintain mobile app ad platform Tapjoy’s massive data needs

The next BriefingsDirect Voice of the Customer big data case study discussion examines how mobile app advertising platform Tapjoy handles fast and massive data -- some two dozen terabytes per day -- with just two part-time database administrators (DBAs).

Examine how Tapjoy’s data-driven business of serving 500 million global mobile users -- more than 1.5 million ad engagements per day, on a data volume of 120 terabytes -- runs with extreme efficiency.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about how high scale and complexity meet minimal labor for building user and advertiser loyalty, we're joined by David Abercrombie, Principal Data Analytics Engineer at Tapjoy in San Francisco. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Mobile advertising has really been a major growth area, perhaps more than any other type of advertising. We hear a lot about advertising waning, but not mobile app advertising. How does Tapjoy and its platform help contribute to the success of what we're seeing in the mobile app ad space?

Abercrombie: The key to Tapjoy’s success is engaging users and rewarding them for engaging with an ad. Our advertising model is that you engage with an ad and then typically get some sort of reward: virtual currency in the game you're playing, or some sort of discount.

We actually have the kind of ads that lead users to seek us out to engage with the ads and get their rewards.

Gardner: So this is quite a bit different than a static presented ad. This is something that has a two-way street, maybe multiple directions of information coming and going. Why the analysis? Why is that so important? And why the speed of analysis?

Abercrombie: We have basically three types of customers. We have the app publishers who want to monetize and get money from displaying ads. We have the advertisers who need to get their message out and pay for that. Then, of course, we have the users who want to engage with the ads and get their rewards.

The key to Tapjoy’s success is being able to balance the needs of all of these disparate users. We can't charge the advertisers too much for their ads, even though the monetizers would like that. It's a delicate balancing act, and that can only be done through big-data analysis, careful optimization, and careful monitoring of the ad network's assets and operation.

Gardner: Before we learn more about the analytics, tell us a bit more about what role Tapjoy plays specifically in what looks like an ecosystem play for placing, evaluating, and monetizing app ads? What is it specifically that you do in this bigger app ad function?

Ad engagement model

Abercrombie: Specifically what Tapjoy does is enable this rewarded ad engagement model, so that the advertisers know that people are going to be paying attention to their ads and so that the publishers know that the ads we're displaying are compatible with their app and are not going to produce a jarring experience. We want everybody to be happy -- the publishers, the advertisers, and the users. That’s a delicate compromise that’s Tapjoy’s strength.

Gardner: And when you get an end user to do something, to take an action, that’s very powerful, not only because you're getting them to do what you wanted, but you can evaluate what they did under what circumstances and so forth. Tell us about the model of the end user specifically. What is it about engaging with them that leads to the data -- which we will get to in a moment?
Abercrombie: In our model of the user, we talk about long-term value. So even though it may be a new user who has just started with us, maybe their first engagement, we like to look at them in terms of their long-term value, both to the publishers and the advertiser.

We don’t want people who are just engaging with the ad and going away, getting what they want and not really caring about it. Rather, we want good users who will continue their engagement and continue this process. Once again, that takes some fairly sophisticated machine-learning algorithms and very powerful inferences to be able to assess the long-term value.

As an example, we have our publishers who are also advertisers. They're advertising their app within our platform and for them the conversion event, what they are looking for, is a download. What we're trying to do is to offer them users who will not only download the game once to get that initial payoff reward, but will value the download and continue to use it again and again.

So all of our models are designed with that end in mind -- to look at the long-term value of the user, not just the immediate conversion at this instant in time.
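
As a toy illustration of long-term-value scoring -- this is not Tapjoy's actual model; the features and numbers are invented for the example -- a simple regression over early-engagement signals might look like:

```python
# Toy long-term-value model -- invented features and data, for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

# Features per user: [day-1 engagements, distinct apps, rewarded videos watched]
X_train = np.array([[3, 1, 2], [10, 4, 8], [1, 1, 0], [7, 2, 5]])
y_train = np.array([0.50, 4.20, 0.00, 2.10])  # observed 90-day revenue ($)

model = LinearRegression().fit(X_train, y_train)

# Score a brand-new user on their first engagement.
new_user = np.array([[5, 2, 3]])
print(f"predicted 90-day LTV: ${model.predict(new_user)[0]:.2f}")
```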

Gardner: So perhaps it’s a bit of a misnomer to talk about ads in apps. We're really talking about a value-add function in the app itself.

Abercrombie: Right. The people who are advertising don’t want people to just see their ads. They want people to follow up with whatever it is they're advertising. If it’s another app, they want good users for whom that app is relevant and useful.

That’s really the way we look at it. That’s the way to enhance the overall experience in the long-term. We're not just in it for the short-term. We're looking at developing a good solid user base, a good set of users who engage thoroughly.

Gardner: And as I said in my set-up, there's nothing hotter in all of advertising than mobile apps and how to do this right. It’s early innings, but clearly the stakes are very high.

A tough business

Abercrombie: And it’s a tough business. People are saturated. Many people don’t want ads. Some of the business models are difficult to master.

For instance, there may be a sequence of multiple ad units -- a video followed by another ad to download something. It becomes a very tricky thing to balance the financing here. If it were just a simple pass-through where we take a cut, that would be trivial, but that doesn't work in today's market. There are more sophisticated approaches, which do involve business risk.

If we reward the user, based on the fact that they're watching the video, but then they don't download the app, then we don't get money. So we have to look very carefully at the complexity of the whole interaction to make it as smooth and rewarding as possible, so that the thing works. That's difficult to do.

Gardner: So we're in a dynamic, fast-growing, fairly fresh, new industry. Knowing what's going to happen before it happens is always fun in almost any industry, but in this case, it seems with those high stakes and to make that monetization happen, it’s particularly important.
Tell me now about gathering such large amounts of data, being able to work with it, and then allowing analysis to happen very swiftly. How do you go about making that possible?

Abercrombie: Our data architecture is relatively standard for this type of clickstream operation. There is some data that can be put directly into a transactional database in real time, but typically that's only when you get to the very bottom of the funnel -- the conversion stuff. All the clickstream stuff gets written as JSON-formatted log files, gets swept up by a queuing system, and then put into our data systems.

Our legacy system involved a homegrown queuing system dumping data into HDFS. From there, we would extract and load CSVs into Vertica. As with so many other organizations, we're moving to more real-time operations. Our queuing system has evolved from a couple of different homegrown applications, and now we're implementing Apache Kafka.
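
As a rough sketch of that ingestion step -- the topic name and event fields are assumptions for illustration, using the common kafka-python client rather than anything Tapjoy has confirmed -- a producer writing one clickstream event might look like:

```python
import json
import time

from kafka import KafkaProducer  # kafka-python package

producer = KafkaProducer(
    bootstrap_servers=["kafka01:9092"],      # hypothetical broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# One clickstream event, shaped like the JSON log lines described above.
event = {
    "ts_ms": int(time.time() * 1000),        # millisecond timestamp
    "event_type": "ad_click",
    "user_id": "u-123",
    "app_id": "a-456",
    "ad_unit": "rewarded_video",
}
producer.send("clickstream", event)          # topic name is an assumption
producer.flush()
```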

We use Spark as part of our infrastructure, as sort of a hub, if you will, where data is farmed out to other systems, including a real-time, in-memory SQL database, which is fairly new to us this year. Then, we're still putting data in HDFS, and that's where the machine learning occurs. From there, we're bringing it into Vertica.

In Vertica -- and our Vertica cluster has two main purposes -- there is the operational data store, which has the raw, flat tables that are one row for every event, with the millisecond timestamps and the IDs of all the different entities involved.

From that operational data store, we do a pure SQL ETL extract into kind of an old-school star schema within Vertica, the same database.
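
A minimal sketch of what one such SQL-only ETL step could look like -- the table and column names are invented for illustration, and vertica-python is just one common way to drive Vertica from Python; none of this is Tapjoy's confirmed schema:

```python
import vertica_python  # pip install vertica-python

# Hypothetical load: aggregate raw events from the operational data store
# into a star-schema fact table, entirely in SQL inside Vertica.
LOAD_FACT = """
INSERT INTO star.fact_engagement (date_key, app_key, advertiser_key, engagements, revenue)
SELECT d.date_key, a.app_key, adv.advertiser_key, COUNT(*), SUM(e.revenue)
FROM ods.raw_events e
JOIN star.dim_date d         ON d.cal_date = e.event_ts::DATE
JOIN star.dim_app a          ON a.app_id = e.app_id
JOIN star.dim_advertiser adv ON adv.advertiser_id = e.advertiser_id
WHERE e.event_ts >= '2016-06-01' AND e.event_ts < '2016-06-02'
GROUP BY 1, 2, 3;
"""

conn_info = {"host": "vertica01", "port": 5433, "user": "etl",
             "password": "...", "database": "dw"}

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.execute(LOAD_FACT)
    conn.commit()
```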

Pure SQL

So our business intelligence (BI) ETL is pure SQL and goes into a full-fledged snowflake schema, moderately denormalized, with all the old-school bells and whistles -- type 1 and type 2 slowly changing dimensions. With Vertica, we're able to denormalize that data warehouse to a large degree.
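
For readers unfamiliar with the type 2 technique mentioned here, a simplified sketch -- hypothetical table and columns, not Tapjoy's schema -- of versioning a dimension row when an attribute changes:

```python
# Hypothetical type 2 slowly-changing-dimension update: when an app's
# publisher changes, close out the current row and insert a new version.
SCD2_EXPIRE = """
UPDATE star.dim_app
SET    valid_to = CURRENT_DATE, is_current = FALSE
WHERE  app_id = 'a-456' AND is_current = TRUE;
"""

SCD2_INSERT = """
INSERT INTO star.dim_app (app_id, app_name, publisher, valid_from, valid_to, is_current)
VALUES ('a-456', 'Puzzle Quest', 'NewPub Inc.', CURRENT_DATE, '9999-12-31', TRUE);
"""

# History is preserved: old fact rows still join to the dimension version
# that was current when they occurred, via the valid_from/valid_to range.
```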

Sitting on top of that we have a BI tool. We use MicroStrategy, for which we have defined our various metrics and our various attributes, and it’s very adept at knowing exactly which fact table and which dimensions to join.

So we have sort of a hybrid architecture. We have everything from real-time, in-memory SQL to Hadoop, with all of its machine learning and our algorithmic pipelines, and then we have kind of the old-school data warehouse with the operational data store and the star schema.

Gardner: So it's a complex, innovative, custom architectural approach, and yet I'm astonished that you're running and using Vertica in multiple ways with two part-time DBAs. How is it possible that you need so little labor, given the topology you just described?

Abercrombie: Well, we found Vertica very easy to manage. It has been very well-behaved, very stable.

For instance, we don’t even really use the Management Console, because there is not enough to manage. Our cluster is about 120 terabytes. It’s only on eight nodes and it’s pretty much trouble free.

One of the part-time DBAs deals with the operating-system-level stuff -- patches, cluster recovery, those sorts of issues. The other part-time DBA is me. I deal more with data-structure design, SQL tuning, and Vertica training for our staff.

In terms of ad-hoc users of our Vertica database, we have well over 100 people who have the ability to run any query they want at any time into the Vertica database.

When we first started out, we tried running Vertica in Amazon EC2. Mind you, this was four or five years ago; Amazon EC2 was not where it is today. It failed. It was very difficult to manage, and there were perplexing problems that we couldn't solve. So we moved our Vertica cluster, and essentially all of our big-data systems, out of the cloud onto dedicated hardware, where they are much easier to manage and much easier to provision with the proper resources.

Then, at one point in our history, when we built a dedicated hardware cluster for Vertica, we failed to properly heed the hardware planning guide and did not provision enough disk I/O bandwidth. In that situation, Vertica is unstable, and we had a lot of problems.

But once we got the proper disk I/O, it has been smooth sailing. I can’t even remember the last time we even had a node drop out. It has been rock solid. I was able to go on a vacation for three weeks recently and know that there would be no problem, and there was no problem.

Gardner: The ultimate key performance indicator (KPI), "I was able to go on vacation."

Fairly resilient

Abercrombie: Exactly. And with the proper hardware design, HPE Vertica is fairly resilient against out-of-control queries. There was a time when half my time was spent monitoring for slow queries, but again, with the proper hardware, it's smooth sailing. I don’t even bother with that stuff anymore.

Our MicroStrategy BI tool writes very good SQL. Part of the key to our success with this BI portion is designing the Vertica schema and the MicroStrategy metadata layer to take advantage of each other’s strengths and avoid each other’s weaknesses. So that really was key to the stable, exceptional performance we get. I basically get no complaints of slow queries from my BI tool. No problem.

Gardner: The right kind of problem to have.

Abercrombie: Yes.

Gardner: Okay, now that we've heard quite a bit about how you're doing this, I'd like to learn about some of the paybacks when it's done properly and running well: faster SQL queries, reduced ETL load times, and the ability to monetize and to help your customers create advertising programs that are acceptable and popular. What are the paybacks, technically and then in business terms?

Abercrombie: In order to get those paybacks, a key element was confidence in the data, the results that we were shipping out. The only way to get that confidence was by having highly accurate data and extensive quality control (QC) in the ETL.

What that also means is that while a product is under development and its instrumentation isn't ready yet, that data doesn't make it into our BI tool. You can only get at it through ad-hoc queries.

So the benefit has been a very clear understanding of the day-to-day operations of our ad network, both for our internal monitoring -- knowing when things are behaving properly, when the instrumentation is working as expected, and when the queues are running -- and for our customers.

Because of the flexibility we get from a traditional BI system, with 500 metrics across a couple of dozen dimensions, our customers -- the publishers and the advertisers -- get incredible detail, customized exactly the way they need it for ingestion into their systems, or to help them understand how Tapjoy is serving them. Again, that comes from confidence in the data.

Gardner: When you have more data and better analytics, you can create better products. Where might we look next to where you take this? I don’t expect you to pre-announce anything, but where can you now take these capabilities as a business and maybe even expand into other activities on a mobile endpoint?

Flexibility in algorithms

Abercrombie: As we expand our business and move into new areas, what we really need is flexibility in our algorithms and the way we deal with some of our real-time decision making.

So one area that's new to us this year is an in-memory SQL database, MemSQL. Some of our old real-time ad optimization was based on pre-calculating data and serving it up through an HBase key-value store. Now that we can do real-time aggregation queries using SQL, the logic is easy to understand, easy to modify, very expressive, and very transparent. That gives us more flexibility in fine-tuning our real-time decision-making algorithms, which is absolutely necessary.
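
As an illustration of the kind of on-demand aggregation that replaces those pre-calculated key-value lookups: MemSQL speaks the MySQL wire protocol, so a stock client such as PyMySQL can run the query. The table and column names below are assumptions.

    import pymysql

    conn = pymysql.connect(host="memsql-host", user="app",
                           password="...", database="realtime")

    # Conversion rate per ad unit over the last five minutes, computed live
    # rather than pre-aggregated into a key-value store.
    with conn.cursor() as cur:
        cur.execute("""
            SELECT ad_unit_id,
                   COUNT(*)                  AS impressions,
                   SUM(converted)            AS conversions,
                   SUM(converted) / COUNT(*) AS conversion_rate
            FROM   ad_events
            WHERE  event_time > NOW() - INTERVAL 5 MINUTE
            GROUP  BY ad_unit_id
            ORDER  BY impressions DESC
        """)
        for ad_unit_id, impressions, conversions, rate in cur.fetchall():
            print(ad_unit_id, impressions, conversions, rate)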

As an example, we acquired a company in Korea called 5Rocks that does app analytics and tracks users within the app -- what level they're on, what activities they're doing, and what they enjoy -- with an eye toward in-app purchase optimization.
And so we're blending in-app purchase optimization with traditional ad-network optimization, and the two have different rules and different constraints. So we really need the flexibility and expressiveness of our real-time decision-making systems.

Gardner: One last question. You mentioned machine learning earlier. Do you see that becoming more prominent in what you do and how you're working with data scientists, and how might that expand in terms of where you employ it?

Abercrombie: Tapjoy started with machine learning; our data science is machine learning. Our predictive-algorithm team is about six times larger than our traditional Vertica BI team. Mostly what we do at Tapjoy is predictive analytics and various machine-learning tasks, so we wouldn't be alive without it. And we've expanded. We're not shifting in one direction or another; it's apples and oranges, and there's a place for both.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.
