Thursday, July 7, 2016

How European GDPR compliance enables enterprises to both gain data privacy and improve their bottom lines

The next BriefingsDirect security market transformation discussion focuses on the implications of the European Parliament’s recent approval of the General Data Protection Regulation or GDPR.

This sweeping April 2016 law establishes a fundamental right to personal data protection for European Union (EU) citizens. It gives enterprises that hold personal data on any of these people just two years to reach privacy compliance -- or face stiff financial penalties.

But while organizations must work quickly to comply with GDPR, the strategic benefits of doing so could stretch far beyond data-privacy issues alone. Attaining a far stronger general security posture -- one that also provides a business competitive advantage -- may well be the more impactful implication.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

We've assembled a panel of cybersecurity and legal experts to explore the new EU data privacy regulation and discuss ways that companies can begin to extend these needed compliance measures into essential business benefits.

Here to help us sort through the practical path of working within the requirements of a single digital market for the EU are: Tim Grieveson, Chief Cyber and Security Strategist, Enterprise Security Products EMEA, at Hewlett Packard Enterprise (HPE); David Kemp, EMEA Specialist Business Consultant at HPE, and Stewart Room, Global Head of Cybersecurity and Data Protection at PwC Legal. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, the GDPR could mean significant financial penalties in less than two years if organizations don’t protect all of their targeted data. But how can large organizations look at this under a larger umbrella, perhaps looking at this as a way of improving their own security posture?

Grieveson: It’s a great opportunity for organizations to take a step back and review the handling of personal information and security as a whole. Historically, security has been about locking things down and saying no.

We need to break that mold. But, this is an opportunity, because it’s pan-European, to take a step back, look at the controls that we have in place, look at the people, look at the technology holistically, and look at identifying opportunities where we can help to drive new revenues for the organization, but doing it in a safe and secure manner.

Gardner: David, is there much difference between privacy and security? If one has to comply with a regulation, doesn’t that also give them the ability to better master and control their own internal destiny when it comes to digital assets?

Kemp: Well, that’s precisely what a major European insurance company headquartered in London said to us the other day. They regard GDPR as a catalyst for their own organization to appreciate that the records management at the heart of their organization is chaotic. Furthermore, what they're looking at, hopefully with guidance from PwC Legal, is for us to provide them with an ability to enforce the policy of GDPR, but expand this out further into a major records-management facility.

Gardner: And Stewart, wouldn’t your own legal requirements for any number of reasons be bolstered by having this better management and privacy capability?

Room: The GDPR obviously is a legal regime. So it’s going to make the legal focus much, much greater in organizations. The idea that the GDPR can be a catalyst for wider business-enabling change must be right. There are a lot of people we see on the client side who have been waiting for the big story, to get over the silos, to develop more holistic treatment for data and security. This is just going to be great -- regardless of the legal components -- for businesses that want to approach it with the right kind of mindset.

Kemp: Just to complement that is a recognition that I heard the other day, which was of a corporate client saying, "I get it. If we could install a facility that would help us with this particular regulation, to a certain extent relying once again on external counsel to assist us, we could almost feed any other regulation into the same engine."

That is very material in terms of getting sponsorship, buy-in, and interest from the front of the business, because this isn’t a facility just for this one particular type of regulation. There’s so much more that could be engaged on.

Room: The important part, though, is that it’s a cultural shift, a mindset. It’s not a box-ticking exercise. It’s absolutely an opportunity, if you think of it in that mindset, of looking holistically. You can really maximize the opportunities that are out there.

Gardner: And because we have a global audience for our discussion, I think that this might be the point on the arrow for a much larger market than the EU. Let’s learn about what this entails, because not everyone is familiar with it yet. So in a nutshell, what does this new law require large companies to do? Tim, would you like to take that?

Protecting information

Grieveson: It’s ultimately about protecting European citizens' private and personal information. The legislation gives some guidance around how to protect data. It talks about encryption and anonymization of the information, should that inevitable breach happen, but it also talks about how to enable a quicker response for a breach.
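To make that guidance concrete, here is a minimal Python sketch of pseudonymizing and encrypting a customer record before it leaves a controlled environment. It is illustrative only: the field names are invented, it assumes the open-source cryptography package, and a real deployment would hold its keys in a key-management service rather than in code.

    import hashlib
    import hmac
    import json

    from cryptography.fernet import Fernet  # assumes the 'cryptography' package is installed

    # Keys would normally live in a key-management service, never in source code.
    PSEUDONYM_KEY = b"secret-hmac-key-held-separately"
    ENCRYPTION_KEY = Fernet.generate_key()
    fernet = Fernet(ENCRYPTION_KEY)

    def pseudonymize(identifier: str) -> str:
        """Replace a direct identifier with a keyed, non-reversible token."""
        return hmac.new(PSEUDONYM_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

    record = {"name": "Jane Doe", "email": "jane@example.com", "loyalty_id": "LX-1042"}

    protected = {
        # Analytics can still join and count on the token without seeing the person.
        "subject_token": pseudonymize(record["email"]),
        # The full record is recoverable only by whoever holds the encryption key.
        "payload": fernet.encrypt(json.dumps(record).encode("utf-8")),
    }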

To go back to David’s point earlier on, the key part of this is really around records management. It’s understanding what information you have and where, classifying that information, and knowing what you need to do with it -- ultimately because of the bad guys out there. In my world as an ex-CIO and an ex-CISO, I was always looking to protect myself from the bad guys, who kept changing their processes to monetize what they steal.

They're ultimately out to steal something, whether it be credit card information, personal information, or intellectual property (IP). Organizations often don’t understand what information they have where or who owns it, and quite often, they don’t actually value that data. So, this is a great approach to help them do that.

Gardner: And what happens if they don’t comply? This is a fairly stiff penalty.

Grieveson: It is. Up to four percent of the parent company’s annual revenue is exposed as part of a fine, but also there's a mandatory breach notification, where companies need to inform the authorities within 72 hours of a breach.

If we think of the Ponemon Report, the average time that the bad guy is inside an organization is 243 days. Clearly, that’s going to be a challenge for lots of organizations that don’t know they have been breached. And remediation after that inevitable breach happens takes, on average, anywhere from 40 to 47 days globally.

We're seeing that trend going in the wrong direction. We're seeing it getting more expensive. On average, a breach costs in excess of US$7.7 million, but we are also seeing the time to remediate going up.

This is what I talked about with this cultural change in thinking. We need to get much smarter about understanding the data we have and, when we have that inevitable breach, protecting the data.

Gardner: Stewart, how does this affect companies that might not just be based in the EU countries, companies that deal with any customers, or supply chain partners, alliances, the ecosystem. Give us a sense of the concentric circles of impact that this pertains to inside the EU and beyond?

Room: Yes, the law has global effect. It’s not about just regulating European activities or protecting or controlling European data. The way it works is that any entity or data controller that’s outside of Europe and that targets Europe for goods and services will be directly regulated. It doesn’t need to have an establishment, a physical presence, in Europe; it targets the goods and services. Or, if that entity profiles and tracks the activity of European citizens on the web, it's regulated as well. So, there are entities that are physically not in Europe.

Any entity outside of Europe that receives European data or data from Europe for data processing is regulated as well. Then, any entity that’s outside of Europe that exports data into Europe is going to be regulated as well.

So it has global effect. It’s not about the physical boundaries of Europe or the presence only of data in Europe. It’s whether there is an effect on Europe or an effect on European people’s data.

Fringes of the EU

Kemp: If I could add to that, the other point is about those on the fringes of the EU, because that is where this is originating from -- places such as Norway and Switzerland, and even South Africa, with the POPI legislation. These countries are not part of the EU, but as Stewart was saying, because a lot of their trade goes through the EU, they're adopting local regulation to mirror it and provide a level playing field for their corporations.

Gardner: And this notion of a fundamental right to personal data protection, is that something new? Is that a departure and does that vary greatly from country to country or region to region?

Room: This is not a new concept. The European data-protection law was first promulgated in the late 1960s. So, that’s when it was all invented. And the first European legislative instruments about data privacy were in 1973 and 1974.

We've had international data-protection legislation in place since 1980, with the OECD, the Council of Europe in 1981, the Data Protection Directive of 1995. So, we're talking about stuff that is almost two generations old in terms of priority and effect.

The idea that there is a fundamental right to data protection has been articulated expressly within the EU treaties for a while now. So, it’s important that entities don’t fall into the trap of feeling that they're dealing with something new. They're actually doing something with a huge amount of history, and because it has a huge amount of history, both the problems and the solutions are well understood.

If you're dealing with data protection for the first time and feel that this is new, you're probably misaligned with the sophistication of those people who would scrutinize you and be critical of you. It's been around for a long time.

Grieveson: I think it’s fair to say there is other legislation as well in certain industries that makes some organizations much better prepared for dealing with what’s in the new legislation.

For example, in the finance industry, you have payment card industry (PCI) security around credit-card data. So, some companies are going to be better prepared than others, but it still gives us an opportunity as an auditor to go back and look at what you have and where it fits.

Gardner: Let’s look at this through the solution lens. One of the ways that the law apparently makes it possible for this information to leave its protected environment is if it’s properly encrypted. Is there a silver bullet here where if everything is encrypted, that solves your problem, or does that oversimplify things?

No silver bullet

Grieveson: I don’t think there is a silver bullet. Encryption is about disruption, because ultimately, as I said earlier -- if I come at it from a cyber-attack point of view -- the bad guys are out to steal data, and even the most sophisticated technologies can at some point be bypassed.

But what it does do is reduce that impact, and potentially the bad guys will go elsewhere. But remember, this isn't just about the bad guys; it’s also about people who may have done something inadvertently in releasing the data.

Encryption has a part to play, but it’s one of the components. On top of that technology, you need the right people and the right processes, having the data-protection officer in place, and training your business users, your customers, and your suppliers.

The encryption part isn't the only component, but it’s one of the tools in your kit bag to help reduce the likelihood of the data actually being commoditized and monetized.

Gardner: And this concept of the personally identifiable information (PII), how does that play a role, and should companies that haven't been using that as an emphasis perhaps rethink the types of data and the types of identification with it?

Room: The idea of PII is known to US law. It lives inside the US legal environment, and it’s mainly constrained to a number of distinct datasets. My point is that the idea of PII is narrow.

The [EU] data-protection regime is concerned with something else: personal data. Personal data is any information relating to an identifiable living individual. When you look at how the legislation is built, it’s much, much more expansive than the idea of PII -- which seems to be around name, address, Social Security number, credit-card information, things like that -- extending into any online identifier that could be connected to an individual.

The human genome is an example of personal data. It’s important that listeners in a global sense understand the expansiveness of the idea or rather understand that the EU definition of personal data is intended to be highly, highly expansive.

Gardner: And, David Kemp, when we're thinking about where we should focus our efforts first, is this primarily about business-to-consumer (B2C) data, is it about business to business (B2B), less so or more so, or even internally for business to employee (B2E)? Is there a way for us to segment and prioritize among these groups as to what is perhaps the most in peril of being in violation of this new regulation?

Commercial view

Kemp: It’s more a commercial view rather than a legal one. The obvious example will be B2C, where you're dealing with a supermarket like Walmart in the US or Coop or Waitrose in Europe, for example. That is very clearly my personal information as I go to the supermarket.

Two weeks ago I was listening to the head of privacy at Statoil, the major Norwegian energy company, and they said, "We have no B2C, but in fact, even just the employee information we have is critical to us, and we're taking extremely seriously the way in which we manage it."

Of course, that means this applies to every single corporation; it is both an internal and an external aggregation of information.

Grieveson: The interesting thing is, as digital disruption comes to all organizations and we start to see the proliferation and the tsunami of data being gathered, it becomes more of a challenge or an opportunity, depending on how you look at that. Literally, the new [business] perimeter is on your mobile phone, on your cellphone, where people are accessing cloud services.

If I use the British Airways app, for example, I'm literally accessing 18 cloud services through my mobile phone. That then makes it a target for that data to be gathered. Do I really understand what’s being stored where? That’s where this really helps, trying to formalize what information is stored where and how it is being transacted and used.

Gardner: On another level of segmentation, is this very much different for a government or public organization versus a private one? There might be some vertical industries, like finance or health, where they've become accustomed to protecting data, but does this have implications for the public sector as well?

Room: Yes, the public sector is regulated by this. There's a separate directive that’s been adopted to cover policing and law enforcement, but the public sector has been in scope for a very long time now.

Gardner: How does one go about the solution on a bit more granular level? Someone mentioned the idea of the data-protection officer. Do we have any examples or methodologies that make for a good approach to this, both at the tactical level of compliance but also at the larger strategic level of a better total data and security posture? What do we do, what’s the idea of a data-protection officer or office, and is that a first step -- or how does one begin?

Compliance issue

Room: We're stressing to entities that they take a data [management] view. This is a compliance issue, but there are three legs to the stool. They need to understand the economic goals that they have through the use of data or from data itself. So, economically, what are they trying to do?

The second issue is the question of risk, and where does our risk appetite lie in the context of the economic issues? And then, the third is obligation. So, compliance. It’s really important that these three things be dealt with or considered at the very beginning and at the same time.

Think about the idea simply of risk management. If we were to look at risk management in isolation of an economic goal, you could easily build a technology system that doesn’t actually deliver any gain. A good example would be personalization and customer insights. There is a huge amount of risk associated with that, and if you didn’t have the economic voice within the conversation, you could easily fail to build the right kind of insight or personalization engine. So, bringing this together is really important.

Once you've brought those things together in the conversation, the question is what is your vision, what’s your desired end-state, what is it that you're trying to achieve in light of those three things? Then, you build it out from there. What a lot of entities are doing is making tactical decisions absent the strategic decision. We know that, in a tactical sense, it’s incredibly important to do data mapping and data analysis.

We feel at PwC that that’s a really critical step to take, but you want to be doing that data mapping in the context of a strategic view, because it affects the order of priority and how you tackle the work. Some non-obvious matters will turn out to be clearer priorities than data mapping if you take the proper strategic view.

A specific example of that would be complaint handling. Not many people have complaint handling on the agenda -- how we operate inside the call center, for instance, when people are cross. It's probably a much more important strategic consideration at the very beginning than some of the more obvious steps that you might take. Bringing those things forward and having a vision for a desired end-state will tell you which steps you want to take and how to shape them.

Gardner: Tim, this isn’t something you buy out of a box. The security implications of being able to establish that a breach has taken place in as little as 72 hours sound to me like they involve an awful lot more than a product or service. How should one approach this from the security culture perspective, and how should one start?

Grieveson: You're absolutely right. This is not a single product or a point solution. You really have to bake it into the culture of your organization and focus not just on single solutions, but actually the end-to-end interactions between the user, the data, and the application of the data.

If you do that, what you're starting to look at is how to build things in a safe, secure manner, but also how do you build them to enable your business to do something? There's no point in building a data lake, for example, and gathering all this data unless you actually have from that data some insight, which is actionable and measured back to the business outcomes.

I actually don't use the word “security” often when I am talking to customers. I'll talk about "protection," whether that's protection of revenue or growing new markets. I put it into business language, rather than technology language. I think that’s the first thing, because technology language puts people off.

What are you protecting?

The second thing is to understand what it is that you're going to protect and why, and where it resides, and then start to build the culture from the top down and also from the bottom up. It’s not just the data-protection office's problem or issue to deal with. It’s not just the CIO or the CISO; it’s building a culture in your organization where it becomes normal, everyday business. Good security is good business.

Once you've done that, this is not a project; it’s not do it once and forget it. It’s really around building a journey, but this is an evolving journey. It’s not just a matter of doing it, getting to the point where you have that check box to say, yes, you are complying. It’s absolutely around continuing to look at how you're doing your business, continuing to look at your data as new markets come on or new data comes on.

You have to reassess where you are in this structure. That’s really important, but the key thing for me is that if you focus on that data and those interactions, you have less of a conversation about the technology. The technology is an enabler, but you do need a good mix of people, process, and technology to deliver good security in a data-driven organization.

Gardner: Given that this cuts across different groups within a large organization that may not have had very much interaction in the past -- given that this is not just technology but process and people, as Tim mentioned -- how does the relationship between HPE and PwC come together to help organizations solve this? Perhaps you can describe the alliance a bit for us.

Kemp: I'm a lawyer by profession. I very much respect our ability to collaborate with PwC, which is a global alliance [partner] of ours. On the basis of that, I regard Stewart and his very considerable department as providing a translation of the regulation into deliverables. What is it that you want me to do, what does the regulation say? It may say that you have to safeguard information. What does that entail? There are three major steps here.

One is the external counsel's guidance translating what the regulation means into a set of deliverables.

Secondly, a privacy audit. This has been around in terms of a cultural concept since the 1960s. Where are you already in terms of your management of PII? When that is complete, then we can introduce the technology that you might need in order to make this work. That is really where HPE comes in. That’s the sequence.

Then, if we just look very simply at the IT architecture, what’s needed? Well, as we said right at the beginning, my view is that this sits under an organization's records-management coherence strategy. One of the first things is, can you connect to the sources of data around your organization, given that most entities have grown up by acquisition and not organically? Can you actually connect to and read the information where it is, wherever it is around the world, in whatever silo?

For example, Volkswagen had a little problem in relation to diesel emissions, but one of the issues there is not so much how they defend themselves, but how they get to the basic information, in many countries, as to whether a particular sales director knew about this issue or not.

Capturing data

So, connectivity is one point. The second thing is being able to capture information without moving it across borders. That's where [data] technology applies: it handles the metadata of the basic components of a particular piece of digital information so the data can be captured, whether it is structured or unstructured. Let’s bear in mind that when we're talking about data, it could be audio, visual, or alphanumeric. Can we bring that together and can we capture it?

Then, can we apply rules to it? If you had to say in a nutshell what is HPE doing as a collaboration with PwC, we're doing policy enforcement. Whatever Stewart and his professional colleagues advise in relation to the deliverables, we are seeking to affect that and make that work across the organization.

That's an easy way to describe it, even to non-technical people. So, General Counsel or the Head of Compliance or Risk can appreciate the three steps of the legal interpretation, the privacy audit, and then the architecture. Then comes the building up of the acquisition of information, to make sure that the standards set by PwC are actually being complied with.

Gardner: We're coming up toward the end of our time, but I really wanted to get into some examples to describe what it looks like when an organization does this correctly, what the metrics of success are. How do you measure this state of compliance and attainment? Do any of you have an example of an organization that has gone through many of these paces, has acquired the right process, technology and culture, and what that looks like when you get there?

Room: There are various metrics that people have put in place, and it depends which principles you're talking about. We obviously have security, which we've spoken about quite a lot here, but there are other principles: accuracy, retention, deletion, transfers, and on and on.

But one of the metrics that entities are putting in, which is a non-security control, is the number of people who are successfully participating in training sessions and passing the little examination at the very end. The reason that key performance indicator (KPI) is important is that during enforcement cases, when things go wrong -- and there are lots and lots of these cases out there -- the same kinds of challenges are presented by the regulators and by litigants, and that's an example of one of them.

So, when you're building your metrics and your KPIs, it's important to think not just about the measures that would achieve operational privacy and operational security, but also about the metrics that people who would be adverse to you would understand: judges, regulators, litigants, and so on. There are essentially two kinds of metrics: operational results metrics, and the judgment metrics that others may apply to you.

Gardner: At HPE, do you have any examples or perhaps you can describe why we think that doing this correctly could get you into a better competitive business position? What is it about doing this that not only allows you to be legally compliant, but also puts you in an advantageous position in a market and in terms of innovation and execution?

Biggest sanction

Kemp: If I could quote some of our clients, especially in the Nordic Region, there are about six major reasons for paying strict and urgent attention to this particular subject. One of them, listening to my clients, has to do with compliance. That is the most obvious one. That is the one that has the biggest sanction.

But there are another five arguments -- I won't go into all of them -- which have to do with advancement of the business. For example, a major media company in Finland said that if it could say on its website that it was GDPR-compliant, that would materially increase customers' belief in its respect for their information, and it would give the company a market advantage. So it's actually advancing the business.

The second aspect, which I anticipated, but I've also heard from corporations, is that in due course, if it's not here already, there might be a case where governments would say that if you're not GDPR compliant, then you can’t bid on our contracts.

The third might be, as Tim was referring to earlier, what if you wanted to make the best use of this information? There’s even a possibility of corporations taking the PII, making sure it's fully anonymized or pseudonymized, and then mixing it with other freely available information, such as Facebook data, and actually saying to a customer: David, we would like to use your PII, fully anonymized. We can prove to you that we have followed the PwC legal guidance. And furthermore, if we do use this information for analytics, we might even want to pay you for it. What are you doing? You are increasing the bonding and loyalty with your customers.

So, we should think about the upsides of the business advancement, which ironically is coming out of a regulation, which may not be so obvious.

Gardner: Let’s close out with some practical hints as to how to get started, where to find more resources, both on the GDPR, but also how to attain a better data privacy capability. Any thoughts about where we go to begin the process?

Kemp: I would say that in the public domain, the EU is extremely good at promulgating information about the regulation itself coming in and providing some basic interpretation. But then, I would hand it on to Stewart in terms of what PwC Legal is already providing in the public domain.

Room: We have two accelerators that we've built to help entities go forward. The first is our GDPR Readiness Assessment Tool (RAT), and lots of multinationals run the RAT at the very beginning of their GDPR programs.
What does it do? It asks 70 key questions against the two domains of operational and legal privacy. Privacy architecture and privacy principles are mapped into a maturity metric that assesses people’s confidence about where they stand. All of that is then mapped into the articles and recitals of the GDPR. Lots of our clients use the RAT.

The second accelerator is the PwC Privacy and Security Enforcement Tracker. We've been tracking the results of regulatory cases and litigation in this area over many years. That gives us a very granular insight into the real priorities of regulators and litigants in general.

Using those two tools at the very beginning gives you a good insight into where you are and what your risk priorities are.

Gardner: Last word to you, Tim. Any thoughts on getting started -- resources, places to go to get on your journey or further along?

The whole organization

Grieveson: You need to involve the whole organization. As I said earlier on, it’s not just about passing it over to the data-protection officer. You need to have the buy-in from every part of the organization. Clearly, working with organizations who understand the GDPR and the legal implications, such as the collaboration between PwC and HPE, is where I would go.

When I was in the seat as a CISO -- I'm not a legal expert -- one of the first things that I did was go and get that expertise and bring it in. Probably the first place I would start is getting buy-in from the business and making sure that you have the right people around the table to help you on the journey.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Monday, June 27, 2016

CA streamlines cloud and hybrid IT infrastructure adoption through better holistic services monitoring

New capabilities in CA Unified Infrastructure Management (CA UIM) are designed to help enterprises adopt cloud more rapidly and better manage hybrid IT infrastructure heterogeneity across several major cloud environments.

Enterprises and SMBs are now clamoring for hybrid cloud benefits, due to the ability to focus on apps and to gain speed for new business initiatives, says Stephen Orban, Global Head of Enterprise Strategy at AWS.

"Going cloud-first allows organizations to focus on the apps that make the business run, says Orban. Using hybrid computing, the burden of proof soon shifts to why should we use cloud for more of IT," he says.

As has been the case with legacy IT for decades, the better the overall management, the better the adoption success, productivity, and return on investment (ROI) for IT systems and the apps they support -- no matter the location or IT architecture. This same truth is now being applied to solve the cloud heterogeneity problem, just as it was for the legacy platforms heterogeneity problem. The total visibility solution may be even more powerful in this new architectural era.

Cloud-first is business-first

The stakes are now even higher. As you migrate to the cloud, one weak link in a complex hybrid cloud deployment can ruin the end-user experience, says Ali Siddiqui, general manager, Agile Operations at CA. "By providing insight across the performance of all of an organization's IT resources in a single and unified view, CA UIM gives users the power to choose the right mix of modern cloud enablement technologies."

CA UIM reduces complexity of hybrid infrastructures by providing visibility across on-premises, private-, and public-cloud infrastructures through a single console UI. Such insight enables users to adopt new technologies and expand monitoring configurations across existing and new IT resource elements. CA expects the solution to reduce the need for multiple monitoring tools. [Disclosure: CA is a sponsor of BriefingsDirect.]

"Keep your life simple from a monitoring and management perspective, regardless of your hybrid cloud [topology]," said Michael Morris, Senior Director Product Management, at CA Technologies in a recent webcast.

To grease the skids to hybrid cloud adoption, CA UIM now supports advanced performance monitoring of Docker containers, Pure Storage arrays, Nutanix hyperconverged systems, and OpenStack cloud environments, plus additional capabilities for Amazon Web Services (AWS) cloud infrastructures, CA Technologies announced last week.

CA is putting its IT systems management muscle behind the problem of migrating from data centers to the cloud, and then better supporting hybrid models, says Siddiqui. The "single pane of glass" monitoring approach that CA is delivering allows measurement and enforcement of service-level agreements (SLAs) before and after cloud migration. This way, continuity of service and IT value-add can be preserved and measured, he added.

Managing a cloud ecosystem

"Using advanced monitoring and management can significantly cut costs of moving to cloud," says Siddiqui.

Indeed, CA is working with several prominent cloud and IT infrastructure partners to make the growing diversity of cloud implementations a positive, not a drawback. For example, "Virtualization tools are too constrained to specific hypervisors, so you need total cloud visibility," says Steve Kaplan, Vice President of Client Strategy at Nutanix, of CA's new offerings.

And it's not all performance monitoring. Enhancements to CA UIM's coverage of AWS cloud infrastructures include billing metrics and support for additional services that provide deeper actionable insights on cloud brokering.

CA UIM now also provides:

  • Service-centric and unified analytics capabilities that rapidly identify the root cause of performance issues, resulting in a faster time to repair and a better end-user experience

  • Out-of-the-box support for more than 140 on-premises and cloud technologies

  • Templates for easier configuration of monitors that can be applied to groups of disparate systems

What's more, to ensure the reliability of networks such as SDN/NFV that connect and scale hybrid environments, CA has also delivered CA Virtual Network Assurance, which provides a common view of dynamic changes across virtual and physical network stacks.


Friday, June 24, 2016

Here's how two part-time DBAs maintain mobile app ad platform Tapjoy’s massive data needs

The next BriefingsDirect Voice of the Customer big data case study discussion examines how mobile app advertising platform Tapjoy handles fast and massive data -- some two dozen terabytes per day -- with just two part-time database administrators (DBAs).

Examine how Tapjoy’s data-driven business of serving 500 million global mobile users -- or more than 1.5 million ad engagements per day, a data volume of 120 terabytes -- runs with extreme efficiency.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about how high scale and complexity meets minimal labor for building user and advertiser loyalty we're joined by David Abercrombie, Principal Data Analytics Engineer at Tapjoy in San Francisco. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Mobile advertising has really been a major growth area, perhaps more than any other type of advertising. We hear a lot about advertising waning, but not mobile app advertising. How does Tapjoy and its platform help contribute to the success of what we're seeing in the mobile app ad space?

Abercrombie: The key to Tapjoy’s success is engaging the users and rewarding them for engaging with an ad. Our advertising model is that you engage with an ad and then you typically get some sort of reward: virtual currency in the game you're playing or some sort of discount.

We actually have the kind of ads that lead users to seek us out to engage with the ads and get their rewards.

Gardner: So this is quite a bit different than a static presented ad. This is something that has a two-way street, maybe multiple directions of information coming and going. Why the analysis? Why is that so important? And why the speed of analysis?

Abercrombie: We have basically three types of customers. We have the app publishers who want to monetize and get money from displaying ads. We have the advertisers who need to get their message out and pay for that. Then, of course, we have the users who want to engage with the ads and get their rewards.

The key to Tapjoy’s success is being able to balance the needs of all of these disparate uses. We can’t charge the advertisers too much for their ads, even though the monetizers would like that. It’s a delicate balancing act, and that can only be done through big-data analysis, careful optimization, and careful monitoring of the ad network assets and operation.

Gardner: Before we learn more about the analytics, tell us a bit more about what role Tapjoy plays specifically in what looks like an ecosystem play for placing, evaluating, and monetizing app ads? What is it specifically that you do in this bigger app ad function?

Ad engagement model

Abercrombie: Specifically what Tapjoy does is enable this rewarded ad engagement model, so that the advertisers know that people are going to be paying attention to their ads and so that the publishers know that the ads we're displaying are compatible with their app and are not going to produce a jarring experience. We want everybody to be happy -- the publishers, the advertisers, and the users. That’s a delicate compromise that’s Tapjoy’s strength.

Gardner: And when you get an end user to do something, to take an action, that’s very powerful, not only because you're getting them to do what you wanted, but you can evaluate what they did under what circumstances and so forth. Tell us about the model of the end user specifically. What is it about engaging with them that leads to the data -- which we will get to in a moment?

Abercrombie: In our model of the user, we talk about long-term value. So even though it may be a new user who has just started with us, maybe their first engagement, we like to look at them in terms of their long-term value, both to the publishers and the advertiser.

We don’t want people who are just engaging with the ad and going away, getting what they want and not really caring about it. Rather, we want good users who will continue their engagement and continue this process. Once again, that takes some fairly sophisticated machine-learning algorithms and very powerful inferences to be able to assess the long-term value.

As an example, we have our publishers who are also advertisers. They're advertising their app within our platform and for them the conversion event, what they are looking for, is a download. What we're trying to do is to offer them users who will not only download the game once to get that initial payoff reward, but will value the download and continue to use it again and again.

So all of our models are designed with that end in mind -- to look at the long-term value of the user, not just the immediate conversion at this instant in time.

Gardner: So perhaps it’s a bit of a misnomer to talk about ads in apps. We're really talking about a value-add function in the app itself.

Abercrombie: Right. The people who are advertising don’t want people to just see their ads. They want people to follow up with whatever it is they're advertising. If it’s another app, they want good users for whom that app is relevant and useful.

That’s really the way we look at it. That’s the way to enhance the overall experience in the long-term. We're not just in it for the short-term. We're looking at developing a good solid user base, a good set of users who engage thoroughly.

Gardner: And as I said in my set-up, there's nothing hotter in all of advertising than mobile apps and how to do this right. It’s early innings, but clearly the stakes are very high.

A tough business

Abercrombie: And it’s a tough business. People are saturated. Many people don’t want ads. Some of the business models are difficult to master.

For instance, there may be a sequence of multiple ad units. There may be a video followed by another ad to download something. It becomes a very tricky thing to balance the financing here. If it was just a simple pass-through and we take a cut, that would be trivial, but that doesn't work in today's market. There are more sophisticated approaches, which do involve business risk.

If we reward the user, based on the fact that they're watching the video, but then they don't download the app, then we don't get money. So we have to look very carefully at the complexity of the whole interaction to make it as smooth and rewarding as possible, so that the thing works. That's difficult to do.

Gardner: So we're in a dynamic, fast-growing, fairly fresh, new industry. Knowing what's going to happen before it happens is always fun in almost any industry, but in this case, it seems with those high stakes and to make that monetization happen, it’s particularly important.
Tell me now about gathering such large amounts of data, being able to work with it, and then allowing analysis to happen very swiftly. How do you go about making that possible?

Abercrombie: Our data architecture is relatively standard for this type of clickstream operation. There is some data that can be put directly into a transactional database in real time, but typically that's only when you get to the very bottom of the funnel, the conversion stuff. All that clickstream stuff gets written to JSON-formatted log files, gets swept up by a queuing system, and is then put into our data systems.

Our legacy system involved a homegrown queuing system, dumping data into HDFS. From there, we would extract and load CSVs into Vertica. As with so many other organizations, we're moving to more real-time operations. Our queuing system has evolved from a couple of different homegrown applications, and now we're implementing Apache Kafka.
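As a rough illustration of that queuing step -- not Tapjoy's actual code -- the following Python sketch publishes a JSON-formatted clickstream event to a Kafka topic with the kafka-python client. The broker address, topic, and field names are all hypothetical.

    import json
    from kafka import KafkaProducer  # kafka-python client

    producer = KafkaProducer(
        bootstrap_servers=["broker1:9092"],               # hypothetical broker
        value_serializer=lambda e: json.dumps(e).encode("utf-8"),
    )

    event = {
        "event_type": "ad_engagement",
        "timestamp_ms": 1467331200000,
        "user_id": "hypothetical-user-123",
        "app_id": "hypothetical-app-456",
        "ad_unit": "rewarded_video",
    }

    # Each log event becomes one message on the 'clickstream' topic, from which
    # downstream consumers (Spark, HDFS, the analytics database) can fan out.
    producer.send("clickstream", value=event)
    producer.flush()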

We use Spark as part of our infrastructure, as sort of a hub, if you will, where data is farmed out to other systems, including a real-time, in-memory SQL database, which is fairly new to us this year. Then, we're still putting data in HDFS, and that's where the machine learning occurs. From there, we're bringing it into Vertica.

In Vertica -- and our Vertica cluster has two main purposes -- there is the operational data store, which has the raw, flat tables that are one row for every event, with the millisecond timestamps and the IDs of all the different entities involved.

From that operational data store, we do a pure SQL ETL extract into kind of an old-school star schema within Vertica, the same database.
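The flavor of that pure-SQL ETL might look something like the sketch below, which rolls raw events from an operational data store up into a fact table. The schema and table names are invented for illustration, and it assumes the open-source vertica-python client; Tapjoy's real load logic is, of course, its own.

    import vertica_python  # open-source Vertica client

    FACT_LOAD_SQL = """
    INSERT INTO dw.fact_ad_engagement (date_key, app_key, ad_unit_key, engagements, conversions)
    SELECT d.date_key,
           a.app_key,
           u.ad_unit_key,
           COUNT(*) AS engagements,
           SUM(CASE WHEN e.event_type = 'conversion' THEN 1 ELSE 0 END) AS conversions
    FROM   ods.raw_events e
    JOIN   dw.dim_date    d ON d.calendar_date = e.event_time::DATE
    JOIN   dw.dim_app     a ON a.app_id        = e.app_id
    JOIN   dw.dim_ad_unit u ON u.ad_unit_id    = e.ad_unit_id
    WHERE  e.event_time >= CURRENT_DATE - 1      -- yesterday's load window
    GROUP  BY d.date_key, a.app_key, u.ad_unit_key
    """

    conn = vertica_python.connect(host="vertica-host", port=5433, user="etl",
                                  password="***", database="analytics")
    cur = conn.cursor()
    cur.execute(FACT_LOAD_SQL)   # the whole transform runs inside the database
    conn.commit()
    conn.close()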

Pure SQL

So our business intelligence (BI) ETL is pure SQL and goes into a full-fledged snowflake schema, moderately denormalized with all the old-school bells and whistles, the type 1, type 2, slowly changing dimensions. With Vertica, we're able to denormalize that data warehouse to a large degree.
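For the type 2 slowly changing dimensions mentioned here, the classic pattern is to expire the current dimension row and insert a new version. Below is a hedged sketch of that pattern, again with hypothetical table names; the exact UPDATE ... FROM syntax varies a little by SQL dialect.

    import vertica_python

    SCD2_STEPS = [
        # 1) Expire the current row for any app whose tracked attribute changed.
        """
        UPDATE dw.dim_app d
        SET    valid_to = CURRENT_TIMESTAMP, is_current = FALSE
        FROM   staging.app_updates s
        WHERE  d.app_id = s.app_id
          AND  d.is_current = TRUE
          AND  d.app_category <> s.app_category
        """,
        # 2) Insert the new version as the current row (also covers brand-new apps).
        """
        INSERT INTO dw.dim_app (app_id, app_name, app_category, valid_from, valid_to, is_current)
        SELECT s.app_id, s.app_name, s.app_category, CURRENT_TIMESTAMP, NULL, TRUE
        FROM   staging.app_updates s
        LEFT JOIN dw.dim_app d ON d.app_id = s.app_id AND d.is_current = TRUE
        WHERE  d.app_id IS NULL
        """,
    ]

    conn = vertica_python.connect(host="vertica-host", port=5433, user="etl",
                                  password="***", database="analytics")
    cur = conn.cursor()
    for statement in SCD2_STEPS:   # run the expire step before the insert step
        cur.execute(statement)
    conn.commit()
    conn.close()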

Sitting on top of that we have a BI tool. We use MicroStrategy, for which we have defined our various metrics and our various attributes, and it’s very adept at knowing exactly which fact table and which dimensions to join.

So we have sort of a hybrid architecture. I'd say that we have all the way from real-time, in-memory SQL, Hadoop and all of its machine learning and our algorithmic pipelines, and then we have kind of the old-school data warehouse with the operational data store and the star schema.

Gardner: So a complex, innovative, custom architectural approach to this and yet I'm astonished that you are running and using Vertica in multiple ways with two part-time DBAs. How is it possible that you have minimal labor, given this topology that you just described?

Abercrombie: Well, we found Vertica very easy to manage. It has been very well-behaved, very stable.

For instance, we don’t even really use the Management Console, because there is not enough to manage. Our cluster is about 120 terabytes. It’s only on eight nodes and it’s pretty much trouble free.

One of the part-time DBAs deals with the more operating-system-level stuff -- patches, cluster recovery, those sorts of issues. The other part-time DBA is me. I deal more with data structure design, SQL tuning, and Vertica training for our staff.

In terms of ad-hoc users of our Vertica database, we have well over 100 people who have the ability to run any query they want at any time into the Vertica database.

When we first started out, we tried running Vertica in Amazon EC2. Mind you, this was four or five years ago. Amazon EC2 was not where it is today. It failed. It was very difficult to manage. There were perplexing problems that we couldn’t solve. So we moved our Vertica and essentially all of our big-data data systems out of the cloud onto dedicated hardware, where they are much easier to manage and much easier to bring the proper resources.

Then, at one time in our history, when we built a dedicated hardware cluster for Vertica, we failed to heed properly the hardware planning guide and did not provision enough disk I/O bandwidth. In those situations, Vertica is unstable, and we had a lot of problems.

But once we got the proper disk I/O, it has been smooth sailing. I can’t even remember the last time we even had a node drop out. It has been rock solid. I was able to go on a vacation for three weeks recently and know that there would be no problem, and there was no problem.

Gardner: The ultimate key performance indicator (KPI), "I was able to go on vacation."

Fairly resilient

Abercrombie: Exactly. And with the proper hardware design, HPE Vertica is fairly resilient against out-of-control queries. There was a time when half my time was spent monitoring for slow queries, but again, with the proper hardware, it's smooth sailing. I don’t even bother with that stuff anymore.

Our MicroStrategy BI tool writes very good SQL. Part of the key to our success with this BI portion is designing the Vertica schema and the MicroStrategy metadata layer to take advantage of each other’s strengths and avoid each other’s weaknesses. So that really was key to the stable, exceptional performance we get. I basically get no complaints of slow queries from my BI tool. No problem.

Gardner: The right kind of problem to have.

Abercrombie: Yes.

Gardner: Okay, now that we have heard quite a bit about how you are doing this, I'd like to learn, if I could, about some of the paybacks when you do this properly, when it is running well, in terms of SQL queries, ETL load times reduction, the ability for you to monetize and help your customers create better advertising programs that are acceptable and popular. What are the paybacks technically and then in business terms?

Abercrombie: In order to get those paybacks, a key element was confidence in the data, the results that we were shipping out. The only way to get that confidence was by having highly accurate data and extensive quality control (QC) in the ETL.

What that also means is that when a product is under development and isn't ready yet -- the instrumentation isn’t ready -- that stuff doesn’t make it into our BI tool. You can only get at that data ad hoc.

So the benefit has been a very clear understanding of the day-to-day operations of our ad network, both for our internal monitoring to know when things are behaving properly, when the instrumentation is working as expected, and when the queues are running, but also for our customers.

Because of the flexibility we get from a traditional BI system with 500 metrics across a couple of dozen dimensions, our customers, the publishers and the advertisers, get incredible detail, customized exactly the way they need for ingestion into their systems or to help them understand how Tapjoy is serving them. Again, that comes from confidence in the data.

Gardner: When you have more data and better analytics, you can create better products. Where might we look next to where you take this? I don’t expect you to pre-announce anything, but where can you now take these capabilities as a business and maybe even expand into other activities on a mobile endpoint?

Flexibility in algorithms

Abercrombie: As we expand our business and move into new areas, what we really need is flexibility in our algorithms and the way we deal with some of our real-time decision making.

So one area that’s new to us this year is the in-memory SQL database, like MemSQL. Some of our old real-time ad optimization was based on pre-calculating data and serving it up through HBase key-value lookups, but now we can do real-time aggregation queries using SQL that are easy to understand, easy to modify, very expressive, and very transparent. That gives us more flexibility in terms of fine-tuning our real-time decision-making algorithms, which is absolutely necessary.
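To make that concrete, here is a small sketch of the kind of real-time aggregation query an in-memory SQL store makes easy. MemSQL speaks the MySQL wire protocol, so a stock client such as PyMySQL can issue it; the table and column names are hypothetical.

    import pymysql  # MemSQL is MySQL-wire-protocol compatible

    RECENT_PERFORMANCE_SQL = """
    SELECT ad_unit_id,
           COUNT(*) AS engagements,
           SUM(event_type = 'conversion') AS conversions
    FROM   ad_events
    WHERE  event_time >= NOW() - INTERVAL 5 MINUTE
    GROUP  BY ad_unit_id
    """

    conn = pymysql.connect(host="memsql-host", user="serving",
                           password="***", database="realtime")
    with conn.cursor() as cur:
        # Aggregating on the fly replaces the old pre-computed key-value lookups.
        cur.execute(RECENT_PERFORMANCE_SQL)
        for ad_unit_id, engagements, conversions in cur.fetchall():
            print(ad_unit_id, engagements, conversions)
    conn.close()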

As an example, we acquired a company in Korea called 5Rocks that does app tech and that tracks the users within the app, like what level they're on, or what activities they're doing and what they enjoy, with an eye towards in-app purchase optimization.
And so we're blending the in-app purchase optimization along with traditional ad network optimization, and the two have different rules and different constraints. So we really need the flexibility and expressiveness of our real-time decision making systems.

Gardner: One last question. You mentioned machine learning earlier. Do you see that becoming more prominent in what you do and how you're working with data scientists, and how might that expand in terms of where you employ it?

Abercrombie: Tapjoy started with machine learning. Our data scientists are machine learning. Our predictive-algorithm team is about six times larger than our traditional Vertica BI team. Mostly what we do at Tapjoy is predictive analytics and various machine-learning things. So we wouldn't be alive without it. And we've expanded. We're not shifting in one direction or another. It's apples and oranges, and there's a place for both.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Tuesday, June 21, 2016

Expert panel explores the new reality for cloud security and trusted mobile apps delivery

The next BriefingsDirect thought leadership panel discussion focuses on the heightened role of security in the age of global cloud and mobile delivery of apps and data.

As enterprises and small to medium-sized businesses (SMBs) alike weigh the balance of apps and convenience with security -- a new dynamic is emerging. Security concerns increasingly dwarf other architecture considerations.

Yet advances in thin clients, desktop virtualization (VDI), cloud management services, and mobile delivery networks are allowing both increased security and edge applications performance gains.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To learn more about the new reality for end-to-end security for apps and data, please welcome our panel: Stan Black, Chief Security Officer at Citrix; Chad Wilson, Director of Information Security at Children's National Health System in Washington, DC; Whit Baker, IT Director at The Watershed in Delray Beach, Florida; Craig Patterson, CEO of Patterson and Associates in San Antonio, Texas, and Dan Kaminsky, Chief Scientist at White Ops in San Francisco. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Stan, a first major use case of VDI was the secure, stateless client. All the data and apps remain on the server, locked down and controlled. But now data is increasingly mobile, and we're all mobile. So, how can we take security on the road, so to speak? How do we move past the safe state of VDI to full mobile, but not lose our security posture?

Black: Probably the largest challenge we all have is maintaining consistent connectivity. We're now able to keep data locally or make it highly extensible, whether it’s delivered through the cloud or a virtualized application. So, it’s a mix and a blend. But from a security lens, each one of those service capabilities has a certain nuance that we need to be cognizant of while we're trying to protect data at rest, in use, and in motion.

Gardner: I've heard you speak about bring your own device (BYOD), and for you, BYOD devices have ended up being more secure than company-provided devices. Why do you think that is?

Caring for assets

Black: Well, if you own the car, you tend to take care of it. When you have a BYOD asset, you tend to take care of it, because ultimately, you're going to own that, whether it’s purchased for you with a retainer or what have you.

Often, corporate-issued assets are like a car rental. You might not bring it back the same way you took it. So it has really changed quite a bit. But the containerization gives us the ability to provide as much, if not more, control in that BYOD asset.

Gardner: This also I think points out the importance of behaviors and end-user culture and thinking about security, acting in certain ways. Let's go to you, Craig. How do we get that benefit of behavior and culture as we think more about mobility and security?

Patterson: When we look at mobile, we've had people who would have a mobile device out in the field. They're accustomed to being able to take an email, and that email may have, in our situation, private information -- Social Security numbers, certain client IDs -- on it, things that we really don't want out in the public space. The culture has been, take a picture of the screen and text it to someone else. Now, it’s in another space, and that private information is out there.

You go from working in a home environment, where you text everything back and forth, to having secure information that needs to be containerized, shrink-wrapped, and not go outside a certain control parameter for security. Now, you're having a culture fight [over] utilization. People are accustomed to using their devices in one way and now, they have to learn a different way of using devices with a secure environment and wrapping. That’s what we're running into.

Gardner: We've also heard at the recent Citrix Synergy 2016 in Las Vegas that IT should increasingly be able to say "Yes," and that this is an important part of getting to better business productivity.
Learn more about the Citrix Security Portfolio of Workspace-as-a-Service, Application Delivery, Virtualization, Mobility, Network Delivery, and File-Sharing Solutions.
Dan, how do we get people to behave securely without IT having to say "No"? Is there a carrot approach to this?

Kaminsky: Absolutely. At the end of the day, our users are going to go ahead and do stuff they need to get their jobs done. I always laugh when people say, "I can’t believe that person opened a PDF from the Internet." They work in HR. Their job is to open resumes. If they don’t open resumes, they're going to lose their job and be replaced by someone else.

The thing I see a lot is that these software-as-a-service (SaaS) providers are being pressed into service to provide the things that people need. It’s kind of like a rogue IT or an outsourced IT, with or without permission.

The unusual realization that I had is that all these random partners we're getting have random policies and are storing data. We hear a lot of stuff about the Internet of Things (IoT), but I don't know any toasters that have my Social Security number. I know lots of these DocuSign, HelloSign systems that are storing really sensitive documents.

Maybe the solution, if we want people to implement our security technologies, or at least our security policies, is to pay them. Tell them, "If you actually have attracted our users, follow these policies, and we'll give you this amount of money per day, per user, automatically through our authentication layer." It sounds ridiculous, but you have to look at the status quo. The status quo is on fire, and maybe we can pay people to put out their fires.

Quid pro quo

Gardner: Or perhaps there are other quid pro quos that don't involve money? Chad, you work at a large hospital organization and you mentioned that you're 100 percent digital. How did you encourage people with the carrot to adhere to the right policies in a challenging environment like a hospital?

Wilson: We threw out the carrot-and-stick philosophy and just built a new highway. If you're driving on a two-lane highway, and it's always congested, and you want somebody to get there faster, then build a new highway that can handle the capacity and the security. Build the right on- and off-ramps to it and then cut over.

We've had an electronic medical record (EMR) implementation for a while, and we just finished rolling it out to all of our ambulatory spaces. It's all delivered through virtualization on that highway that we built. So, they have access to it wherever they need it.

Gardner: It almost sounds like you're looking at the beginning bowler’s approach, where you put rails up on the gutters, so you can't go too far afield, whether you wish to or not. Whit Baker, tell us a little bit about The Watershed and how you view security behavior. Is it rails on the gutters, carrots or sticks, how does it go?

Baker: I would say rails on the gutters for us. We've completely converted everything to a VDI environment. Whether they're connecting with a laptop over broadband, their own home computer, or a mobile device, that session is completely isolated from their own operating system.

So, we're not really worried. Your desktop machine can be completely loaded with malware and whatnot, but when you open that session, you're inside of our system. That's basically how we handle the security. It almost doesn't require the users to be conscious of security.

At the same time, we're still afraid of attachments and things like that. So, we do educational-type things. When we see phishing emails come in, I'll send out scam alerts to our employees, and they're starting to become more security-aware. They're starting to ask, "Should I even open this?" -- those sorts of things.

So, it's a little bit of containerization, giving them some rails that they can bounce off of, and education.

Gardner: Stan, thinking about other ways we can encourage a good security posture in the mobility era, authentication certainly comes to mind, particularly multi-factor authentication (MFA). How does that play into keeping people safe?

Behavior elements

Black: It’s a mix of how we're going to deliver the services, but it's also a mix of the behavior elements and the fact that now technology has progressed so much that you can provide a user an entire experience that they actually enjoy. It gives them what they need, inside of a secure session, inside of a secure socket layer, with the inability to go outside of those bowling lanes, if they're not authorized to do so.

Additionally, authentication technologies have come a long way from hard tokens that we used to wear. I've seen people with four, five, or six of them, all in one necklace. I think I might have been one of them.

Multi-factor authentication and the user interface rely on pieces of information that aren't tied to the person's private identity, like their Social Security number, yet the user experience still enables them to connect seamlessly. Often, in a help-desk environment, as an example, you put a time-out on their system. They go from one phone call to another and then they have to log back in.

The interfaces that we have now, with MFA and simplified sign-on, enable a person, depending upon their role, to connect into the environment they need and do their job quickly and easily.
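
As background, the hard tokens Black mentions have largely given way to software-generated, time-based one-time passwords (TOTP, RFC 6238). The minimal Python sketch below illustrates the mechanics only; it is not a description of any product discussed here, and the function names and clock-drift window are assumptions.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32, at_time=None, step=30, digits=6):
        """Compute an RFC 6238 time-based one-time password."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int((at_time if at_time is not None else time.time()) // step)
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    def verify_code(secret_b32, submitted, window=1, step=30):
        """Accept the current code plus or minus `window` steps to tolerate clock drift."""
        now = time.time()
        return any(hmac.compare_digest(totp(secret_b32, now + drift * step), submitted)
                   for drift in range(-window, window + 1))

In practice the second factor sits alongside, not in place of, single sign-on, which is how the help-desk scenario above stays quick for the user.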

Gardner: You mentioned user experience, and maybe that’s the quid pro quo. You get more user experience benefits if you take more precautions with how you behave using your devices.

Dan, any thoughts on where we go with authentication and being able to say, Yes, and encourage people to do the right thing?
Learn more about the Citrix Security Portfolio of Workspace-as-a-Service, Application Delivery, Virtualization, Mobility, Network Delivery, and File-Sharing Solutions.
Kaminsky: I cannot emphasize enough how important usability is in getting security wins. We've had some major ones. We moved people from Telnet to SSH. Telnet was unencrypted and was a disaster. SSH is encrypted, and it's actually the thing people use now, because if you jump through a few hoops, you stop having to type in a password.

You know what VPNs meant? VPNs meant you didn't have to drive into the office on a Sunday. You could be at home and fix the problem, and hours became minutes or seconds. Everything that we do that really works involves making things more useable and enabling people. Security is giving you permission to do this thing that used to be dangerous.

I actually have a lot of hope in the mobility space, because a lot of these mobile environments and operating systems are really quite secure. You hand someone an iPad, and in a year, that iPad is still going to work. There are other systems where you hand someone a device and that device is not doing so well a year from now.

So there are a lot more controls and stability from some of these mobile things that people actually like to use more, and they turn out to also be significantly more secure.

Gardner: Craig, as we're also thinking about ways of keeping people on the straight and narrow path, we're getting more intelligent networks. We're starting to get more data and analytics from those devices and we're able to see what goes on in that network in high detail.

Tell us about the ways in which we can segment and then make zones for certain purposes that may come and go based on policies. Basically, how are intelligent networks helping us provide that usability and security?

Access to data

Patterson: The example that comes to my mind is that in many of the industries, we have partners who come on site for a short period of time. They need access to data. They might be doing inspections for us and they'll be going into a private area, but we don't want them to take certain photos, documents and other information off site after a period of time.

Containerizing data and having zones allows a person to have access while they're on premises, within a certain "electronic wire fence," if you will, or electronic guardrails. Once they go outside of that area, that data is no longer accessible or they've been logged off the system and they no longer have access to those documents.

We had kind of an old-fashioned example where people thought they were more secure because they didn't know what they were losing. We had people with locked file cabinets and the key around their neck. They said, "Why should we go to an electronic document system where I can see when you viewed it, when you downloaded it, and where you moved that document to?" That kind of scared some people.

Then, I walked in with half their file cabinet and I said, "You didn’t even know these were gone, but you felt secure the whole time. Wouldn’t you rather know that it was gone and have been able to institute some security protocols behind it?"

A lot of it goes to usability. We want to make things usable and we have to have access to it, but at the same time, those guardrails include not only where we can access it and at what time, but for how long and for what purposes.

We have mobile devices for which we need to be able to turn the camera functions off in certain parts of our facility. Mobile device management helps with that. With BYOD, it becomes a different challenge, and that's when we have to give people a device that we can control instead.
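
To make the "electronic wire fence" idea concrete, here is a minimal sketch of a location-aware access check: a circular geofence around a facility, with containerized documents revoked once the device reports a position outside it. The coordinates, radius, and field names are hypothetical, not taken from Patterson's environment.

    import math

    # Hypothetical on-premises geofence: a center point and an allowed radius.
    FENCE_CENTER = (29.4241, -98.4936)   # latitude, longitude (illustrative values)
    FENCE_RADIUS_M = 250                 # metres

    def distance_m(a, b):
        """Great-circle distance in metres between two (lat, lon) points (haversine)."""
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371000 * math.asin(math.sqrt(h))

    def evaluate_session(device_location, session):
        """Grant containerized-document access on premises; revoke it once the device leaves."""
        inside = distance_m(device_location, FENCE_CENTER) <= FENCE_RADIUS_M
        session["documents_accessible"] = inside
        session["logged_off"] = not inside
        return session

    print(evaluate_session((29.5000, -98.6000), {"user": "inspector-042"}))

A real mobile-device-management or app-containerization product would also handle offline caching and tampered location data, which this sketch ignores.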

Gardner: Stan, another major trend these days is the borderless enterprise. We have supply chains, alliances, ecosystems that provide solutions, an API-first mentality, and that requires us to be able to move outside and allow others to cross over. How does the network-intelligence factor play into making that possible so that we can say, Yes, and get a strong user experience regardless of which company we're actually dealing with?

Black: I agree with the borderless concept. The interesting part of it, though, is that networks now know where they're physically connecting to. The mobile device has over 20 sensors in it. When you take all of that information and bring it together with whatever APIs are enabled in the applications, you start to have a very interesting set of capabilities that we never had before.

A simple example: if you're a database administrator and you're administering something inside the European Union (EU), there are very stringent privacy laws that may mean you're not allowed to do that.

We don't have to train the person or make things more difficult for them; we simply disable the capability through geofencing. When one application is talking securely through a socket from a mobile device all the way to the back end, into the data center, you have pretty darn good control. You can also separate duties: system administration is one function, whereas database administration is a very different thing. One set doesn't see the private data; the other has very clear access to it.
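
The separation-of-duties point can be read as role-aware data masking. The sketch below is a hypothetical illustration, not Citrix's implementation: system administrators see private columns masked, while database administrators see them only when the kind of geofencing condition described above is satisfied.

    # Hypothetical separation of duties: system administrators never see private
    # columns; database administrators do, but only from a permitted region.
    MASKED_COLUMNS = {"ssn", "date_of_birth", "home_address"}

    def mask_row(row, role, in_permitted_region):
        """Return a copy of the row, hiding private columns unless the caller is a
        DBA operating from a region where that access is allowed."""
        if role == "dba" and in_permitted_region:
            return dict(row)
        return {k: ("***" if k in MASKED_COLUMNS else v) for k, v in row.items()}

    record = {"customer_id": 17, "ssn": "123-45-6789", "home_address": "10 Main St"}
    print(mask_row(record, role="sysadmin", in_permitted_region=True))
    # {'customer_id': 17, 'ssn': '***', 'home_address': '***'}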

Getting visibility

Gardner: Chad, you mentioned how visibility is super important for you and your organization. Tell me a bit about moving beyond the user implications. What about the operators? How do you get that visibility and keep it, and how important is that to maintaining your security posture?

Wilson: If you can't see it, you can’t protect it. No matter how much visibility we get into the back end, if the end user doesn't adopt the application or the virtualization that we've put in place or the highway that we've built, then we're not going to see the end-to-end session. They're going to continue to do workarounds.

So, usability is very important to end-user adoption of the new technologies and the new platforms. Systems have to be easy for them to access and to use. On the back end, the visibility piece, we look at adopting technology strategically to achieve interoperability, rather than bolting on point products here and there.

Strategic innovation and strategic procurement around technology and partnership, like we have with Citrix, allow us to deliver the application and the end-user experience consistently, no matter what device they go to or where in the world they access from. On the back side, that helps us, because we have end-to-end visibility of where our data is heading, the authentication right up front, as well as all the pieces and parts of the network that come into play to deliver that experience.

So, instead of thinking about things from a device-to-device-to-device perspective, we're thinking about one holistic service-delivery platform, and that's the new highway that provides that visibility.

Gardner: Whit, we've heard a lot about the mentality that you should always assume someone unwanted is in your network. Monitoring and response is one way of limiting that. How does your organization acknowledge that bad things can happen, but that you can limit that, and how important is monitoring and response for you in reducing damage?

Baker: In our case, we have several layers of user experience. Through policy, we only allow certain users to do certain things. We're a healthcare system, but we have various medical personnel, such as doctors, nurses, and therapists, versus people in our corporate billing area and our call center. All of those different roles are basically looking only at the data that they need to be accessing, and through policy, it's fairly easy to do.
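
A rough sketch of the policy idea Baker describes, with the roles and data categories invented for illustration: each role is granted only the datasets its job requires, and everything else is denied by default.

    # Hypothetical role-to-dataset policy: each role sees only what its job requires.
    POLICY = {
        "physician":   {"clinical_records", "lab_results"},
        "nurse":       {"clinical_records"},
        "therapist":   {"clinical_records"},
        "billing":     {"claims", "invoices"},
        "call_center": {"appointments"},
    }

    def can_access(role, dataset):
        """Default deny: grant access only if the dataset appears in the role's policy."""
        return dataset in POLICY.get(role, set())

    print(can_access("billing", "claims"))           # True
    print(can_access("call_center", "lab_results"))  # False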

Gardner: Stan, on the same subject, monitoring and response, assuming that people are in, what is Citrix seeing in the field, and how are you giving that response time as low a latency as possible?

Standard protocol

Black: The standard incident-response protocol is identify, contain, control, and communicate. We're able to shrink what we need to identify. We're able to connect from end-to-end, so we're able to communicate effectively, and we've changed how much data we gather regarding transmissions and communications.

If you think about it, we've shrunk our attack surface; we've shrunk the vulnerable areas, methods, or vectors by which people can enter. At the same time, we've gained incredibly high visibility and fidelity into what is supposed to be going over the wire or wireless, and what is not.

We're now able to shrink the identify, contain, control, and communicate spectrum to a much shorter area and focus our efforts with really smart threat intelligence and incident response people versus everyone in the IT organization and everyone in security. Everyone is looking at the needle in the haystack; now we just have a smaller stack of needles.
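
One way to picture the "smaller stack of needles" is an allow-list of expected flows, with anything that falls outside it routed to the threat-intelligence and incident-response team. The zones, ports, and event fields below are assumptions for illustration, not a real telemetry format.

    # Hypothetical allow-list of expected flows: (source zone, destination zone, port).
    EXPECTED_FLOWS = {
        ("vdi-clients", "app-servers", 443),
        ("app-servers", "database", 1433),
    }

    def triage(events):
        """Drop traffic that matches an expected flow; everything else is a needle
        for the incident-response queue."""
        return [e for e in events
                if (e["src_zone"], e["dst_zone"], e["port"]) not in EXPECTED_FLOWS]

    events = [
        {"src_zone": "vdi-clients", "dst_zone": "app-servers", "port": 443},
        {"src_zone": "app-servers", "dst_zone": "internet", "port": 8443},
    ]
    print(triage(events))  # only the unexpected outbound flow remains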

Patterson: I had a thought on that, because as we moved to a cloud-first strategy, one of the issues we looked at was, "We have a voice-over-IP system in the cloud, we have Azure, we have Citrix, we have our NetScaler. What about our firewalls now, and how do we actually monitor intrusion?"

We have file attachments and emails coming through in ways that don't pass through our on-premises firewall or all of our malware detection. So, those are questions that I think all of us are trying to answer, because now we're creating known unknowns and, really, unknown unknowns. When it happens, we're going to say, "We didn't know that that part could happen."

That’s where part of the industry is, too. Citrix and Microsoft are helping us with that in our environments, but those are still open questions for us. We're not entirely satisfied with the answers yet.

Gardner: Dan, one of the other ways that we want to be able to say, Yes, to our users and improve their experience as workers is to recognize the heterogeneity -- any cloud, any device, multiple browser types, multiple device types. How do you see the ability to say, Yes, to vast heterogeneity, perhaps at a scale we've never seen before, while at the same time preserving security and keeping those users happy?

Kaminsky: The reason we have different departments and multiple teams is that different groups have different requirements. They have different needs that are satisfied in ways that we don't necessarily understand. It's not the heterogeneity that bothers us; it's the fact that a lot of systems have different risks. We can merge those risks, or address them simultaneously, with consistent technologies like containerization and virtualization, the sort of centralization solutions that are out there.

People are sometimes afraid of putting all their eggs in one basket. I'll take one really well-built basket over 50,000 totally broken ones. What I say is, create environments in which users can use whatever makes their job work best, and realize that the risks aren't actually that distinct or that unique. The risk patterns of the underlying software are less diverse than the software itself.
Learn more about the Citrix Security Portfolio of Workspace-as-a-Service, Application Delivery, Virtualization, Mobility, Network Delivery, and File-Sharing Solutions.
Gardner: Stan, most organizations that we speak to say they have at least six, perhaps more, clouds. They're using all sorts of new devices. Citrix has recently shown the Raspberry Pi, at less than $100, to be a viable Windows 10 endpoint. How do we move forward and keep the options open for any cloud and any device?

Multitude of clouds

Black: When you look at the cloud, there is a multitude of public clouds. Many companies have internal clouds. We've seen all of this hyperconvergence, but what has blurred over time are the controls between the cloud, the enterprise, and mobile.

Again, some of what you've seen has been how certain technologies can fulfill controls between the enterprise and the cloud, because cloud is nimble, it’s fast, and it's great.

At the same time, if you don't control it, don’t manage it, or don't know what you have in the cloud, which many companies struggle with, your risk starts to sprawl and you don't even know it's happened.

So it's not adding difficult controls, what I would call classic gates, but transparency, visibility, and thresholds. You're allowed to do this between here and here. An end user doesn't know those things are happening.

Also, weaving analytics into every connection, knowing what that wire is supposed to look like, what that packet is supposed to look like gives you a heck of a lot more control than we've had for decades.

Gardner: Chad, for you and your organization, how would you like to get security visibility in terms of an analytic dashboard, visualization, and alerts? What would you like to see happen in terms of that analytics benefit?

Wilson: It starts with population health and the concept behind it. Population health takes in all the healthcare data, puts it into a data warehouse, and leverages analytics to show trends with, say, patients presenting with asthma across their lifespan, along with other triggers. That goes to quality of care.

The same concept should be applied to security. When we bring that data together, all the various logs and all the various threat vectors, not just signatures, we're able to identify trends in what we're seeing and in how the bad guys are doing it. Are the bad guys single-vectored, or have they learned the concept of combined arms, like our militaries have? Are they able to put things together to have better impact? And where do we need to put things together to have better protection?

We need to change the paradigm, so that when they show their hand once, it doesn't work anymore. The only way we can do that is by being able to detect that one time when they show their hand, by getting them to do one thing that reveals how they're going to attack us. To do that, we have to pull together all the logs and all of the data, provide analytics, and get down to behavior: what is good behavior and what is bad behavior.

That's not a signature that you're detecting for malware; that is a behavior pattern. Today I can do one thing, and tomorrow I can do it differently. That's what we need to be able to get to.
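
As a simple illustration of behavior-based detection rather than signature matching, the sketch below baselines a user's normal activity and flags large deviations. The single feature (records accessed per day) and the three-sigma threshold are assumptions; a production system would combine many more signals.

    import statistics

    def build_baseline(daily_counts):
        """Baseline a user's normal behavior, e.g. records accessed per day."""
        return {"mean": statistics.mean(daily_counts),
                "stdev": statistics.stdev(daily_counts)}

    def is_anomalous(today_count, baseline, z_threshold=3.0):
        """Flag behavior more than `z_threshold` standard deviations above the norm."""
        if baseline["stdev"] == 0:
            return today_count > baseline["mean"]
        return (today_count - baseline["mean"]) / baseline["stdev"] > z_threshold

    history = [42, 38, 51, 45, 40, 47, 44]   # a normal week of record accesses
    baseline = build_baseline(history)
    print(is_anomalous(300, baseline))        # True: flag for investigation
    print(is_anomalous(49, baseline))         # False: within normal behavior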

Getting information

Patterson: I like the illustration that was just used. What we're hoping for with the cloud strategy is that, when there's an attack on one part of the cloud, even if it's against someone else in Citrix or another cloud provider, that intelligence is shared, whereas before we had all these silos that needed to be independently secured.

Now, the windows that are open in these clouds that we're sharing are going to be ways that we can protect each one from the other. So, when one person attacks Citrix a certain way, Azure a certain way, or AWS a certain way, we can collectively close those windows.

What I'd like to see in terms of analytics, and I'll use kind of a mechanical engineering approach, is where the windows are open, where the heat loss went, or where there was air intrusion. I would like to see whether data went to an endpoint that wasn't secured or that I didn't know about. I'd like to know more about what I don't know from my analytics. That's really what I want analytics for, because the things that I know, I know well, but I want my analytics to tell me what I don't know yet.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Citrix.
