Wednesday, May 27, 2015

The Open Group panel explores how standards can help tackle thorny global cybersecurity issues

How can global enterprise cybersecurity be improved for better enterprise integrity and risk mitigation? What constitutes a good standard, or set of standards, to help? And how can organizations work to better detect misdeeds, rather than have attackers on their networks for months before being discovered?

These questions were addressed during a February panel discussion at The Open Group San Diego 2015 conference. Led by moderator Dave Lounsbury, Chief Technology Officer, The Open Group, the speakers included Edna Conway, Chief Security Officer for Global Supply Chain, Cisco; Mary Ann Mezzapelle, Americas CTO for Enterprise Security Services, HP; Jim Hietala, Vice President of Security, The Open Group; and Rance DeLong, Researcher into Security and High-Assurance Systems, Santa Clara University.

Download a copy of the full transcript. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.] 

Here are some excerpts:

Dave Lounsbury: We've heard about the cybersecurity landscape, and, of course, everyone knows about the many recent breaches. Obviously, the challenge in cybersecurity is growing. So I want to start by asking a few questions, directing the first one to Edna Conway.

We've heard about the Verizon Data Breach Investigations Report (DBIR) that catalogs the various attacks made over the past year. One of the interesting findings was that in some of these breaches, the attackers were on the networks for months before being discovered.

What do we need to start doing differently to secure our enterprises?
Attend The Open Group Baltimore 2015
July 20-23, 2015
Early bird registration ends June 19
Edna Conway: There are a couple of things. From my perspective, continuous monitoring is absolutely essential. People don't like it because it requires rigor, consistency, and process. The real question is, what do you continuously monitor?

It's what you monitor that makes a difference. Access control and authentication should absolutely be on our radar screen, but I think the real ticket is behavior. What kind of behavior do you see authorized personnel engaging in that should send up an alert? That's a trend that we need to embrace more.

The second thing that we need to do differently is drive detection and containment. I think we try to do that, but we need to become more rigorous in it. Some of that rigor is around things like, are we actually doing advanced malware protection, rather than just detection?

What are we doing specifically around threat analytics and the feeds that come to us: how we absorb them, how we mine them, and how we consolidate them?

The third thing for me is how we get it right -- the team I call the puzzle solvers. How do we get them together swiftly?

How do you put the right group of experts together when you see a behavior aberration or you get a threat feed that says that you need to address this now? When we see a threat injection, are we actually acting on the anomaly before it makes its way further along in the cycle?

Executive support

Mary Ann Mezzapelle: Another thing that I'd like to add is making sure you have the executive support and processes in place. If you think about how many plans, tests, and other exercises organizations have gone through for business continuity and recovery, you have to apply that same thinking to incident response. We talked earlier about how to get the C-suite involved. We need to have that executive sponsorship and understanding, and that means it's connected to all the other parts of the enterprise.

So it might be the communications, it might be legal, it might be other things, but knowing how to do that and being able to respond to it quickly is also very important.

Rance DeLong: I agree on the monitoring being very important as well as the question of what to monitor. There are advances being made through research in this area, both modeling behavior -- what are the nominal behaviors -- and how we can allow for certain variations in the behavior and still not have too many false positives or too many false negatives.

Also, on a technical level, we can analyze systems for certain invariants, and these can be very subtle and complicated invariant formulas that may be pages long and that hold on the system during its normal operation. A monitor can be watching both for those invariants, the static things, and for changes that are supposed to occur, checking whether they are occurring the way they're supposed to.
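To make the idea concrete, here is a minimal Python sketch of the two kinds of checks DeLong describes -- a behavioral baseline with a tolerance for normal variation, and a static invariant. It is purely illustrative: the metric, thresholds, and invariant are invented for this example and are not from the panel.

```python
from statistics import mean, stdev

# Hypothetical baseline: logins per hour observed for one user during normal operation.
baseline = [4, 5, 3, 6, 4, 5, 4, 3, 5, 4]

def is_anomalous(observed, history, z_threshold=3.0):
    """Flag behavior that deviates too far from the user's own baseline.

    A wider z_threshold tolerates normal variation (fewer false positives)
    at the cost of missing subtler deviations (more false negatives).
    """
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > z_threshold

def invariant_holds(active_sessions, licensed_seats):
    """A toy static invariant: active sessions never exceed licensed seats."""
    return active_sessions <= licensed_seats

# A continuous-monitoring loop would feed real telemetry; these values are made up.
print(is_anomalous(27, baseline))                              # True: a burst worth alerting on
print(invariant_holds(active_sessions=45, licensed_seats=50))  # True: invariant satisfied
```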

Jim Hietala: The only thing I would add is that I think it’s about understanding where you really have risk and being able to measure how much risk is present in your given situation.

In the security industry, there has been a shift in mindset away from figuring that we can actually prevent every bad thing from happening and toward really understanding where people may have gotten into the system. What are the markers that something has gone awry, and how do we react to that in a more timely way -- detective controls, as opposed to purely preventative controls.

Lounsbury: We heard from Dawn Meyerriecks earlier about the convergence of virtual and physical and how that changes the risk management game. And we heard from Mary Ann Davidson about how she is definitely not going to connect her house to the Internet.

So this brings new potential risks and security management concerns. What do you see as the big Internet of Things (IoT) security concerns and how does the technology industry assess and respond to those?

Hietala: In terms of IoT, the thing that concerns me is that many of the things that we've solved at some level in IT hardware, software, and systems seem to have been forgotten by many of the IoT device manufacturers.

We have pretty well-thought-out processes for how we identify assets, patch things, and deal with security events and vulnerabilities that happen. The idea that, particularly in the consumer class of IoT devices, we have devices out there with IP interfaces on them, and many of the manufacturers just haven't given a thought to how they are going to patch something in the field, should scare us all to some degree.

Maybe it is, as Mary Ann mentioned, the idea that there are certain systemic risks that are out there that we just have to sort of nod our head and say that that’s the way it is. But certainly around really critical kinds of IoT applications, we need to take what we've learned in the last ten years and apply it to this new class of devices.

New architectural approach

DeLong: I'd like to add to that. We need a new architectural approach for IoT that will help to mitigate the systemic risks. And echoing the concerns expressed by Mary Ann a few minutes ago: in 2014, Europol, an organization that tracks criminal risks of various kinds, predicted that by the end of 2014 we would see murder by Internet, in the context of the Internet of Things. It didn't happen, but they predicted it, and I think it's not farfetched that we may see it over time.

Lounsbury: What do we really know actually? Edna, do you have any reaction on that one?

Conway: Murder by Internet. That's the question you gave me, thanks. Welcome to being a former prosecutor. The answer is on their derrieres. The reality is, do we have any evidentiary basis to be able to prove that?

I think the challenge is well-taken, and it's one we're probably all in agreement on: the convergence of these devices. We saw the convergence of IT and OT, and we haven't fixed that yet.

We are now moving with IoT into a new scale in the nature and volume of devices. To me, the real challenge will be to come up with new ways of deploying telemetry to allow us to see all the little crevices and corners of the Internet of Things, so that we can identify risks in the same way that we have -- not 100 percent mastered, but certainly tackled -- across computer networks, the network itself, and IT. We're just not there with IoT.

Mezzapelle: Edna, it also brings to mind another thing -- we need to take advantage of the technology itself. As the data gets democratized, meaning it's going to be everywhere -- the velocity, the volume, and so forth -- we need to make sure that those devices can be self-defending, or that maybe they can join together and defend themselves against other things.

So we can't just apply the old-world thinking of being able to know everything and control everything, but to embed some of those kinds of characteristics in the systems, devices, and sensors themselves.

Lounsbury: We've heard about the need. In fact, Ron Ross mentioned the need for increased public-private cooperation to address the cybersecurity threat. Ron, I would urge you to think about including voluntary consensus standards organizations in that essential partnership you mentioned to make sure that you get that high level of engagement, but of course, this is a broad concern to everybody.

President Obama has made a call for legislation on enabling cybersecurity and information sharing, and one of the points within that was shaping a cyber savvy workforce and many other parts of public-private information sharing.

So what more can be done to enable effective public-private cooperation on this and what steps can we, as a consensus organization, take to actually help make that happen? Mary Ann, do you want to tackle that one and see where it goes?

Collaboration is important

Mezzapelle: To your point, collaboration is important, and it's not just about public and private partnership. It also means within an industry sector, in your supply chain, and with third parties. It's not just about the technology; it's also about the processes and being able to communicate effectively, almost at machine speed, in those areas.

So when you think about the people, the processes, and the technology, I don't think it's going to be solved by government alone. I agree with the previous speakers when they were talking about how it needs to be more hand-in-hand.

There are some ways that industry can actually lead that. We have some examples, for instance what we are doing with the Healthcare Forum and with the Mining and Minerals Forum. That might seem like a little bit, but it's that little bit that helps, that brings it together to make it easier for that connection.

It's also important to think about, especially with the class of services and products that are available as a service, another measure of collaboration. Maybe you, as a security organization, determine that your capabilities can't keep up with the bad guys, because they have more money, more time, and more opportunity to take advantage -- either from a financial perspective or maybe even from a competitive perspective, for your intellectual property.

You really can't do it yourself. You need those product vendors or you might need a services vendor to really be able to fill in the gaps, so that you can have that kind of thing on demand. So I would encourage you to think about that kind of collaboration through partnerships in your whole ecosystem.

DeLong: I know that people in the commercial world don't like a lot of regulation, but I think government can provide certain minimal standards that must be met to raise the floor. Not that companies won't exceed these and use that as a basis for competition, but if a minimum is set in regulation, it will raise the whole level of discourse.

Conway: We could probably debate over a really big bottle of wine whether it's regulation or whether it's collaboration. I agree with Mary Ann. I think we need to sit down, ask what the biggest challenges are that we have, and take bold, hairy steps to pull together as an industry -- and that includes government and academia as partners.

But I will give you just one example: ECIDs, the electronic chip IDs that are on some semiconductor devices. There are some semiconductor companies that already use them, and there are some that don't.

A simple concept would be to make sure that those were actually published on an access-controlled basis, so that we could go and see whether the ECID was actually utilized, number one.

Speeding up standards

Lounsbury: Okay, thanks. Jim, I think this next question is about standards evolution. So we're going to send it to someone from a standards organization.

Cybersecurity threats evolve quickly, and protection mechanisms evolve along with them. It's the old attacker-defender arms race. Standards take time to develop, particularly if you use a consensus process. How do we change the dynamic? How do we make sure that the standards keep up with the evolving threat picture? And what more can be done to speed that up and keep it fresh?

Hietala: I'll go back to a series of workshops that we did in the fall around the topic of security automation. In terms of The Open Group's perspective, standards development works best when you have a strong customer voice expressed around the pain points, requirements, and issues.

We did a series of workshops on the topic of security automation with customer organizations. We had maybe a couple of hundred inputs over the course of four workshops, three physical events, and one that we did on the web. We collected that data, and then are bringing it to the vendors and putting some context around a really critical area, which is how do you automate some of the security capabilities so that you are responding faster to attacks and threats.

Generally, with just the idea that we bring customers into the discussion early, we make sure that their issues are well-understood. That helps motivate the vendor community to get serious about doing things more quickly.

One of the things we heard pretty clearly in terms of requirements was that multi-vendor interoperability between security components is pretty critical in that world. It's a multi-vendor world that most of the customers are living with. So building interfaces that are open, where you have got interoperability between vendors, is a really key thing.
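As a purely hypothetical sketch of what that kind of open, multi-vendor interface can look like in practice (this is not an Open Group standard or any vendor's actual API -- the field names are invented), the idea is simply to normalize differently shaped vendor alerts into one common event schema so downstream automation only has to be written once:

```python
from datetime import datetime, timezone

def normalize_vendor_a(alert: dict) -> dict:
    """Vendor A (hypothetical) reports 'src', 'sig', and an epoch timestamp."""
    return {
        "source_ip": alert["src"],
        "signature": alert["sig"],
        "observed_at": datetime.fromtimestamp(alert["ts"], tz=timezone.utc).isoformat(),
        "vendor": "A",
    }

def normalize_vendor_b(alert: dict) -> dict:
    """Vendor B (hypothetical) nests details and uses ISO-8601 timestamps."""
    return {
        "source_ip": alert["details"]["ip"],
        "signature": alert["details"]["rule_name"],
        "observed_at": alert["time"],
        "vendor": "B",
    }

# With a shared schema, correlation, ticketing, and blocking logic can be
# written once instead of once per vendor.
events = [
    normalize_vendor_a({"src": "10.0.0.5", "sig": "port-scan", "ts": 1432684800}),
    normalize_vendor_b({"details": {"ip": "10.0.0.9", "rule_name": "malware-beacon"},
                        "time": "2015-05-27T00:05:00+00:00"}),
]
for event in events:
    print(event["vendor"], event["source_ip"], event["signature"])
```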

DeLong: It's a really challenging problem, because in emerging technologies, where you want to encourage and you depend upon innovation, it's hard to establish a standard. It's still emerging. You don't know what's going to be a good standard. So you hold off and you wait and then you start to get innovation, you get divergence, and then bringing it back together ultimately takes more energy.

Lounsbury: Rance, since you have the microphone, how much of the current cybersecurity situation is attributable to poor blocking and tackling in terms of the basics, like doing security architecture, or even having a method for security architecture, and things like risk management, which of course Jim and the Security Forum have been looking into? And beyond that, what about translating that theory into operational practice and making sure that people are doing it on a regular basis?

DeLong: A report I read on SANS, a US Government-issued report from January 28 of this year, said that many, or most, or all of our critical weapons systems contain flaws and vulnerabilities. One of the main conclusions was that, in many cases, it was due to not taking care of the basics -- the proper administration of systems, the proper application of repairs, patches, vulnerability fixes, and so on. So we need to be able to do it in critical systems as well as on desktops.

Open-source crisis

Mezzapelle: You might consider the open-source code crisis that happened over the past year with Heartbleed, where the benefits of having open-source code are somewhat offset by the disadvantages.

That may be one of the areas where the basics need to be looked at. It's also because those systems were created in an environment where the threats were at an entirely different level. That's a reminder that we need to look at that in our own organizations.

Another thing is in mobile applications, where there is such a rush to get out features and revs that security isn't entirely embedded in the systems lifecycle, or in a new startup company. Those are some of the other areas where we find that the basics, the foundation, need to be solidified to really help enhance security.

Hietala: So in the world of security, it can be a little bit opaque, when you look at a given breach, as to what really happened, what failed, and so on. But enough information has come out about some of the breaches that you get some visibility into what went wrong.
Of the two big insider breaches -- WikiLeaks and then Snowden -- in both cases there were fairly fundamental security controls that should have been in place, or maybe were in place but were poorly performed, that contributed to those breaches -- access control, authorization, and so on.

Even in some of the large retailer credit card breaches, you can point to the fact that they didn’t do certain things right in terms of the basic blocking and tackling.

There's a whole lot of security technology out there, a whole lot of security controls that you can look to, but implementing the right ones for your situation, given the risk that you have and then operating them effectively, is an ongoing challenge for most companies.

Mezzapelle: Can I pose a question? It's one of my premises that sometimes compliance and regulation make companies do things in the wrong areas, to the point where they have a less secure system. What do you think about that, and how does it impact the blocking and tackling?

Hietala: That has probably been true for, say, the four years preceding this, but there was a study just recently -- I couldn’t tell you who it was from -- but it basically flipped that. For the last five years or so, compliance has always been at the top of the list of drivers for information security spend in projects and so forth, but it has dropped down considerably, because of all these high profile breaches. Senior executive teams are saying, "Okay, enough. I don’t care what the compliance regulations say, we're going to do the things we need to do to secure our environment." Nobody wants to be the next Sony.

Mezzapelle: Or the Target CEO who had to step down. Even though they were compliant, they still had a breach, which unfortunately, is probably an opportunity at almost every enterprise and agency that’s out there.

The right eyeballs


DeLong: And on the subject of open source, it’s frequently given as a justification or a benefit of open source that it will be more secure because there are millions of eyeballs looking at it. It's not millions of eyeballs, but the right eyeballs looking at it, the ones who can discern that there are security problems.

It's not necessarily the case that open source is going to be more secure, because it can be viewed by millions of eyeballs. You can have proprietary software that has just as much, or more, attention from the right eyeballs as open source.

Mezzapelle: There are also those million eyeballs out there trying to make money on exploiting it before it does get patched -- the new market economy.

Lounsbury: I was just going to mention that we're now seeing that some large companies are paying those millions of eyeballs to go look for vulnerabilities, strangely enough, which they always find in other people’s code, not their own.

Mezzapelle: With our Zero Day Initiative, part of the business model was to pay people to find things that we could implement into our own products first, but it also made them available to other companies and vendors so that they could fix them before they became public knowledge.

Some of the economics are changing too. They're trying to get the white hatters, so to speak, to look at other parts that are maybe more critical, like what came up with Heartbleed.

Lounsbury: On that point, and I'm going to inject a question of my own if I may, on balance, is the open sharing of information of things like vulnerability analysis helping move us forward, and can we do more of it, or do we need to channel it in other ways?

Mezzapelle: We need to do more of it. It's beneficial. We still have conclaves of secrecy saying that you can give this information to this group of people, but not that group of people, and it's very hard.

In my organization, which is global, I had to look at every last little detail to say, "Can I share it with someone who is a foreigner, or someone who is in my organization, but not in my organization?" It was really hard to try to figure out how we could use that information more effectively. If we can get it more automated to where it doesn't have to be the good old network talking to someone else, or an email, or something like that, it's more beneficial.

And it's not just the vulnerabilities. It's also looking more toward threat intelligence. If you look at the details behind some of the investments by In-Q-Tel, for instance, you see a lot of investment in looking at data in a whole different way.

So we're emphasizing data, both in analytics and in threat prediction -- being able to know when something is going to come over the hill, so you can secure your enterprise, applications, or systems more effectively against it.

Open sharing

Lounsbury: Let’s go down the row. Edna, what are your thoughts on more open sharing?

Conway: We need to do more of it, but we need to do it in a controlled environment.

We can get ahead of the curve with not just predictive analysis, but telemetry, to feed the predictive analysis, and that’s not going to happen because a government regulation mandates that we report somewhere.

So if you look, for example, at DFARS, which came out last year with regard to concerns about counterfeit mitigation and detection in COTS ICT, the reality is that not everybody is a member of GIDEP, and many of us actually share our information faster and more comprehensively than it gets into GIDEP.

I will go back to this: it's rigor in the industry, and sharing in a controlled environment.

Lounsbury: Jim, thoughts on open sharing?

Hietala: Good idea. It gets a little murky when you're looking at zero-day vulnerabilities. There is a whole black market that has developed around those things, where nations are to some degree hoarding them, paying a lot of money to get them, to use them in cyberwar type activities.

There's a great book out now called 'Countdown to Zero Day' by Kim Zetter, a writer from Wired. It gets into the history of Stuxnet and how it was discovered by Symantec and another security research firm whose name I forget. There were a number of zero-day vulnerabilities there that were used in an offensive cyberwar capacity. So it's definitely a gray area at this point.

DeLong: I agree with what Edna said about the parameters of the controlled environment, the controlled way in which it's done. Without naming any names, recently there were some feathers flying over a security research organization establishing some practices concerning a 60- or 90-day timeframe, in which they would notify a vendor of vulnerabilities, giving them an opportunity to issue a patch. In one instance recently, when that time expired and they released it, the vendor was rather upset because the patch had not been issued yet. So what are reasonable parameters of this controlled environment?

Supply chains

Lounsbury: Let’s move on here. Edna, one of the great quotes that came out of the early days of OTTF was that only God creates something from nothing and everybody else is on somebody’s supply chain. I love that quote.

But given that all IT components, or all IT products, are built from hardware and software components, which are sourced globally, what do we do to mitigate the specific risks resulting from malware and counterfeit parts being inserted in the supply chain? How do you make sure that the work to do that is reflected in creating preference for vendors who put that effort into it?

Conway: It's probably three-dimensional. The first part is understanding what your problem is. If you go back to what we heard Mary Ann Davidson talk about earlier today, the reality is what is the problem you're trying to solve?

I'll just use the Trusted Technology Provider Standard as an example of that. Narrowing down what the problem is, where the problem is located, helps you, number one.

Then, you have to attack it from all dimensions. We have a tendency to think about cyber in isolation from the physical, and the physical in isolation from the cyber, and then the logical. For those of us who live in OT or supply chain, we have to have processes that drive this. If those three don't converge and map together, we'll fail, because there will be gaps, inevitable gaps.

For me, it's identifying what your true problem is and then taking a three-dimensional approach to make sure that you always have the security technology, the physical security, and the logical processes interlocking to drive a mitigation scheme that will never reduce your risk to zero, but will identify things.

Particularly think about IoT in a manufacturing environment with the right sensor at the right time and telemetry around human behavior. All of a sudden, you're going to know things before they get to a stage in that supply chain or product lifecycle where they can become devastating in their scope of problem.

DeLong: As one data point, there was a lot of concern over chips fabricated in various parts of the world being used in national security systems. And in 2008, DARPA initiated a program called TRUST, which had a very challenging objective for coming up with methods by which these chips could be validated after manufacture.

Just as one example of the outcome of that, under the IRIS program in 2010, SRI unveiled an infrared laser microscope that could examine the chips at the nanometer level -- for their construction, their functionality, and their likely lifetime, that is, how long they would last before they failed.

Lounsbury: Jim, Mary Ann, reactions?

Finding the real problem

Mezzapelle: The only other thing I wanted to add to Edna's comment is a reiteration about the economics of it and finding where the real problem is. Especially in information technology security, we tend to get so focused on trying to make it technically pure, on avoiding that last 100 percent of risk, that sometimes we forget to put our business ears on and think about what it really means for the business. Is it keeping the business from innovating quickly, adapting to new markets, perhaps getting into a new global environment?

We have to make sure we look back at the business imperatives and make sure that we have metrics all along the road that help us make sure we are putting the investments in the right area, because security is really a risk balance, which I know Jim has a whole lot more to talk about.

Hietala: The one thing I would add to this conversation is that we have sort of been on a journey to where doing a better job of security is a good thing. The question is when is it going to become a differentiator for your product and service in the market. For me personally, a bank that really gets online banking and security right is a differentiator to me as a consumer.

I saw a study quoted this week at the World Economic Forum that said that, by a 2:1 margin, consumers -- and they surveyed consumers in 27 countries -- think that governments and businesses are not paying enough attention to digital security.

So maybe that’s a mindset shift that’s occurring as a result of how bad cybersecurity has been. Maybe we'll get to the point soon where it can be a differentiator for companies in the business-to-business context and a business-to-consumer context and so forth. So we can hope.

Conway: Great point. And just to pivot on that and point out how important it is: what we're seeing now -- and it's a trend, though some cutting-edge folks have been doing it for a while -- is that most boards of directors are looking at creating a digital advisory board for their company. They're recognizing the pervasiveness of digital risk as a risk in its own right, one that sometimes reports up to the audit committee.

I've seen at least 20 or 30 in the last three months come around, asking whether we advise every board member to focus on this from multiple disciplines. If we get that right, it might allow us the opportunity to share information more broadly.

Lounsbury: That’s a really interesting point, the point about multiple disciplines. The next question is unfortunately the final question -- or fortunately, since it will get you to lunch. I am going to start off with Rance.

At some point, security vulnerabilities and other kinds of failures all flow into the big risk analysis that a digital-risk management regime would surface. One of the things going on across the Real-Time and Embedded Systems Forum is looking at how we architect systems for higher levels of assurance -- not just against security vulnerabilities, but against other kinds of failures as well.

The question I will ask here is, if a system fails its service-level agreement (SLA) for whatever reason, whether it's security or some other kind of vulnerability, is that a result of how we do system architecture, of software created without provably secure or provably assured components, or of the system's inability to react to those kinds of failures? If you believe that, how do we change it? How do we accelerate the adoption of better practices in order to mitigate the whole spectrum of risk of failure of the digital enterprise?

Emphasis on protection

DeLong: Well, in high-assurance systems, we obviously still treat detection of problems when they occur, and recovery from problems, as very important, but we put a greater emphasis on prevention, and we try to put greater effort into prevention.

You mentioned provably secure components, but provable security is only part of the picture. When you do a proof, you prove a theorem, and in a system of reasonable complexity, there isn't just one theorem. There are tens, hundreds, or even thousands of theorems that are proved to establish certain properties of the system.

It has to do with proofs of the various parts, proofs of how the parts combine, what are the claims we want to make for the system, how do the proofs provide evidence that the claims are justified, and what kind of argumentation do we use based on that set of evidence.

So we're looking at not just the proofs as little gems, if you will. Think of a proof of a theorem as a gemstone; the question is how they are all combined into creating a system.

If a movie star walked out on the red carpet with a little burlap sack around her neck full of a handful of gemstones, we wouldn’t be as impressed as we are when we see a beautiful necklace that’s been done by a real master, who has taken tens or hundreds of stones and combined them in a very pleasing and beautiful way.

And so we have to put as much attention, not just on the individual gemstones, which admittedly are created with very pure materials and under great pressure, but also how they are combined into a work that meets the purpose.

And so we have assurance cases, we have compositional reasoning, and other things that have to come into play. It’s not just about the provable components and it’s a mistake that is sometimes made to just focus on the proof.
Remember, proof is really just a degree of demonstration, and we always want some demonstration to have confidence in the system, and proof is just an extreme degree of demonstration.
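As a very simplified illustration of the compositional reasoning DeLong refers to (a textbook-style sketch, not something presented on the panel): if component A is proved to guarantee property P, component B is proved to guarantee Q, and P together with Q entails the system-level claim R, then -- setting aside the mutual assumptions and interference that real assume-guarantee reasoning must also discharge -- the composed system satisfies R:

$$\frac{A \models P \qquad B \models Q \qquad (P \land Q) \Rightarrow R}{A \parallel B \models R}$$

The hard work lies in justifying each premise and in showing that the composition step itself is sound, which is what assurance cases and compositional proof frameworks are meant to do.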

Mezzapelle: I think I would summarize it as embedding security early and often, and not depending on it 100 percent. That means you have to make your systems, your processes, and your people resilient.

This has been a special BriefingsDirect presentation and panel discussion from The Open Group San Diego 2015. Download a copy of the transcript. This follows an earlier discussion on cybersecurity standards for safer supply chains. Another earlier discussion from the event focused on synergies among major Enterprise Architecture frameworks, and there was also a presentation by John Zachman, founder of the Zachman Framework.

Copyright The Open Group and Interarbor Solutions, LLC, 2005-2015. All rights reserved.


Tuesday, May 26, 2015

Big data helps Conservation International proactively respond to species threats in tropical forests

This latest BriefingsDirect big data innovation discussion examines how Conservation International (CI) in Arlington, Virginia uses new technology to pursue more data about what's going on in tropical forests and other ecosystems around the world.

As a non-profit, they have a goal of a sustainable planet, and we're going to learn how they've learned to measure what was once unmeasurable -- and then to share that data to promote change and improvement.


Listen to the podcast. Find it on iTunes. Read a full transcript. Download the transcript. Get the mobile app for iOS or Android.

To learn how big data helps manage environmental impact, BriefingsDirect sat down with Eric Fegraus, Director of Information Systems at Conservation International. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: First, tell us the relationship with technology. Conservation International recently announced HP Earth Insights. What is that all about?

Fegraus: HP Earth Insights is a partnership between Conservation International and HP and it's really about using technology to accelerate the work and impact of some of the programs within Conservation International. What we've been able to do is bring the analytics and a data-driven approach to build indices of wildlife communities in tropical forests and to be able to monitor them in near-real-time.

Gardner: I'm intrigued by this concept of being able to measure what was once unmeasurable. What do you mean by that?

Fegraus: This is really a telling line. We really don’t know what’s happening in tropical forests. We know some general things. We can use satellite imagery and see how forests are increasing or decreasing from year to year and from time period to time period. But we really don't know the finer scale measurements. We don't know what's happening within the forest or what animal species are increasing or are decreasing.

There's some technology that we have out in the field that we call camera traps, which take images or photos of the animals as they pass by. There are also some temperature sensors in them. Through that technology and some of the data analytics, we're able to actually evaluate and monitor those species over time.

Inference points

Gardner: One of the interesting concepts that we've seen is that for a certain quantity of data, let's say 10,000 data points, you can get an order of magnitude more inference points. How does that work for you, Eric? Even though you're getting a lot of data, how does that translate into even larger insights?

Fegraus: We have some of the largest datasets in our field in terms of camera trapping data and wildlife communities. But within that, you also have to have a modeling approach to be able to utilize that data, use some of the best statistics, transform that into meaningful data products, and then have the IT infrastructure to be able to handle it and store it. Then, you need the data visualization tools to have those insights pop out at you.
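As a purely illustrative sketch of what one of those data products might look like (the species, sites, and numbers below are invented, and this is not the TEAM Network's actual model, which relies on much more rigorous occupancy statistics), a simple effort-corrected detection-rate index per species and year could be computed like this:

```python
import pandas as pd

# Hypothetical camera-trap records: one row per detection event.
detections = pd.DataFrame({
    "species": ["agouti", "agouti", "ocelot", "agouti", "ocelot", "tapir"],
    "site":    ["CT-01",  "CT-02",  "CT-01",  "CT-01",  "CT-03",  "CT-02"],
    "year":    [2013,     2013,     2013,     2014,     2014,     2014],
})

# Hypothetical sampling effort: camera-trap nights per site and year.
effort = pd.DataFrame({
    "site":        ["CT-01", "CT-02", "CT-03", "CT-01", "CT-02", "CT-03"],
    "year":        [2013,    2013,    2013,    2014,    2014,    2014],
    "trap_nights": [30,      30,      28,      31,      29,      30],
})

# Count detections per species per year, then scale by total effort that year.
counts = detections.groupby(["species", "year"]).size().rename("detections").reset_index()
effort_by_year = effort.groupby("year", as_index=False)["trap_nights"].sum()
index = counts.merge(effort_by_year, on="year")
index["detections_per_100_trap_nights"] = 100 * index["detections"] / index["trap_nights"]

# Year-over-year changes in this index are the kind of trend a monitoring
# program can flag for closer statistical scrutiny.
print(index.sort_values(["species", "year"]))
```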
Become a member of myVertica
Register now
Gain access to the HP Vertica Community Edition
Gardner: So, not only are you involved with HP in terms of the Earth Insights Project, but you're a consumer of HP technology. Tell us a little bit about Vertica and HP Haven, if that also is something you are involved with?

Fegraus: Yes. All of our servers are HP ProLiant servers. We've created an analytical space within our environment using the HP ProLiant servers, as well as HP Vertica. That's really the backbone of our analytical environment. We're also using R and we're now exploring with Distributed R within the Vertica context.

We’re using the HP Cloud for data storage and back up and we’re working on making the cloud a centerpiece for data exchange and analysis for wildlife monitoring. In terms of Haven, we're exploring other parts of Haven, in particular HP Autonomy, and a few other concepts, to help with unstructured data types.
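For readers curious what the Vertica side of such a pipeline might look like, here is a hedged sketch using the vertica-python client. The connection details, table, and column names are placeholders invented for illustration, not Conservation International's actual schema:

```python
import vertica_python

# Placeholder connection details -- substitute your own Vertica deployment.
conn_info = {
    "host": "vertica.example.org",
    "port": 5433,
    "user": "analyst",
    "password": "change-me",
    "database": "wildlife",
}

# Aggregate inside the database, then pull the much smaller summary back
# out for modeling or visualization in R or Python.
query = """
    SELECT species, year, COUNT(*) AS detections
    FROM camera_trap_events
    GROUP BY species, year
    ORDER BY species, year
"""

with vertica_python.connect(**conn_info) as connection:
    cursor = connection.cursor()
    cursor.execute(query)
    for species, year, detections in cursor.fetchall():
        print(species, year, detections)
```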

Gardner: Eric, let’s talk a little bit about what you get when you do good data analytics and how it changes the game in a lot of industries, not just conservation. I'm thinking about being able to project into people’s understanding of change.

So for someone to absorb the understanding that things need to happen in order for things to improve, there is an element of convincing involved. What is big data bringing to the table for you when you go to governments or companies and try to promulgate change in these environments?

Fegraus: From our perspective, what we want to do is get the best available data at the right spatial and temporal scales, the best science, and the right technology. Then, when we package all this together, we can present unbiased information to decision makers, which can lead to hopefully good sustainable development and conservation decisions.

These decision makers can be public officials setting conservation policies or making land use decisions. They can be private companies seeking to value natural capital or assess the impacts of sourcing operations in sensitive ecosystems.

Of course, you never have control over which way legislation and regulations can go, but our goal is to bring that kind of factual information to the people that need it.

Astounding results

Gardner: And one of the interesting things for me is how people are using different data sets from areas that you wouldn't think would have any relationship to one another, but then when you join and analyze those datasets, you can come up with astounding results. Is this the case with you? Are you not only gathering your own datasets but finding the means to jibe that with other data and therefore come up with other levels of empirical analysis?

Fegraus: We are. A lot of the analysis today has been focused on the data that we've collected within our network. Obviously, there are a lot of other kinds of big data sets out there, for example, provided by governments and weather services, that are very relevant to what we're doing. We're looking at trying to utilize those data sets as best we can.
Of course, you also have to be careful. One of the key things we want to do is look for patterns, but we want to make sure that the patterns we're seeing, and the correlations we detect, all make sense within our scientific domain. You don’t want to create false correlations and improbable correlations.

Gardner: And among those correlations that you have been able to determine so far, about 12 percent of species are declining in the tropical forest. This information is thanks to your Tropical Ecology Assessment and Monitoring (TEAM) and HP Earth Insights. And there are many cases not yet perceived as being endangered. So maybe you could just share some of the findings, some of the outcome from all this activity.

Fegraus: We've actually worked up a paper, and that's one of the insights. It's telling, because species are ranked by whether they are considered endangered or not. For species that are considered of “least concern” according to the International Union for Conservation of Nature (IUCN), we assume that they are doing okay.

So you wouldn’t expect to find that those species are actually declining. That can really serve as an early warning, a wake-up call, to protected-area managers and government officials in charge of those areas. There are actually some unexpected things happening here. The things that we thought were safe are not that safe.

Gardner: And, for me, another telling indicator was that, on an aggregate basis, some species are being measured and there isn't any sense of danger or problem, but when you go localized, when you look at specific regions and ecosystems, you develop a different story. Was there an ability for your data gathering to give you tactical insights that are specific to a location?

Fegraus: That’s one of the really nice things about the TEAM Network, a partnership between Conservation International, the Wildlife Conservation Society and the Smithsonian Institution. In a lot of the work that TEAM does, we really work across the globe. Even though we're using the same methodologies, the same standards, whether we are in the Amazon or whether we're in a forest in Asia or Indonesia, we can have results that are important locally.

Then, as you aggregate them through sub-national, national, or even continental-level efforts, that's where we're trying to have the data flow up and down those spatial scales as needed.
For example, even though a particular species may be endangered worldwide we may find that locally, in a particular protected area, that species is stable. This provides important information to the protected area manager that the measures that are in place seem to be working for that species. It can really help in evaluating practices, measuring conservation goals and establishing smart policy.

Sense of confidence

Gardner: I've also spoken to some folks who express a sense of relief that they can go at whatever data they want and have a sense of confidence that they have systems and platforms that can handle the scale and the velocity of that data. It is sort of a freeing attitude that they don’t have to be concerned at the data level. They can go after the results and then determine the means to get the analysis that they need.

Is that something that you also share, that with your partnership with HP and with others, that this is about the determination of the analysis and the science, and you're not limited by some sort of speeds-and-feeds barrier?

Fegraus: This gets to a larger issue within the conservation community, the non-profits, and the environmental consulting firms. Traditionally, IT and technology have been all about keeping the lights on and making sure everyone has a laptop. There's a saying that people can share data, but the problem has really been bringing the technology, analytics, and tools to the programs that are mission-critical -- bringing all of this to the business-driven programs that are really doing the work.

One of the great outcomes of this is that we've pushed that technology to a program like TEAM and we're getting the cutting-edge technology that a program like TEAM needs into their hands, which has really changed the dynamic, compared to the status quo.

Gardner: So scale really isn't the issue any longer. It's now about your priorities and your requirements for the scientific activity?

Fegraus: Yes. It's making sure that technology meets the requirements in scientific and program objectives. And that's going to vary quite a bit depending on the program and the group that we were talking about, but ultimately it’s about enabling and accelerating the mission critical work of organizations like Conservation International.

Listen to the podcast. Find it on iTunes. Read a full transcript. Download the transcript. Get the mobile app for iOS or Android. Sponsor: HP.


Thursday, May 21, 2015

Enterprises opting for converged infrastructure as stepping stone to hybrid cloud

In speaking with a lot of IT users, it has become clear to me that a large swath of the enterprise IT market – particularly the mid-market – falls in between two major technology trends.

The trends are server virtualization and hybrid cloud. IT buyers are in between – with one foot firmly into virtualization – but not yet willing to put the other foot down and commit to full cloud adoption.

IT organizations are thoroughly enamored of virtualization. They are so into the trend that many have more than 80 percent of their server workloads virtualized. They like hybrid cloud conceptually, but are by no means adopting it enterprise-wide. We're talking less than 30 percent of all workloads for typical companies, and a lot of that is via shadow IT and software as a service (SaaS).

In effect, virtualization has spoiled IT. They have grown accustomed to what server virtualization can do for them – including reducing IT total costs – and they want more. But they do not necessarily want to wait for the payoffs by having to implement a lengthy and mysterious company-wide cloud strategy.
Respond to business needs faster
Simplify your IT infrastructure
Find out more about VSPEX BLUE
They want to modernize and simplify how they support existing applications. They want those virtualization benefits to extend to storage, backup and recovery, and be ready to implement and consume some cloud services. They want the benefits of software-defined data centers (SDDC), but they don’t want to invest huge amounts of time, money, and risk in a horizontal, pan-IT modernization approach. And they're not sure how they'll support their new, generation 3 apps. At least not yet.

So while IT and business leaders both like the vision and logic of hybrid cloud, they have a hard time convincing all IT consumers across their enterprise to standardize deployment of existing generation 2 workloads that span private and public cloud offerings.

But they're not sitting on their hands, waiting for an all-encompassing cloud solution miracle covered in pixie dust, being towed into town by a unicorn, either.

Benefits first, strategy second

I've long been an advocate of cloud models, and I fully expect hybrid cloud architectures to become dominant. Practically, however, IT leaders are right now less inclined to wait for the promised benefits of hybrid cloud. They want many of the major attributes of what the cloud models offer – common management, fewer entities to procure IT from, simplicity and speed of deployment, flexibility, automation and increased integration across apps, storage, and networking. They want those, but they're not willing to wait for a pan-enterprise hybrid cloud solution that would involve a commitment to a top-down cloud dictate.

Instead, we’re seeing an organic, bottom-up adoption of modern IT infrastructure in the form of islands of hyper-converged infrastructure appliances (HCIA). By making what amounts to mini-clouds based on the workloads and use cases, IT can quickly deliver the benefits of modern IT architectures without biting off the whole cloud model.

If the hyper-scale data centers that power the likes of Google, Amazon, Facebook, and Microsoft are the generation 3 apps architectures of the future, the path those organizations took is not the path an enterprise can – or should – take.

Your typical Fortune 2000 enterprise is not going to build a $3 billion state-of-the-art data center, designed from soup to nuts to support their specific existing apps, and then place all their IT eggs into that one data center basket. It just doesn’t work that way.

There are remote offices with unique requirements to support, users that form power blocks around certain applications, bean counters that won’t commit big dollars. In a word, there are “political” issues that favor a stepping-stone approach to IT infrastructure modernization. Few IT organizations can just tell everyone else how they will do IT.

The constraints of such IT buyers must be considered as we try to predict cloud adoption patterns over the next few years. For example, I recently chatted with IT leaders in the public sector, at the California Department of Water Resources. They show that what drives their buying is as much about what they don’t have as what they do.

"Our procurement is much harder. Getting people to hire is much harder. We live within a lot of constraints that the private sector doesn’t realize. We have a hard time adjusting our work levels. Can we get more people now? No. It takes forever to get more people, if you can ever get them,” said Tony Morshed, Chief Technology Officer for the California Resources Data Center.

“We’re constantly doing more with less. Part of this virtualization is survivability. We would never be able to survive or give our business the tools they need to do their business without it. We would just be a sinking ship,” he said. “[Converged infrastructure like VMware’s] EVO:RAIL looks pretty nice. I see it as something that we might be able to use for some of our outlying offices, where we have around 100 to 150 people.

"We can drop something like that in, put virtual desktop infrastructure (VDI) on it, and deliver VDI services to them locally, so they don't have to worry about that traffic going over the wide area network (WAN).” [Disclosure: VMware is a sponsor of my BriefingsDirect podcasts].

The California Department of Water Resources has deployed VDI for 800 desktops. Not only is it helping them save money, it's also used as a strategy for remote access. They're in between virtualization and cloud, but they're heralding the less-noticed trend of tactical modernization through hyper-converged infrastructure appliances.

Indeed, VDI deployments that support as many as 250 desktops on a single VSPEX BLUE appliance at a remote office or agency, for example, allow for ease in administration and deployment on a small footprint while keeping costs clear and predictable. And, if the enterprise wants to scale up and out to hybrid cloud, they can do so with ease and low risk.

Stepping stone to cloud

At Columbia Sportswear, there is a similar mentality of moving to cloud gradually while seeking the best of on-premises efficiency and agility.

"With our business changing and growing as quickly as it is, and with us doing business and selling directly to consumers in over a hundred countries around the world, our data centers have to be adaptable. Our data and our applications have to be secure and available, no matter where we are in the world, whether you're on network or off-premises,” said Tim Melvin, Director of Global Technology Infrastructure at Columbia Sportswear.

"The software-defined data center has been a game-changer for us. It’s allowed us to take those technologies, host them where we need them, and with whatever cost configuration makes sense, whether it’s in the cloud or on-premises, and deliver the solutions that our business needs,” he said.
Added Melvin: "When you look at infrastructure and the choice between on-premise solutions, hybrid clouds, public and private clouds, I don't think it's a choice necessarily of which answer you choose. There isn't one right answer. What’s important for infrastructure professionals is to understand the whole portfolio and understand where to apply your high-power, on-premises equipment and where to use your lower-cost public cloud, because there are trade-offs in each case."

Columbia strives to present the correct tool for the correct job. For instance, they have completely virtualized their SAP environment to run on on-premises equipment. For software development, they use a public cloud.

And so the stepping stone to cloud flexibility: to be able to run on-premises workloads like enterprise resource planning (ERP) and VDI with speed, agility, and low cost, and to do so in such a way that some day those workloads could migrate to a public cloud, when that makes sense.

"The closer we get to a complete software-defined infrastructure, the more flexibility and power we have to remove the manual components, the things that we all do a little differently and we can't do consistently. We have a chance to automate more. We have the chance to provide integrations into other tools, which is actually a big part of why we chose VMware as our platform. They allow such open integration with partners that, as we start to move our workloads more actively into the cloud, we know that we won't get stuck with a particular product or a particular configuration,” said Melvin.

"The openness will allow us to adapt and change, and that’s just something you don't get with hardware. If it's software-defined, it means that you can control it and you can morph your infrastructure in order to meet your needs, rather than needing to re-buy every time something changes with the business,” he said.

SDDC-in-a-box

What we're seeing now are more tactical implementations of the best of what cloud models and hyper-scale data center architectures can provide. And we’re seeing these deployments on a use-case basis, like VDI, rather than a centralized IT mandate across all apps and IT resources. These deployments are so tactical that they consist in many cases of a single “box” – an appliance that provides the best of hyper scale and simplicity of virtualization with the cost benefits and deployment ease of a converged infrastructure appliance.

This tactical approach is working because blocks of users and/or business units (or locations) can be satisfied, IT can gain efficiency and retain control, and these implementations can eventually become part of the pan-IT hybrid cloud strategy. Mid-market companies like this model because it means the hyper-converged appliance box is the data center, it can scale down to their needs affordably – not box them in when the time comes to expand – or to move to a hybrid cloud model later.

What newly enables this appealing stepping-stone approach to the hybrid cloud end-game? It’s the principles of SDDC – but without the data center. It’s using virtualization services to augment storage and back-up and disaster recovery (DR) without adopting an entire hybrid cloud model.

The numbers speak to the preferences of IT to adopt these new IT architectures in this fashion. According to IDC, the converged infrastructure segment of the IT market will expand to $17.8 billion in 2016 from $1.4 billion in 2013.


VSPEX BLUE is EVO:RAIL plus EMC's management products.


A recent example of these HCIA parts coming together to serve the tactical apps support strategy and segue to the cloud is the EMC VSPEX BLUE appliance, which demonstrates a new degree to which total convergence can be taken.

The Intel x86 Xeon off-the-shelf hardware went on sale in February, and is powered by VMware EVO:RAIL and EMC's VSPEX BLUE Manager, an integrated management layer that brings entirely new levels of simplicity and deployment ease.

This bundle of capabilities extends the capabilities of EVO into a much larger market, and provides the stepping stone to hyper convergence across mid-market IT shops, and within departments or remote offices for larger enterprises. The VSPEX BLUE manager integrates seamlessly into EVO:RAIL, leveraging the same design principles and UI characteristics as EMC is known for.

What’s more, because EVO:RAIL does not restrict integrations, it can be easily extended via the native element manager. The notion of hyper-converged becomes particularly powerful when it’s not a closed system, but rather an extremely powerful set of components that adjust to many environments and infrastructure requirements.

VSPEX BLUE is based on VMware's EVO:RAIL platform, a software-only appliance platform that supports VMware vSphere hypervisors. By integrating all the elements, the HCIA offers the simplicity of virtualization with the power of commodity hardware and cloud services. EMC and VMware have apparently done a lot of mutual work to up the value-add to the COTS hardware, however.

The capabilities of VSPEX BLUE bring much more than a best-of-breed model alone; there is total cost predictability, simplicity of deployment, and a simplified means of expansion. This, for me, is where the software element of hyper-converged infrastructure is so powerful, while the costs are far below proprietary infrastructure systems and the speed-to-value in actual use is rapid.
For example, VSPEX BLUE can be switched on and begin provisioning virtual machines in less than 15 minutes, says EMC. Plus, EMC integrates its management software with EMC Secure Remote Support, which allows remote system monitoring by EMC to detect and remedy failures before they escalate. So add the best of cloud services to the infrastructure support mix.

Last but not least, the new VSPEX BLUE Market is akin to an “app store” and is populated with access to products and 24x7 support from a single vendor, EMC. This consumer-like experience of a context-appropriate procurement apparatus for appliances in the cloud is unique at this deep infrastructure level. It forms a responsive and well-populated marketplace for the validated products and services that admins need, and creates a powerful ecosystem for EMC and VMWare partners.

EMC and VMware seem to recognize that the market wants to take proven steps, not blind leaps. The mid-market wants to solve their unique problems. To start, VSPEX BLUE offers just three applications: EMC CloudArray Gateway, which helps turn public cloud storage into an extra tier of capacity; EMC RecoverPoint for Virtual Machines, which protects against application outages; and VMware vSphere Data Protection Advanced, which provides disk-based backup and recovery.

Future offerings may include applications such as virus-scanning tools or software for purchasing capacity from public cloud services, and they may come from third parties, but will be validated by EMC.

The way in which these HCIA instances are providing enterprises and mid-market organizations the means to adapt to cloud at their pace, with ease and simplicity, and to begin to exploit public cloud services that support on-premises workloads and reliability and security features, shows that the vendors are waking up. The best of virtualization and the best of hardware integration are creating the preferred on-ramps to the cloud.

Disclosure: VMware is a sponsor of BriefingsDirect podcasts that I host and moderate. EMC paid for travel and lodging for a recent trip I made to EMCWorld.
