How can global enterprise cybersecurity be improved for better enterprise integrity and risk mitigation? What constitutes a good standard, or set of standards, to help? And how can organizations work to better detect misdeeds, rather than have attackers on their networks for months before being 
discovered?
These questions were addressed during a February panel discussion at 
The Open Group San Diego 2015 conference. Led by moderator 
Dave Lounsbury, Chief Technology Officer, The Open Group, the speakers included 
Edna Conway, Chief Security Officer for Global Supply Chain, Cisco; 
Mary Ann Mezzapelle, Americas CTO for Enterprise Security Services, HP; 
Jim Hietala, Vice President of Security for The Open Group; and 
Rance DeLong, Researcher into Security and High Assurance Systems, Santa Clara University.
Here are some excerpts: 
Dave Lounsbury: We've heard about the security, 
cybersecurity landscape, and, of course, everyone knows about all the 
many recent breaches. Obviously, 
the challenge is growing in cybersecurity. So, I want to start asking a 
few questions, directing the first one to Edna 
Conway. 
We've heard about the 
Verizon Data Breach Investigations Report (DBIR), which catalogs the various attacks that have been made over the 
past year. One of the interesting findings was that in some of these 
breaches, the attackers were on the networks for months before being 
discovered. 
What do we need to start doing differently to secure our enterprises?
Edna Conway:
 There are a couple of things. From my perspective, continuous 
monitoring is absolutely essential. People don't like it because it 
requires rigor, consistency, and process. The real question is, what do 
you continuously monitor? 
It’s what you monitor that makes a difference. 
Access control
 and authentication should absolutely be on our radar screen, but I 
think the real ticket is behavior. What kind of behavior do you see 
authorized personnel engaging in that should send up an alert? That’s
 a trend that we need to embrace more. 
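On this point about monitoring authorized users' behavior, a minimal sketch of the idea -- baseline each user's own observed activity and alert on deviations -- might look like the following. All names, events, and the alerting rule here are invented for illustration; this is a sketch of the concept, not any panelist's actual system.

```python
from collections import defaultdict

def build_baseline(events):
    """events: iterable of (user, action) pairs from historical logs.
    Returns a per-user set of actions observed during normal operation."""
    baseline = defaultdict(set)
    for user, action in events:
        baseline[user].add(action)
    return baseline

def alerts(baseline, new_events):
    """Yield (user, action) pairs that fall outside that user's own baseline."""
    for user, action in new_events:
        if action not in baseline.get(user, set()):
            yield (user, action)

# Hypothetical log data
history = [("alice", "read:crm"), ("alice", "read:wiki"), ("bob", "read:crm")]
recent = [("alice", "read:crm"), ("bob", "export:payroll")]

print(list(alerts(build_baseline(history), recent)))  # [('bob', 'export:payroll')]
```

A real deployment would model far richer signals (timing, volume, peer groups) to keep false positives down, but the shape -- continuous comparison of authorized activity against an expected profile -- is the point Conway is making.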
The second thing that we need to do differently is 
drive detection and containment. I think we try to do that, but we need 
to become more rigorous in it. Some of that rigor is around things like,
 are we actually doing advanced malware protection, rather than just 
detection? 
What are we doing specifically around 
threat analytics and the feeds that come to us: how we absorb them, how 
we mine them, and how we consolidate them? 
The third thing for me is how we get it right. I call that team the puzzle solvers. How do we get them together swiftly? 
How
 do you put the right group of experts together when you see a behavior 
aberration or you get a threat feed that says that you need to address 
this now? When we see a threat injection, are we actually acting on the 
anomaly before it makes its way further along in the cycle?
Executive support
Mary Ann Mezzapelle:
 Another thing that I'd like to add is making sure you have the 
executive support and processes in place. If you think about how many plans 
and tests organizations have gone through for business continuity 
and recovery, you have to think about incident response the same way. We talked earlier about how to get the 
C-suite
 involved. We need to have that executive sponsorship and understanding,
 and that means it's connected to all the other parts of the enterprise.
 
So it might be the communications, it might be legal,
 it might be other things, but knowing how to do that and being able to 
respond to it quickly is also very important.
Rance DeLong:
 I agree on the monitoring being very important as well as the question 
of what to monitor. There are advances being made through research in 
this area, both modeling behavior -- what are the nominal behaviors -- 
and how we can allow for certain variations in the behavior and still 
not have too many false positives or too many false negatives.
Also,
 on a technical level, we can analyze systems for certain invariants, 
and these can be very subtle and complicated invariant formulas that 
may be pages long and that hold on the system during its normal operation. A 
monitor can watch both for invariants -- these static things -- but 
it can also watch for changes that are supposed to occur and 
whether those are occurring the way they're supposed to. 
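A toy illustration of the invariant monitoring DeLong describes -- checking declared invariants against snapshots of system state -- could look like this. The invariants and state fields are hypothetical; real invariant formulas, as he notes, can run to pages.

```python
def check_invariants(state, invariants):
    """Return the names of invariants that do NOT hold on this state snapshot."""
    return [name for name, predicate in invariants if not predicate(state)]

# Hypothetical invariants a monitor might evaluate continuously
invariants = [
    ("sessions_non_negative", lambda s: s["active_sessions"] >= 0),
    ("queue_bounded", lambda s: s["queue_depth"] <= s["queue_capacity"]),
]

ok_state = {"active_sessions": 3, "queue_depth": 10, "queue_capacity": 64}
bad_state = {"active_sessions": -1, "queue_depth": 99, "queue_capacity": 64}

print(check_invariants(ok_state, invariants))   # []
print(check_invariants(bad_state, invariants))  # ['sessions_non_negative', 'queue_bounded']
```

The same loop can cover DeLong's second case -- expected changes -- by adding predicates over pairs of successive snapshots rather than a single one.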
Jim Hietala:
 The only thing I would add is that I think it’s about understanding 
where you really have risk and being able to measure how much risk is 
present in your given situation. 
In the security industry, there has been a shift in 
mindset away from figuring that we can actually prevent every bad thing 
from happening towards really understanding where people may have gotten
 into the system. What are the markers that something has gone awry, and
 how do we react to that in a more timely way -- detective controls, as 
opposed to purely preventative controls?
Lounsbury:
 We heard from 
Dawn Meyerriecks earlier about the convergence of virtual
 and physical and how that changes the risk management game. And we 
heard from Mary Ann Davidson about how she is definitely not going to 
connect her house to the Internet. 
So this brings new potential risks and security management concerns. What do you see as the big 
Internet of Things (IoT) security concerns and how does the technology industry assess and respond to those? 
Hietala:
 In terms of IoT, the thing that concerns me is that many of the things 
that we've solved at some level in IT hardware, software, and systems 
seem to have been forgotten by many of the IoT device manufacturers. 
We have pretty well thought out processes for how we 
identify assets, we patch things, and we deal with security events and 
vulnerabilities that happen. The idea that, particularly in the consumer
 class of IoT devices, we have devices out there with 
IP
 interfaces on them, and many of the manufacturers just haven’t given a 
thought to how they are going to patch something in the field, 
should scare us all to some degree.
Maybe it is, as 
Mary Ann mentioned, the idea that there are certain systemic risks that 
are out there that we just have to sort of nod our head and say that 
that’s the way it is. But certainly around really critical kinds of IoT 
applications, we need to take what we've learned in the last ten years 
and apply it to this new class of devices.
New architectural approach
DeLong:
 I'd like to add to that. We need a new architectural approach for IoT 
that will help to mitigate the systemic risks. And echoing the concerns 
expressed by Mary Ann a few minutes ago, in 2014, 
Europol,
 an organization that tracks criminal risks of various kinds, 
predicted murder by Internet by the end of 2014, in the context of the 
Internet of Things. It didn't happen, but they predicted it, and I think
 it's not farfetched that we may see it over time.
Lounsbury: What do we really know actually? Edna, do you have any reaction on that one? 
Conway:
 Murder by Internet. That’s the question you gave me, thanks. Welcome to
 being a former prosecutor. The answer is on their derrieres. The 
reality is: do we have any evidentiary basis to be able to prove that? 
I
 think the challenge is well-taken, and one we are probably 
all in agreement on: the convergence of these devices. We saw 
the convergence of IT and OT, and we haven't fixed that yet. 
We
 are now moving with IoT into a new scale in the nature and volume of 
devices. To me, the real challenge will be to come up with new ways of 
deploying telemetry to allow us to see all the little crevices and 
corners of the Internet of Things, so that we can identify risks in the 
same way that we have for computer networks and IT -- which we haven't 
mastered 100 percent, but have certainly tackled. We're just not there with IoT.
Mezzapelle:
 Edna, it also brings to mind another thing -- we need to take advantage
 of the technology itself. So as the data gets democratized, meaning 
it's going to be everywhere -- the velocity, volume, and so forth -- we 
need to make sure that those devices can maybe be self-defendable, or 
maybe they can join together and defend themselves against other things.
So we can't just apply the 
old-world thinking of being able to know everything and control 
everything, but to embed some of those kinds of characteristics in the 
systems, devices, and sensors themselves.
Lounsbury:
 We've heard about the need. In fact, Ron Ross mentioned the need for 
increased public-private cooperation to address the cybersecurity 
threat. Ron, I would urge you to think about including voluntary 
consensus standards organizations in that essential partnership you 
mentioned to make sure that you get that high level of engagement, but 
of course, this is a broad concern to everybody. 
President
 Obama has made a call for legislation on enabling cybersecurity and 
information sharing, and one of the points within that was shaping a 
cyber savvy workforce and many other parts of public-private information
 sharing.
So what more can be done to enable effective 
public-private cooperation on this and what steps can we, as a consensus
 organization, take to actually help make that happen? Mary Ann, do you 
want to tackle that one and see where it goes? 
Collaboration is important
Mezzapelle:
 To your point, collaboration is important and it's not just about the 
public and the private partnership. It also means within an industry 
sector or in your supply chain and third-party. It's not just about the 
technology; it's also about the processes, and being able to communicate
 effectively, almost at machine speed, in those areas.
So
 when you think about the people, the processes, and the technology, I don't 
think it's going to be solved by government. I agree with the 
previous speakers when they were talking about how it needs to be more 
hand-in-hand. 
There are some ways that industry can actually lead that. We have some examples, for instance what we are doing with the 
Healthcare Forum and with the 
Mining and Minerals Forum.
 That might seem like only a little bit, but it's that little bit that helps 
bring it together and make that connection easier.
It's
 also important to think about, especially with the class of services 
and products that are available as a service, another measure of 
collaboration. Maybe you, as a security organization, determine that 
your capabilities can't keep up with the bad guys, because they have 
more money, more time, and more opportunity to take advantage, either from a
 financial perspective or maybe even from a competitive perspective, going after
 your intellectual property.
You really can't do it yourself.
 You need those product vendors or you might need a services vendor to 
really be able to fill in the gaps, so that you can have that kind of 
thing on demand. So I would encourage you to think about that kind of 
collaboration through partnerships in your whole ecosystem.
DeLong:
 I know that people in the commercial world don't like a lot of 
regulation, but I think government can provide certain minimal standards
that must be met to raise the floor. Not that companies won't exceed 
these and compete on that basis, but if a minimum is set in 
regulation, it will raise the whole level of discourse.
Conway:
 We could probably debate over a really big bottle of wine whether it's 
regulation or whether it's collaboration. I agree with Mary Ann. I think
 we need to sit down and ask what are the biggest challenges that we 
have and take bold, hairy steps to pull together as an industry? And 
that includes government and academia as partners. 
But
 I will give you just one example: ECIDs. They are out there and some 
are on semiconductor devices. There are some semiconductor companies 
that already use them, and there are some that don't.
A
 simple concept would be to make sure that those were actually 
published on an access-controlled basis, so that we could go and see whether
 the ECID was actually utilized, number one. 
Speeding up standards
Lounsbury:
 Okay, thanks. Jim, I think this next question is about standards 
evolution. So we're going to send it to someone from a standards 
organization. 
Cybersecurity threats evolve 
quickly, and protection mechanisms evolve along with them. It's the old 
attacker-defender arms race. Standards take time to develop, 
particularly if you use a consensus process. How do we change the 
dynamic? How do we make sure that the standards are keeping up with the 
evolving threat picture? And what more can be done to speed that up and 
keep it fresh?
Hietala: I'll go back to a series
 of workshops that we did in the fall around the topic of security 
automation. In terms of The Open Group's perspective, standards 
development works best when you have a strong customer voice expressed 
around the pain points, requirements, and issues. 
We 
did a series of workshops on the topic of security automation with 
customer organizations. We had maybe a couple of hundred inputs over the
 course of four workshops, three physical events, and one that we did on
 the web. We collected that data, and then are bringing it to the 
vendors and putting some context around a really critical area, which is
 how do you automate some of the security capabilities so that you are 
responding faster to attacks and threats.
Generally,
 with just the idea that we bring customers into the discussion early, 
we make sure that their issues are well-understood. That helps motivate 
the vendor community to get serious about doing things more quickly. 
One
 of the things we heard pretty clearly in terms of requirements was that
 multi-vendor interoperability between security components is pretty 
critical in that world. It's a multi-vendor world that most of the 
customers are living with. So building interfaces that are open, where 
you have got interoperability between vendors, is a really key thing.
DeLong:
 It's a really challenging problem, because in emerging technologies, 
where you want to encourage and you depend upon innovation, it's hard to
 establish a standard. It's still emerging. You don't know what's going 
to be a good standard. So you hold off and you wait and then you start 
to get innovation, you get divergence, and then bringing it back 
together ultimately takes more energy.
Lounsbury:
 Rance, since you have got the microphone, how much of the current 
cybersecurity situation is attributed to poor blocking and tackling in 
terms of the basics, like doing security architecture or even having a 
method to do security architecture, things like risk management, which 
of course Jim and the Security Forum have been looking into? And not 
only that, what about translating that theory into operational practice 
and making sure that people are doing it on a regular basis?
DeLong:
 A report I read on SANS, a US Government report issued on January 28 of
 this year, said that many, most, or all of our critical weapons
 systems contain flaws and vulnerabilities. One of the main conclusions 
was that, in many cases, it was due to not taking care of the basics -- 
the proper administration of systems, the proper application of repairs,
 patches, vulnerability fixes, and so on. So we need to be able to do it
 in critical systems as well as on desktops.
Open-source crisis
Mezzapelle:
 You might consider the open-source code crisis that happened over the 
past year with Heartbleed, where the benefits of having open-source code
 are somewhat offset by the disadvantages. 
That may be 
one of the areas where the basics need to be looked at. It’s also 
because those systems were created in an environment where the threats 
were at an entirely different level. That’s a reminder that we need to 
look at that in our own organizations.
Another thing is 
mobile applications, where there is such a rush to get out features, 
revs, and everything like that, that security isn’t entirely embedded in the 
systems lifecycle -- or in a new startup company. Those are some of 
the other basic areas where we find that the basics, the foundation, 
need to be solidified to really help enhance security in those 
areas.
Hietala: So in the world of security, it 
can be a little bit opaque, when you look at a given breach, as to what 
really happened, what failed, and so on. But enough information has come
 out about some of the breaches that you get some visibility into what 
went wrong. 
Take the two big insider breaches -- 
WikiLeaks
 and then Snowden. In both cases, there were fairly fundamental 
security controls that should have been in place, or maybe were in 
place but were poorly performed, that contributed to those breaches -- access 
control type things, authorization, and so on.
Even in some of 
the large retailer credit card breaches, you can point to the fact that 
they didn’t do certain things right in terms of the basic blocking and 
tackling.
There's a whole lot of security technology 
out there, a whole lot of security controls that you can look to, but 
implementing the right ones for your situation, given the risk that you 
have and then operating them effectively, is an ongoing challenge for 
most companies.
Mezzapelle: Can I pose a 
question? It’s one of my premises that sometimes compliance and 
regulation makes companies do things in the wrong areas to the point 
where they have a less secure system. What do you think about that and 
how that impacts the blocking and tackling?
Hietala:
 That has probably been true for, say, the four years preceding this, 
but there was a study just recently -- I couldn’t tell you who it was 
from -- that basically flipped that. For the last five years or so, 
compliance has always been at the top of the list of drivers for 
information security spend in projects and so forth, but it has dropped 
down considerably, because of all these high profile breaches. Senior 
executive teams are saying, "Okay, enough. I don’t care what the 
compliance regulations say, we're going to do the things we need to do 
to secure our environment." Nobody wants to be the next 
Sony. 
Mezzapelle: Or the 
Target CEO
 who had to step down. Even though they were compliant, they still had a
 breach, which unfortunately, is probably an opportunity at almost every
 enterprise and agency that’s out there.
The right eyeballs
DeLong: And on the subject of open source, 
it’s frequently given as a justification or a benefit of open source 
that it will be more secure because there are millions of eyeballs 
looking at it. It's not millions of eyeballs, but the right eyeballs 
looking at it, the ones who can discern that there are security 
problems. 
It's not necessarily the case that open 
source is going to be more secure, because it can be viewed by millions 
of eyeballs. You can have proprietary software that has just as much, or
 more, attention from the right eyeballs as open source.
Mezzapelle:
 There are also those million eyeballs out there trying to make money on
 exploiting it before it does get patched -- the new market economy.
Lounsbury:
 I was just going to mention that we're now seeing that some large 
companies are paying those millions of eyeballs to go look for 
vulnerabilities, strangely enough, which they always find in other 
people’s code, not their own.
Mezzapelle: With our 
Zero Day
 Initiative, part of the business model was to pay people to 
find things that we could implement into our own products first, but it 
also made findings available to other companies and vendors so that they could
 fix them before they became public knowledge. 
Some of the
 economics are changing too. They're trying to get the white hats, so 
to speak, to look at other parts that are maybe more critical, like what
 came up with 
Heartbleed.
Lounsbury:
 On that point, and I'm going to inject a question of my own if I may, 
on balance, is the open sharing of information of things like 
vulnerability analysis helping move us forward, and can we do more of 
it, or do we need to channel it in other ways?
Mezzapelle:
 We need to do more of it. It's beneficial. We still have conclaves of 
secretness saying that you can give this information to this group of 
people, but not this group of people, and it's very hard. 
In
 my organization, which is global, I had to look at every last little 
detail to say, "Can I share it with someone who is a foreigner, or 
someone who is in my organization, but not in my organization?" It was 
really hard to try to figure out how we could use that information more 
effectively. If we can get it more automated to where it doesn't have to
 be the good old network talking to someone else, or an email, or 
something like that, it's more beneficial.
And it's not
 just the vulnerabilities. It's also looking more towards threat 
intelligence. You see a lot of investment, if you look at the details 
behind some of the investments in 
In-Q-Tel, for instance, about looking at data in a whole different way. 
So
 we're emphasizing data, both in analytics as well as threat prediction,
 being able to know where some thing is going to come over the hill and 
you can secure your enterprise or your applications or systems more 
effectively against it.
Open sharing
Lounsbury: Let’s go down the row. Edna, what are your thoughts on more open sharing?
Conway: We need to do more of it, but we need to do it in a controlled environment. 
We
 can get ahead of the curve with not just predictive analysis, but 
telemetry, to feed the predictive analysis, and that’s not going to 
happen because a government regulation mandates that we report 
somewhere. 
Look, for example, at 
DFARS, which came out last year with regard to concerns about counterfeit mitigation and detection in 
COTS ICT, the reality is not everybody is a member of 
GIDEP, and many of us actually share our information faster than it gets into GIDEP and more comprehensively. 
I will go back to this: it’s rigor in the industry, and sharing in a controlled environment.
Lounsbury: Jim, thoughts on open sharing?
Hietala:
 Good idea. It gets a little murky when you're looking at zero-day 
vulnerabilities. There is a whole black market that has developed around
 those things, where nations are to some degree hoarding them, paying a 
lot of money to get them, to use them in cyberwar type activities.
There's a great book out now called ‘
Countdown to Zero Day’ by 
Kim Zetter, a writer from Wired. It gets into the history of 
Stuxnet and how it was discovered, and 
Symantec,
 and I forget the other security research firm that found it. There 
were a number of zero-day vulnerabilities there that were used in an 
offensive cyberwar capacity. So it’s definitely a gray area at this 
point.
DeLong: I agree with what Edna said about
 the parameters of the controlled environment, the controlled way in 
which it's done. Without naming any names, recently there were some 
feathers flying over a security research organization establishing some 
practices concerning a 60- or 90-day timeframe, in which they would 
notify a vendor of vulnerabilities, giving them an opportunity to issue a
 patch. In one instance recently, when that time expired and they 
released it, the vendor was rather upset because the patch had not been 
issued yet. So what are reasonable parameters of this controlled 
environment?
Supply chains
Lounsbury: Let’s move on here. Edna, one of the great quotes that came out of the early days of 
OTTF was that only God creates something from nothing and everybody else is on somebody’s supply chain. I love that quote. 
But
 given that all IT components, or all IT products, are built from 
hardware and software components, which are sourced globally, what do we
 do to mitigate the specific risks resulting from malware and 
counterfeit parts being inserted in the supply chain? How do you make 
sure that the work to do that is reflected in creating preference for 
vendors who put that effort into it?
Conway: 
It's probably three-dimensional. The first part is understanding what 
your problem is. If you go back to what we heard Mary Ann Davidson talk 
about earlier today, the reality is what is the problem you're trying to
 solve?
I'll just use the 
Trusted Technology Provider Standard as an example of that. Narrowing down what the problem is, where the problem is located, helps you, number one.
Then,
 you have to attack it from all dimensions. We have a tendency to think 
about cyber in isolation from the physical, and the physical in 
isolation from the cyber, and then the logical. For those of us who live
 in OT or supply chain, we have to have processes that drive this. If 
those three don't converge and map together, we'll fail, because there 
will be gaps, inevitable gaps. 
For me, it's 
identifying what your true problem is and then taking a 
three-dimensional approach to make sure that you always have security 
technology, the combination of the physical security, and then the 
 logical processes interlocking to drive a mitigation scheme that
 will never reduce the risk to zero, but will identify things. 
Particularly
 think about IoT in a manufacturing environment with the right sensor at
 the right time and telemetry around human behavior. All of a sudden, 
you're going to know things before they get to a stage in that supply 
chain or product lifecycle where they can become devastating in 
scope.
DeLong: As one data point, 
there was a lot of concern over chips fabricated in various parts of the
 world being used in national security systems. And in 2008, 
DARPA initiated a program called 
TRUST, which had the very challenging objective of coming up with methods by which these chips could be validated after manufacture. 
Just as one example of the outcome of that, under the 
IRIS Program
 in 2010, SRI unveiled an infrared laser microscope that could examine 
the chips at the nanometer level for construction, functionality, 
and likely lifetime -- how long they would last before they 
failed.
Lounsbury: Jim, Mary Ann, reactions?
Finding the real problem
Mezzapelle:
 The only other thing I wanted to add to Edna’s comment was reiteration 
about the economics of it and finding where the real problem is. 
Especially in information technology security, we 
tend to get so focused on trying to make it technically pure, on 
avoiding 100 percent of the risk, that sometimes we forget to put our 
business ears on and think about what that really means for the 
business. Is it keeping them from innovating quickly, adapting to new 
markets, perhaps getting into a new global environment? 
We
 have to make sure we look back at the business imperatives and make 
sure that we have metrics all along the road that help us make sure we 
are putting the investments in the right area, because security is 
really a risk balance, which I know Jim has a whole lot more to talk 
about.
Hietala: The one thing I would add to 
this conversation is that we have sort of been on a journey to where 
doing a better job of security is a good thing. The question is when is 
it going to become a differentiator for your product and service in the 
market. For me personally, a bank that really gets online banking and 
security right is a differentiator to me as a consumer.
I saw a study that was quoted this 
week at the World Economic Forum that said that, by a 2:1 margin, 
consumers -- and they surveyed consumers in 27 countries -- think that 
governments and businesses are not paying enough attention to digital 
security. 
So maybe that’s a mindset shift that’s 
occurring as a result of how bad cybersecurity has been. Maybe we'll get
 to the point soon where it can be a differentiator for companies in the
 business-to-business context and a business-to-consumer context and so 
forth. So we can hope.
Conway: Great point. And 
just to pivot on that and point out how important it is: what 
we are seeing now -- it’s a trend, and some 
cutting-edge folks have been doing it for a while -- is that most boards 
of directors are looking at creating a digital advisory board for their 
company. They're recognizing the pervasiveness of digital risk as its 
own risk, one that sometimes reports up to the audit committee. 
I've
 seen at least 20 or 30 in the last three months come around, asking: 
do you advise board members to focus on this from multiple 
disciplines? If we get that right, it might give us the opportunity to
 share information more broadly.
Lounsbury: 
That’s a really interesting point, the point about multiple disciplines.
 The next question is unfortunately the final question -- or 
fortunately, since it will get you to lunch. I am going to start off 
with Rance.
At some point, failures -- whether 
security vulnerabilities or other kinds -- all flow into 
the big risk analysis that a digital-risk management regime would 
surface. One of the things that’s going on across the 
Real-Time and Embedded Systems Forum
 is to look at how we architect systems for higher levels of assurance, 
not just security vulnerabilities, but other kinds of failures as well.
The question I will ask here is, if a system fails its 
service-level agreement (SLA)
 for whatever reason, whether it’s security or some other kind of 
vulnerability, is that a result of our ability to do system architecture,
 of software created without provably secure or provably assured 
components, or of the system's ability to react to those kinds of 
failures? If you believe it is, how do we change it? How do we accelerate
 the adoption of better practices in order to mitigate the whole 
spectrum of risk of failure of the digital enterprise?
Emphasis on protection
DeLong:
 Well, in high assurance systems, we obviously still treat detection 
of problems when they occur, and recovery from them, as very important,
 but we put a greater emphasis, and try to put greater 
effort, into prevention. 
You mentioned provably secure 
components, but provable security is only part of the picture. When you 
do a proof, you prove a theorem, and in a system of 
reasonable complexity, there isn’t just one theorem. There are tens, 
hundreds, or even thousands of theorems that are proved to establish 
certain properties in the system. 
It has to do with 
proofs of the various parts, proofs of how the parts combine, what are 
the claims we want to make for the system, how do the proofs provide 
evidence that the claims are justified, and what kind of argumentation 
do we use based on that set of evidence.
So we're 
looking not just at the proofs as little gems, if you will -- a proof of a
 theorem, think of it as a gemstone -- but at how they are all combined into 
creating a system.
If a movie star walked out on the 
red carpet with a little burlap sack around her neck full of a handful 
of gemstones, we wouldn’t be as impressed as we are when we see a 
beautiful necklace that’s been done by a real master, who has taken tens
 or hundreds of stones and combined them in a very pleasing and 
beautiful way. 
And so we have to put as much 
attention, not just on the individual gemstones, which admittedly are 
created with very pure materials and under great pressure, but also how 
they are combined into a work that meets the purpose. 
And
 so we have assurance cases, we have compositional reasoning, and other 
things that have to come into play. It’s not just about the provable 
components and it’s a mistake that is sometimes made to just focus on 
the proof.
Remember,
 proof is really just a degree of demonstration, and we always want some
 demonstration to have confidence in the system, and proof is just an 
extreme degree of demonstration.
Mezzapelle: I think I 
would summarize it by embedding security early and often, and don’t 
depend on it 100 percent. That means you have to make your systems, your
 processes and your people resilient.
This has been a special BriefingsDirect presentation and panel discussion from The Open Group San Diego 2015. This follows an earlier discussion on cybersecurity standards for safer supply chains. Another earlier discussion from the event focused on synergies among major Enterprise Architecture frameworks, and a presentation by John Zachman, founder of the Zachman Framework.
Copyright The Open Group and Interarbor Solutions, LLC, 
2005-2015. All rights reserved.