Tuesday, March 17, 2015

Health Shared Services BC harnesses a healthcare ecosystem using IT asset management

The next BriefingsDirect innovation panel discussion examines how Health Shared Services BC in Vancouver improves process efficiency and standardization through better integration across health authorities in British Columbia, Canada.

We'll explore how HSSBC has successfully implemented one of the healthcare industry’s first Service Asset and Configuration Management Systems to help them optimize performance of their IT systems and applications.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about how HSSBC gains up-to-date single views of IT assets across a shared-services environment, please join me in welcoming our guests, Daniel Lamb, Project Manager for the ITSM Program, and Cam Haley, Program Manager for the ITSM Program, both at HSSBC. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Gentlemen, tell me first about the context of your challenge. You're an organization that's trying to bring efficiency and process improvements across health authorities in British Columbia. What is it about that task that made better IT service management (ITSM) an imperative?

Haley: If you look at the healthcare space, where it is right now within British Columbia, we have the opportunity to look at using our healthcare funding more efficiently and specifically focus on delivering more clinical outcomes for consumers of the services.

That was one of the main drivers behind the formation of HSSBC, to consolidate some of the key supporting and enabling services into an organization that could deliver a standardized set of service offerings across our health authority clients, so that they can focus on clinical delivery.

That was the key business driver around why we're here and why we're doing some of those things. For us to deliver effectively on that mandate, we need the tools and the process capabilities to deliver more consistent service outcomes, and to reduce costs over the long term so that those costs can be shifted back into clinical delivery and really enable those outcomes.

Necessary system

Gardner: Daniel, why was a Service Asset and Configuration Management System something that was important to accomplish this?
Lamb: We've been in the midst of a large data-center migration project over the past three years, moving a lot of the assets out of Vancouver and into a new data center. We standardized on HP infrastructure up in Kamloops, and when we bring in all of our health authorities' assets, it's going to be upwards of 6,500 to 7,000 servers to manage.
As we merged into the larger organization, the manual processes just don't work anymore. To keep those assets up to date, we needed an automated system. The reason we went for these products, which cover both asset management and service and configuration management, is that this really is our business. We're going to be managing all of these assets and configuration items for the organization, and we're providing these services. So this is where the toolset really fit our goals.

Gardner: So other than scale, size, and the migration, were there any other requirements or problems that you needed to solve that moving into this more modern ITSM capability delivered?

Haley: Just to build on what Daniel said, one of the key drivers in terms of identifying the toolset and the capabilities was to support the migration of infrastructure into the data center.

But along with that, we provide a set of services that go beyond the data center. The tool capability that has been delivered in support of that outcome enables us to focus on optimizing our processes and getting a better view into what's happening in our own environment -- having the configuration items (CIs) in the configuration management database (CMDB), and developing the relationships at the infrastructure level and all the way up to the application or business-service level.

Now we have a view up and down the stack of what's going on. We get better analytics and better data, and we can make better decisions around where we want to focus. What are the pain points that we need to target? We're able to mine that data and really look at opportunities to optimize.

The tool allows us to standardize our processes and roll out the capabilities. Automation is built into the tool, which is fantastic for us in terms of taking that manual overhead out of that and really just allowing us to focus on other things. So it's been great.
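To make the idea of CIs and their relationships concrete, here is a minimal sketch in Python of how a CMDB-style dependency graph might be modeled and queried for impact analysis. The class names, CI types, and sample items are illustrative assumptions, not HSSBC's or HP's actual data model.

    # Minimal, illustrative sketch of a CMDB relationship graph (hypothetical model, not HP's schema)
    from collections import defaultdict

    class ConfigurationItem:
        def __init__(self, name, ci_type):
            self.name = name        # e.g. "PROD-DB-01"
            self.ci_type = ci_type  # e.g. "server", "application", "business_service"

    class CMDB:
        def __init__(self):
            self.items = {}
            self.depends_on = defaultdict(set)  # upper CI -> set of CIs it depends on

        def add(self, name, ci_type):
            self.items[name] = ConfigurationItem(name, ci_type)

        def relate(self, upper, lower):
            # "upper" (e.g. a business service) depends on "lower" (e.g. a server)
            self.depends_on[upper].add(lower)

        def impact_of(self, ci_name):
            # Walk the graph upward to find every CI affected if this one fails.
            affected = set()
            for upper, lowers in self.depends_on.items():
                if ci_name in lowers:
                    affected.add(upper)
                    affected |= self.impact_of(upper)
            return affected

    cmdb = CMDB()
    cmdb.add("PROD-DB-01", "server")
    cmdb.add("Lab Results App", "application")
    cmdb.add("Clinical Reporting", "business_service")
    cmdb.relate("Lab Results App", "PROD-DB-01")
    cmdb.relate("Clinical Reporting", "Lab Results App")
    print(cmdb.impact_of("PROD-DB-01"))  # {'Lab Results App', 'Clinical Reporting'}

A graph like this is what provides the "view up and down the stack" described above: ask about one server and you can see every application and business service that depends on it.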

Gardner: Any unexpected benefits, ancillary benefits, that come from the standardization with this visibility, knowing your organization better that maybe you didn't anticipate?

Up-to-date information

Lamb: We've been able to track down everything that’s out there. That’s one thing. We just didn’t know where everything was or what we had. So in terms of being able to forecast to the health authorities, "This is how much you need to part with for maintenance, that sort of thing," that was always a guess in the past. We now have that up-to-date information available.

This has also laid the platform for us to better take advantage of the new technologies that are coming in. What HP is talking about at the moment, we couldn't really have taken advantage of before, but now we have this base platform. It's going to allow us to take advantage of a lot of the new capabilities that are coming out.

Gardner: So in order to get the efficiency and cost benefits of new infrastructure and converged systems and data center efficiencies, having your ducks lined up and understood is a crucial first step.
Lamb: Definitely.

Gardner: Looking down the road, what’s piquing your interest in terms of what HP is doing or new developments, or does this now allow you to then progress into other areas that you are interested in?

Lamb: Personally, I'm looking at obviously the new versions of the product sets we have at the moment. We've also been speaking to other customers on the success that we've had and giving them some lessons learned on how things worked.

Then, we're looking at some of the other products we could build on to this -- PPM, which is the project and portfolio management toolset, and BSM, which provides unified monitoring and that sort of thing. Putting those products in is where we'll start seeing even more value, in terms of being able to reduce the number of tickets, support costs, and that sort of thing. So we're looking at that.

Then, out of ad-hoc interest, there are the things around big data and so on. I'm just trying to get my head around how that works for us, because we have a lot of data. So some of those new technologies are coming out as well.

Gardner: Cam, given what you've already done, what has it gotten for you? What are some of the benefits and results that you have seen? Are there any metrics of success that you can share with us?

Haley: The first thing is that we're still pretty early in our journey out of the gate, if I just talk about what we've already achieved. One of the things that we have been able to do is enable our staff to be more effective at what they're doing.

We've implemented change management in particular within the toolset, and that's giving us a more robust set of controls around what's actually happening and what's actually going into the environment. That's been really important, not only for the staff, although there is a bit of a learning curve there, but in terms of the outcomes for our clients.

Comfort level

They have a higher comfort level that we have more insight and oversight into what's actually happening in that space, and that we're protecting the services they need to deliver by putting those kinds of capabilities in. So from the process perspective, we've certainly been able to get some benefits in that area in particular.

From a client perspective, putting the toolset in helps us develop that level of trust that we really need in order to have an effective partnering relationship with our clients. That's something that hasn't always been there in the past.

I'm not saying that we're all the way there yet, but we're starting to show that we can deliver the services that the health authorities expect us to deliver, and we are using the toolset to help enable that. That’s also an important aspect.

The other thing is that through the work we've done in terms of consolidating some of our contracts, maintenance agreements, and so on into our asset management system, we have a better view of what we're paying for. We've already realized some opportunities to consolidate some contracts and show some savings as well.
It helps us develop that level of trust that we really need in order to have an effective partnering relationship with our clients.

That's just a number of areas where we're already seeing some benefits. As we start to roll out more of the capabilities of the tool in the coming year and beyond that, we expect that we will get some of those standard metrics that you would typically get out of it. Of course, we'll continue to drive out the ROI value as well. So we're already a good way down that path, and we'll just continue to do that.

Gardner: Any words of wisdom, based on your journey so far, for other organizations that might be struggling with spreadsheets to track all of their assets and devices, and even the processes around IT support? What have you learned? What could you share with someone who is just starting out?
Lamb: We had a few key lessons that we spoke about. One was the guiding principles that you are going to do the implementation by. We were very much of the approach that we would try to keep things as out-of-the-box as possible. HP, as they are doing the new releases, would pick up the functionality that we are looking for. So we didn’t do a lot of tailoring.
And we did the project in short cycles. These projects can sometimes go on for years, and a lot of money can get sunk without value gained. We said, "Let's do these as shorter sprint projects. We'll get something in, we'll start showing value to the organization, and then we'll move on to the next thing." That's the cycle we're working in, and it's worked really well.

The other thing is that we had a great consultant partner that we worked with, and that was key. We were feeling a little lost when we came here last year, and that was one of the things we did. We went to a good consultant partner, Effectual Systems from San Francisco, and that helped us.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Thursday, March 12, 2015

Hackathon model plus big data equals big innovation for Thomson Reuters

The next BriefingsDirect innovation interview explores the use of a hackathon approach to unlock creativity in the search for better use of big data for analytics. We will hear how Thomson Reuters in London sought to foster innovation and derive more value from its vast trove of business and market information.

The result: A worldwide virtual hackathon that brought together developers and data scientists to uncover new applications, visualizations, and services to make all data actionable and impactful.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about getting developers on board the big-data analysis train, BriefingsDirect sat down with Chris Blatchford, Director of Platform Technology in the IT organization at Thomson Reuters in London. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Blatchford: Thomson Reuters is the world's leading source of intelligent information. We provide data across the finance, legal, news, IP and science, and tax and accounting industries through product and service offerings, combining industry expertise with innovative technology.

Gardner: It's hard to think of an organization where data and analysis are more important. It's so core to your very mission.

Blatchford: Absolutely. We take data from a variety of sources. We have our own original data, third-party sources, open-data sources, and augmented information, as well as all of the original content we generate on a daily basis. For example, our journalists in the field provide original news content to us directly from all over the globe. We also have third-party licensed data that we further enrich and distribute to our clients through a variety of tools and services.

Gardner: And therein lies the next trick -- what to do with the data once you have it. About this hackathon, how did you come upon that as an idea to foster innovation?

Big, Open, Linked Data

Blatchford: One of our big projects or programs of work currently is, as everyone else is doing, big data. We have an initiative called BOLD, which is Big, Open, Linked Data, headed up by Dan Bennett. The idea behind the project is to take all of the data that we ingest and host within Thomson Reuters, all of those various sources that I just explained, stream all of that into a central repository, cleanse the data, centralize it, extract meaningful information, and subsequently expose it to the rest of the businesses for use in their specific industry applications.
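As a rough illustration of that ingest-cleanse-extract-expose flow, here is a hedged Python sketch of such a pipeline. The source names, cleansing rules, and toy entity extraction are placeholder assumptions for illustration only, not the actual BOLD implementation.

    # Illustrative sketch of an ingest -> cleanse -> extract -> expose pipeline (not the real BOLD code)
    import re

    def ingest(sources):
        # Stream raw records from every source into one central collection (the "data lake").
        return [record for source in sources for record in source]

    def cleanse(record):
        # Normalize whitespace and drop empty fields.
        return {k: re.sub(r"\s+", " ", v).strip() for k, v in record.items() if v and v.strip()}

    def extract_entities(record):
        # Toy entity extraction: capitalized multi-word phrases stand in for real NLP.
        text = record.get("body", "")
        record["entities"] = re.findall(r"\b(?:[A-Z][a-z]+ )+[A-Z][a-z]+\b", text)
        return record

    def expose(records):
        # In practice this would publish to APIs or search indexes for the business units.
        return {r["id"]: r for r in records}

    news = [{"id": "n1", "body": "Thomson Reuters  reported results in  London today."}]
    legal = [{"id": "l1", "body": "The case cited earlier precedent.", "note": "   "}]
    lake = expose(extract_entities(cleanse(r)) for r in ingest([news, legal]))
    print(lake["n1"]["entities"])  # ['Thomson Reuters']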

As well as creating a central data lake of content, we also needed to provide the tools and services that allow businesses to access the content; here we have both developed our own software and licensed existing tools.

So, we could demonstrate that we could build big-data tools using our internal expertise, and we could demonstrate that we could plug in third-party specific applications that could perform analysis on that data. What we hadn’t proved was that we could plug in third-party technology enterprise platforms in order to leverage our data and to innovate across that data, and that’s where HP came in.

HP was already engaged with us in a number of areas, and I got to speaking with their Big Data Group about their big-data solutions. IDOL OnDemand came up. This is now part of the Haven OnDemand platform. We saw some synergies between what we were doing with the big-data platform and what they could offer us through their IDOL OnDemand APIs. That's where the good stuff started.
Gardner: Software developers, from the very beginning, have had a challenge of knowing their craft, but not knowing necessarily what their end users want them to do with that craft. So the challenge -- whether it’s in a data environment, a transactional environment or interface, or gaming -- has often been how to get the requirements of what you're up to into the minds of the developers in a way that they can work with. How did the hackathon contribute to solving that?

As well as creating a central data lake of content, we also need to provide the tools and services that allow businesses to access the content.
Blatchford: That's a really good question. That's actually one of the biggest challenges for big data in general. We approach big data in one of two ways. First, there are very specific use cases. For example, consider a lawyer working on a particular case for a client; it would be useful for them to analyze prior cases with similar elements. If they're able to extract entities and relevant attributes, they may be able to understand a case's final decision, or glean information that is relevant to their current case.

Then you have the other approach, which is much more about exploration -- discovering new insights, trends, and patterns. That's similar to the approach we wanted to take with the hackathon: provide the data and the tools to our developers and let them just go and play with the data.

We didn’t necessarily want to give them any requirements around specific products or services. It was just, "Look, here is a cool platform with some really cool APIs and some capabilities. Here is some nice juicy data. Tell us what we should be doing? What can we come up with from your perspective on the world?"

A lot of the time, these engineers are overlooked. They're not necessarily the most extroverted of people by the nature of what they do and so they miss chances, they miss opportunities, and that’s something we really wanted to change.

Gardner: It’s fascinating the way to get developers to do what you want them to do is to give them no requirements.

Interesting end products

Blatchford: Indeed. That can result in some interesting end-products. But, by and large, our engineers are more commercially savvy than most, hence we can generally rely on them to produce something that will be compelling to the business. Many of our developers have side projects and personal development projects they work on outside of the realms of their job requirement. We should be encouraging this sort of behavior.

Gardner: So what did you get when you gave them no requirements? What happened?

Blatchford: We had 25 teams that submitted their ideas. We boiled that down to seven finalists based upon a set of preliminary criteria, and out of those seven, we decided upon our first-, second-, and third-place winners. Those three end results were taken forward and are currently going through product review, to potentially be implemented into our product lines.

The overall winner was an innovative UI design for mobile devices, allowing users to better navigate our content on tablets and phones. There was also a sentiment-analysis tool that allowed users to paste in news stories, or any news content from the web, and extract sentiment from that story.

And the other was a more internally focused, administrative exploration tool that allowed us to navigate our own data more intuitively, which perhaps doesn't initially seem as exciting as the other two, but is actually a hugely useful application for us.
Gardner: Now, how does IDOL OnDemand come to play in this? IDOL is the ability to take any kind of information, for the most part, apply a variety of different services to it, and then create analysis as a service. How did that play into the hackathon? How did the developers use that?

Blatchford: Initially, the developers looked at the original 50-plus APIs that IDOL OnDemand provides, and you have everything in there from facial recognition, to OCR, to text analytics, to indexing -- all sorts of cool stuff. Those, in themselves, provided sufficient capabilities to produce some compelling applications, but our developers also used Thomson Reuters APIs and resources to further augment the IDOL platform.

This was very important, as it demonstrated not only that we could plug an enterprise analytics tool into our data, but also that it would fit well with our own capabilities.
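For a sense of how a hackathon team might call one of those hosted text-analysis APIs, here is a hedged Python sketch. The endpoint URL, parameter names, and response fields are illustrative assumptions, not the documented IDOL OnDemand (now Haven OnDemand) interface; consult the platform's own API reference for the real calls.

    # Illustrative sketch of calling a hosted sentiment-analysis API over REST.
    # The URL, parameters, and response shape below are assumptions for illustration only.
    import requests

    API_KEY = "your-api-key"  # hypothetical credential
    ENDPOINT = "https://api.example-haven.invalid/1/api/sync/analyzesentiment/v1"  # placeholder URL

    def analyze_sentiment(text):
        resp = requests.post(ENDPOINT, data={"apikey": API_KEY, "text": text}, timeout=30)
        resp.raise_for_status()
        # Assume the service returns an aggregate sentiment label and score.
        return resp.json().get("aggregate", {})

    if __name__ == "__main__":
        story = "Shares rallied after the company reported stronger-than-expected earnings."
        print(analyze_sentiment(story))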

Gardner: And HP Big Data also had a role in this. How did that provide value?

Five-day effort

Blatchford: The expertise. We should remember we stood this hackathon up, from inception to completion, in a little over one month, and that's, I think, pretty impressive by any measure.

The actual hackathon lasted for five days. We gave the participants a week to get familiar with the APIs, but they really didn't need that long, because the documentation behind the APIs on IDOL OnDemand, and its "try it now" functionality, was amazing. That's what the engineers and the developers were telling me; those aren't my own words.

The Big Data Group was able to stand this whole thing up within a month, a huge amount of effort on HP’s side that we never really saw. That ultimately resulted in a hugely successful virtual global hackathon. This wasn’t a physical hackathon. This was a purely virtual hackathon the world over.

Gardner: HP has been very close to developers for many years, with many tools, leading tools in the market for developers. They're familiar with the hackathon approach. It sounds like HP might have a business in hackathons as a service. You're proving the point here.

For the benefit of our listeners, if someone else out there was interested in applying the same approach, a hackathon as a way of creating innovation, of sparking new thoughts, light bulbs going off in people's heads, or bringing together cultures that perhaps hadn't meshed well in the past, what would you advise them?
First and foremost, the reason we were successful is because we had a motivated, willing partner in HP.

Blatchford: That's a big one. First and foremost, the reason we were successful is because we had a motivated, willing partner in HP. They were able to put the full might of their resources and technology capabilities behind this event, and that, alongside our own efforts, ultimately resulted in the event's success.

That aside, you absolutely need to get the buy-in of the senior executives within an organization, get them to invest into the idea of something as open as a hackathon. A lot of hackathons are quite focused on a specific requirement. We took the opposite approach. We said, "Look, developers, engineers, go out there and do whatever you want. Try to be as innovative in your approach as possible."

Typically, that approach is not seen as cost-effective; businesses like to have defined use cases, but sometimes that can strangle innovation. Sometimes we need to loosen the reins a little.

There are also a lot of logistical checks that can help. Ensure you have clear criteria around hackathon team size and members, event objectives, rules, time frames and so on. Having these defined up front makes the whole event run much smoother.

We ran the organization of the event a little like an Agile project, with regular stand-ups and check-ins. We also stood up a dedicated internal intranet site with all of the information above. Finally, we set up user accounts on the IDOL platform early on, so the participants could familiarize themselves with the technology.

Winning combination

Gardner: Yeah, it really sounds like a winning combination: the hackathon model, big data as the resource to innovate on, and then IDOL OnDemand with 50 tools to apply to that. It’s a very rich combination.

Blatchford: That’s exactly right. The richness in the data was definitely a big part of this. You don’t need millions of rows of data. We provided 60,000 records of legal documents and we had about the same in patents and news content. You don’t need vast amounts of data, but you need quality data.

Then you also need a quality platform -- in this case, IDOL OnDemand. The third piece is what's in the developers' heads. That really was the successful formula.
Gardner: I have to ask. Of course, the pride in doing a good job goes a long way, but were there any other incentives; a new car, for example, for the winning hackathon application of the day?

Blatchford: Yeah, we offered a 1960s Mini Cooper to the winners. No, we didn't. We did offer other incentives. There were three main incentives. The first one, and the most important one in my view, and I think in everyone’s view, was exposure to senior executives within the organization. Not just face time, but promotion of the individual within the organization. We wanted this to be about personal growth as much as it was about producing new applications.

Going back to trying to leverage your resources and give them opportunities to shine, that’s really important. That’s one of the things the hackathon really fostered -- exposing our talented engineers and product managers, ensuring they are appreciated for the work they do.

We also provided an Amazon voucher incentive, and HP offered some of their tablets to the winners. So it was quite a strong winning set.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Tuesday, March 10, 2015

Cybersecurity standards: The Open Group explores security and safer supply chains

Welcome to a special BriefingsDirect presentation and panel discussion from The Open Group San Diego 2015. This follows an earlier discussion from the event last month on synergies among major Enterprise Architecture frameworks with The Open Group.

The latest discussion, examining both the need and the outlook for cybersecurity standards among supply chains, is moderated by Dave Lounsbury, Chief Technology Officer, The Open Group, with guests Mary Ann Davidson, Chief Security Officer, Oracle; Dr. Ron Ross, Fellow at the National Institute of Standards and Technology (NIST); and Jim Hietala, Vice President of Security for The Open Group. Download a copy of the transcript. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:

Dave Lounsbury: Mary Ann Davidson is responsible for Oracle Software Security Assurance and represents Oracle on the Board of Directors for the Information Technology Information Sharing and Analysis Center, and on the international Board of the ISSA.

Dr. Ron Ross leads the Federal Information Security Management Act Implementation Project. It sounds like a big job to fulfill, developing the security standards and guidelines for the federal government.

This session is going to look at the cybersecurity and supply chain landscape from a standards perspective. So Ron and Mary Ann, thank you very much.

Ron Ross: All of us are part of the technology explosion and revolution that we have been experiencing for the last couple of decades.

I would like to have you leave today with a couple of major points, at least from my presentation, things that we have observed in cybersecurity for the last 25 years: where we are today and where I think we might need to go in the future. There is no right or wrong answer to this problem of cybersecurity. It’s probably one of the most difficult and challenging sets of problems we could ever experience.

In our great country, we work on what I call the essential partnership. It's a combination of government, industry, and academia all working together. We have the greatest technology producers, not just in this country, but around the world, who are producing some fantastic things to which we are all "addicted." I think we have an addiction to the technology.

Some of the problems we're going to experience going forward in cybersecurity aren't just going to be technology problems. They're going to be cultural problems and organizational problems. The key issue is how we organize ourselves, what our risk tolerance is, how we are going to be able to accomplish all of our critical missions and business operations that Dawn talked about this morning, and do so in a world that's fairly dangerous. We have to protect ourselves.

Movie app

I think I can sum it up. I was at a movie. I don't go to movies very often anymore, but about a month ago I went to one. I was sitting there waiting for the main feature to start, and they were going through all the coming attractions. Then they came on the PA and said there is an app you can download. I'm not sure whether you've ever seen this, but it tells you, for that particular movie, the optimal time to go to the restroom.

I bring this up because that's a metaphor for where we are today. We are consumed. There are great companies out there, producing great technologies. We're buying it up faster than you can shake a stick at it, and we are developing the most complicated IT infrastructure ever.

So when I look at this problem, I look at it from a scientist's point of view, an engineering point of view. I'm asking myself, knowing what I know about what it takes to -- I don't even use the word "secure" anymore, because I don't think we can ever get there with the current complexity -- build the most secure systems we can, how do we manage risk in the world that we live in?

In the army, we used to have a saying. You go to war with the army that you have, not the army that you want. We’ve heard about all the technology advances, and we're going to be buying stuff, commercial stuff, and we're going to have to put it together into systems. Whether it’s the Internet of Things (IoT) or cyber-physical convergence, it all goes back to some fairly simple things.

The IoT and all this stuff that we're talking about today really gets back to computers. That’s the common denominator. They're everywhere. This morning, we talked about your automobile having more compute power than Apollo 11. In your toaster, your refrigerator, your building, the control of the temperature, industrial control systems in power plants, manufacturing plants, financial institutions, the common denominator is the computer, driven by firmware and software.

When you look at the complexity of the things that we're building today, we've gone past the time when we can actually understand what we have and how to secure it.

That's one of the things that we're going to do at NIST this year and beyond. We've been working in the FISMA world forever it seems, and we have a whole set of standards, and that's the theme of today: how can standards help you build a more secure enterprise?

The answer is that we have tons of standards out there and lots of guidance, whether it's on the federal side with 800-53 or the Risk Management Framework, or all the great things that are going on in the standards world with The Open Group or ISO -- pick your favorite standard.

The real question is how we use those standards effectively to change the current outlook and what we are experiencing today because of this complexity. The adversary has a significant advantage in this world because of complexity. They really can pick the time, the place, and the type of attack, because the attack surface is so large when you talk about not just the individual products.

We have many great companies just in this country and around the world that are doing a lot to make those products more secure. But then they get into the engineering process and put them together in a system, and that really is an unsolved problem. We call it a Composability Problem. I can have a trusted product here and one here, but what is the combination of those two when you put them together in the systems context? We haven’t solved that problem yet, and it’s getting more complicated everyday.

Continuous monitoring

For the hard problems, we in the federal government do a lot of stuff in continuous monitoring. We're going around counting our boxes and we are patching stuff and we are configuring our components. That's loosely called cyber hygiene. It’s very important to be able to do all that and do it quickly and efficiently to make your systems as secure as they need to be.

But even the security controls in our control catalog, 800-53, when you get into the technical controls -- I'm talking about access-control mechanisms, identification, authentication, encryption, and audit -- those things are buried in the hardware, the software, the firmware, and the applications.

Most of our federal customers can’t even see those. So when I ask them if they have all their access controls in place, they can nod their head yes, but they can’t really prove that in a meaningful way.

So we have to rely on industry to make sure those mechanisms, those functions, are employed within the component products that we then will put together using some engineering process.

This is the below-the-waterline problem I talk about. We're in some kind of digital denial today, because below the water line, most consumers are looking at their smartphones, their tablets, and all their apps -- that’s why I used that movie example -- and they're not really thinking about those vulnerabilities, because they can't see them, until it affects them personally.

I had to get three new credit cards last year. I shop at Home Depot and Target, and JPMorgan Chase is our federal credit card. That’s not a pain point for me because I'm indemnified. Even if there are fraudulent charges, I don't get hit for those.

If your identity is stolen, that’s a personal pain point. We haven't reached that national pain point yet. All of the security stuff that we do we talk about it a lot and we do a lot of it, but if you really want to effect change, you're going to start to hear more at this conference about assurance, trustworthiness, and resiliency. That's the world that we want to build and we are not there today.

That's the essence of where I am hoping we are going to go. It's these three areas: software assurance, systems security engineering, and supply-chain risk management.

My colleague Jon Boyens is here today and he is the author, along with a very talented team of coauthors, of the NIST 800-161 document. That's the supply chain risk document.

It's going to work hand-in-hand with another publication that we're still working on, the 800-160 document. We're taking an IEEE and ISO standard, 15288, and trying to infuse security into it. They're coming out with the update of that standard this year. We're trying to infuse security into every step of the lifecycle.

Wrong reasons

The reason why we're not having a lot of success on the cybersecurity front today is that security ends up being addressed either too late or by the wrong people for the wrong reasons.

I'll give you one example. In the federal government, we have a huge catalog of security controls, and they are allocated into different baselines: low, moderate, and high. So you will pick a baseline, you will tailor, and you'll come to the system owner or the authorizing official and say, "These are all the controls that NIST says we have to do." Well, the mission business owner was never involved in that discussion.

One of the things we're going to do with the new document is focus on the software and systems engineering process, starting with the stakeholders and going all the way through requirements analysis, definition, design, development, implementation, operation, and sustainment, all the way to disposal. Critical things are going to happen at every one of those points in the lifecycle.

The beauty of that process is that you involve the stakeholders early. So when those security controls are actually selected they can be traced back to a specific security requirement, which is part of a larger set of requirements that support that mission or business operation, and now you have the stakeholders involved in the process.

Up to this point in time, security has operated in its own vacuum. It's in the little office down the hall, and we go down there whenever there's a problem. But unless and until security gets integrated and we disappear as our own discipline, we need to be part of the Enterprise Architecture -- whether it's TOGAF® or whatever architecture construct you're following -- the systems engineering process, the system development lifecycle, and the acquisition and procurement process.

Unless we have our stakeholders at those tables to influence, we are going to continue to deploy systems that are largely indefensible not against all cyber attacks but against the high-end attacks.

We have to do a better job getting at the C-Suite and I tried to capture the five essential areas that this discussion has to revolve around. The acronym is TACIT, and it just happens to be a happy coincidence that it fit into an acronym. But it's basically looking at the threat, how you configure your assets, and how you categorize your assets with regard to criticality.

How complex is the system you're building? Are you managing that complexity in trying to reduce it, integrating security across the entire set of business practices within the organization? Then, the last component, which really ties into The Open Group, and the things you're doing here with all the projects that were described in the first session, that is the trustworthiness piece.

Are we building products and systems that are, number one, more penetration-resistant to cyber attacks? And number two, since we know we can't stop all attacks -- because we can never reduce complexity to where we thought we could two or three decades ago -- are we building the essential resiliency into those systems? Even when the adversary gets past the boundary and the malware starts to work, how far does it spread, and what can it do?

That's the key question. You try to limit the time on target for the adversary, and that can be done very, very easily with good architectural and engineering solutions. That's my message for 2015 and beyond, at least for a lot of the things at NIST. We're going to start focusing on the architecture and the engineering -- how to really effect change at the ground level.

Processes are important

Now, we'll always have the people, the processes, and the technologies -- this whole ecosystem -- to deal with, and you're always going to have to worry about the sys admins who go bad and dump all the stuff that you don't want dumped on the Internet. But that's part of the system process. Processes are very important because they give us structure, discipline, and the ability to communicate with our partners.

I was talking to Rob Martin from MITRE. He's working on a lot of important projects there with the CWEs and CVEs. They give you the ability to communicate a level of trustworthiness and assurance so that other people can have that dialogue, because without that, we're not going to be communicating with each other. We're not going to trust each other, and that's critical -- having that common understanding. Frameworks provide that common dialogue of security controls and a common process for how we build things and what level of risk we're willing to accept along the way.

These slides, and they’ll be available, go very briefly into the five areas. Understanding the modern threat today is critical because, even if you don't have access to classified threat data, there's a lot of great data out there with Symantec and Verizon reports, and there's open-source threat information available.

If you haven't had a chance to do that, I know the folks who work on the high-assurance stuff in The Open Group RT&ES look at that material a lot, because they're building a capability that is intended to stop some of those types of threats.

The other thing about assets is that we don't do a very good job of criticality analysis. In other words, most of our systems are running, processing, storing, and transmitting data and we’re not segregating the critical data into its own domain where necessary.
Complexity is something that’s going to be very difficult to address because of our penchant for bringing in new technologies.

I know that's hard to do sometimes. People say, "I've got to have all this stuff ready to go 24x7." But when you look at some of the really bad breaches that we've had over the last several years, establishing a domain for critical data, where that domain can be less complex, means you can better defend it, and then you can invest more resources into defending the things that are the most critical.

I used a very simple example of a safe deposit box. I can't get all my stuff into the safe deposit box. So I have to make decisions. I put important papers in there, maybe a coin collection, whatever.  I have locks on my house on the front door, but they're not strong enough to stop some of those bad guys out there. So I make those decisions. I put it in the bank, and it goes in a vault. It’s a pain in the butt to go down there and get the stuff out, but it gives me more assurance, greater trustworthiness. That's an example of the things we have to be able to do.

Complexity is something that’s going to be very difficult to address because of our penchant for bringing in new technologies. Make no mistake about it, these are great technologies. They are compelling. They are making us more efficient. They are allowing us to do things we never imagined, like finding out the optimal time to go to the restroom during a movie, I mean who could have imagined we could do that a decade ago.

But for every one of our customers out there, the kinds of things we're talking about fly below their radar. When you download 100 apps onto your smartphone, people in general, even the good folks in cybersecurity, have no idea where those apps are coming from, what their pedigree is, whether they've been tested at all, whether they've been evaluated, or whether they're running on a trusted operating system.

Ultimately, that's what this business is all about, and that's what 800-161 is all about. It's about a lifecycle of the entire stack from applications, to middleware, to operating systems, to firmware, to integrated circuits, to include the supply chain.

The adversary is all over that stack. They have now figured out how to compromise our firmware, so we've had to come up with firmware-integrity controls in our control catalog. That's the world we live in today.

Managing complexity

I was smiling this morning when we talked about the DNI, the Director of National Intelligence, building their cloud, and whether that's going to go to the public cloud or not. I think Dawn is probably right -- you probably won't see that going to the public cloud anytime soon -- but cloud computing gives us an opportunity to manage complexity. You can figure out what you want to send to the public cloud.

They do a good job through the FedRAMP program of deploying controls and they’ve got a business model that's important to make sure they protect their customers’ assets. So that's built into their business model and they do a lot of great things out there to try to protect that information.

Then, for whatever stays behind in your enterprise, you can start to employ some of the architectural constructs that you'll see here at this conference, some of the security engineering constructs that we’re going to talk about in 800-160, and you can better defend what stays behind within your organization.

So cloud is a way to reduce that complexity. Enterprise Architecture, TOGAF, all of those architectural approaches allow you to bring discipline and structure to thinking about what you're building: how to protect it, how much it's going to cost, and whether it's worth it. That is the essence of good security. It's not about running around with a barrel full of security controls or ISO 27000 saying, "Hey, you've got to do all this stuff, or the sky is going to fall." Those days are over.

Integration we talked about. This is also hard. We are working with stovepipes today. Enterprise Architects typically don't talk to security people. Acquisition folks, in most cases, don't talk to security people.
The message I'm going to send every day is that we have to be more informed consumers. We have to ask for things that we know we need.

I see it every day. You see RFPs go out with a whole long list of requirements, and then, when it comes to security, they say the system or the product they're buying must be FISMA-compliant. They know that's a law and they know they have to do that, but they really don't give the industry or the potential contractors any specificity as to what they need to do to bring that product or system to the state where it needs to be.

And so it's all about expectations. With our industry, whether it's here or overseas, wherever these great companies operate, the one thing we can be sure of is that they want to please their customers. So the message I'm going to send every day is that we have to be more informed consumers. We have to ask for the things that we know we need.

It’s like if you go back with the automobile. When I first started driving a long time ago,  40 years ago, cars just had seatbelts. There were no airbags and no steel-reinforced doors. Then, you could actually buy an airbag as an option at some point. When you fast-forward to today, every car has an airbag, seatbelt, steel-reinforced doors. It comes as part of the basic product. We don't have to ask for it, but as consumers we know it's there, and it's important to us.

We have to start to look at the IT business in the same way, just like when we cross a bridge or fly in an airplane. All of you who flew here in airplanes and came across bridges had confidence in those structures. Why? Because they are built with good scientific and engineering practices.

So least functionality and least privilege are foundational concepts in our world of cybersecurity. But you really can't look at a smartphone or a tablet and talk about least functionality anymore, at least if you're running that movie app and you want to have all of that capability.

The last point about trustworthiness is that we have four decades of best practices in trusted systems development. It failed 30 years ago because we had the vision back then of trusted operating systems, but the technology and the development far outstripped our ability to actually achieve that.

Increasingly difficult

We talked about a kernel-based operating system having 2,000, 3,000, 4,000, 5,000 lines of code and being highly trusted. Well, those concepts are still in place. It’s just that now the operating systems are 50 million lines of code, and so it becomes increasingly difficult.

And this is the key thing. As a society, we're going to have to figure out, going forward, with all this great technology, what kind of world do we want to have for ourselves and our grandchildren? Because with all this technology, as good as it is, if we can’t provide a basis of security and privacy that customers can feel comfortable with, then at some point this party is going to stop.

I don't know when that time is going to come, but I call it the national pain point in this digital denial. We will come to that steady state. We just haven't had enough time yet to get to that balance point, but I'm sure we will.

I talked about the essential partnership, but I don't think we can solve any problem without a collaborative approach, and that's why I use the essential partnership: government, industry, and academia.
But the bottom line is that we have to work together, and I believe that we'll do that.

Certainly all of the innovation, or most of the innovation, comes from our great industry. Academia is critical, because the companies like Oracle or Microsoft want to hire students who have been educated in what I call the STEM disciplines: Science, Technology, Engineering -- whether it's "double e" or computer science -- and Mathematics. They need those folks to be able to build the kind of products that have the capabilities, function-wise, and also are trusted.

And government plays some role -- maybe some leadership, maybe a bully pulpit, cheerleading where we can -- bringing things together. But the bottom line is that we have to work together, and I believe that we'll do that. And when that happens I think all of us will be able to sit in that movie and fire up that app about the restroom and feel good that it's secure.

Mary Ann Davidson: I guess I'm preaching to the converted, if I can use a religious example without offending somebody. One of the questions you asked is, why do we even have standards in this area? Some of them, of course, exist for technical reasons. Crypto, it turns out, is easy for even very smart people to get wrong. Unfortunately, we've had reason to find that out.

So there is technical correctness. Another reason would be interoperability -- getting things to work together in a more secure manner. I've worked in this industry long enough to remember the first SSL implementation -- woo-hoo -- and then it turned out 40 bits wasn't really 40 bits, because it wasn't random enough, shall we say.

Trustworthiness. ISO has a standard for this -- the Common Criteria. We talk about what it means to have secure software, what types of threats it addresses, and how you prove that it does what you say it does. There are standards for that, which helps. It helps everybody. It certainly helps buyers understand a little bit more about what they're getting.

No best practices

And last, but not least, and the reason it’s in quotes, “best practices,” is because there actually are no best practices. Why do I say that -- and I am seeing furrowed brows back there? First of all, lawyers don't like them in contracts, because then if you are not doing the exact thing, you get sued.

There are good practices and there are worst practices. There typically isn't one thing that everyone can do exactly the same way that's going to be the best practice. So that's why that’s in quotation marks.

Generally speaking, I do think standards, particularly in general, can be a force for good in the universe, particularly in cybersecurity, but they are not always a force for good, depending on other factors.

And what is the ecosystem? Well, we have a lot of people. We have standards makers, the people who work on them. Some of them are people who review things. NIST, for example, is very good -- which I appreciate -- about putting drafts out and taking comments, as opposed to saying, "Here it is; take it or leave it." That's actually a very constructive dialogue, which I believe a lot of people appreciate. I know that I do.

Sometimes there are mandators. You'll get an RFP that says, "Verily, thou shalt comply with this, lest thee be an infidel in the security realm." And that can be positive. It can be the leading edge of getting people to do something good that, in many cases, they should do anyway.
You get better products in something that is not a monopoly market. Competition is good.

There are implementers, who have to take this, decipher it, and figure out why they're doing it, and people who make sure that you actually did what you said you were going to do.

And last, but not least, there are weaponizers. What do I mean by that? We all know who they are. They are people who will try to develop a standard and then get it mandated. Actually, it isn’t a standard. It’s something they came up with, which might be very good, but it’s handing them regulatory capture.

And we need to be aware of those people. I like the Oracle database. I have to say that, right? There are a lot of other good databases out there. If I went in and said, purely objectively speaking, everybody should standardize on the Oracle database, because it’s the most secure. Well, nice work if I can get it.

Is that in everybody else’s interest? Probably not. You get better products in something that is not a monopoly market. Competition is good.

So I have an MBA, or had one in a prior life, and they used to talk in the marketing class about the three Ps of marketing. Don’t know what they are anymore; it's been a while. So I thought I would come up with Four Ps of a Benevolent Standard, which are Problem Statement, Precise Language, Pragmatic Solutions, and Prescriptive Minimization.

Economic analysis

And the reason I say this is because of a kind of discussion I have to have a lot of times, particularly with people in the government -- and I'm not saying this in any pejorative way, so please don't take it that way. It's the importance of economic analysis, because nobody can do everything.

So it's being able to say, "I can't boil the ocean, because you're going to boil everything else in it, but I can do these things." If I can do these things, it's very clear what I'm trying to do. It's very clear what the benefit is. We've analyzed it, and it's probably something everybody can do. Then we can get to better.

Better is better than omnibus. Omnibus is something everybody gets thrown under if you make something too big. Sorry, I had to say that.

So, Problem Statement: why is this important? You would think it's obvious, Mary Ann, except that it isn't, because so often in the discussions I have with people, I have to ask: tell me what problem you're worried about. What are you trying to accomplish? If you don't tell me that, then we're going to be all over the map. You say potato and I say "potahto," and the chorus of that song is "let's call the whole thing off."
Buying a crappy product is a risk of doing business. It’s not, per se, a supply chain risk.

I use supply chain as an example, because this one is all over the map. Bad quality? Well, buying a crappy product is a risk of doing business. It's not, per se, a supply chain risk. I'm not saying it's not important, but it's certainly not a cyber-specific supply chain risk.

Bad security: well, that's important, but again, that’s a business risk.

Backdoor bogeyman: this is the popular one. How do I know you didn’t put a backdoor in there? Well, you can't actually, and that’s not a solvable problem.

Assurance, supply chain shutdown: yeah, I would like to know that a critical parts supplier isn’t going to go out of business. So these are all important, but they are all different problems.

So you have to say what you're worried about, and it can't be all of the above. Almost every business has some supplier of some sort, even if it's just healthcare. If you're not careful how you define this, you will be trying to define 100 percent of an entity's business operations. And that's not appropriate.

Use cases are really important, because you may have a Problem Statement. I'll give you one, and this is not to ding NIST in any way, shape, or form, but I just read this. It’s the Cryptographic Key Management System draft. The only reason I cite this as an example is that I couldn't actually find a use case in there.

So whatever its merits, is it trying to define a super-secret key management system for government -- very sensitive cryptographic things you're building from scratch -- or is it trying to define a key management system that we have to use for things like TLS or any encryption that any commercial product does, because that's way out of scope?

So without that, what are you worried about? And also what’s going to happen is somebody is going to cite this in an RFP and it’s going to be, are you compliant with bladdy-blah? And you have no idea whether that even should apply.

Problem Statement

So that Problem Statement is really important, because without that, you can't have that dialogue in groups like this. Well, what are we trying to accomplish? What are we worried about? What are the worst problems to solve?

Precise Language is also very important. Why? Because it turns out everybody speaks a slightly different language, even if we all speak some dialect of geek, and that is, for example, a vulnerability.

If you say vulnerability to my vulnerability handling team, they think of that as a security vulnerability that’s caused by a defect in software.

But I've seen it used to include, "Well, you didn't configure the product properly." I don't know what that is, but it's not a vulnerability, at least not to a vendor. You implemented a policy incorrectly -- it might lead to a vulnerability, but it isn't one. So you see where I'm going with this. If you don't have language that defines the same thing very crisply, you read something, go off and do it, and realize you solved the wrong problem.

I'm very fortunate. One of my colleagues from Oracle works on our hardware, and I also saw a presentation by people in that group at the Cryptographic Conference in November. They talked about how much trouble we got into because, if you say "module" to a hardware person, it's a very different thing from what it means to somebody trying to certify it. This is a huge problem, because again, you say potato, I say "potahto." It's not the same thing to everybody. So it needs to be very precisely defined.

Scope is also important. I don't know why I have to say this a lot, and I'm sure it gets kind of tiresome to the recipients, but COTS isn't GOTS. Commercial software is not government software, and it's actually globally developed. That's the only way you get commercial software that is feature-rich and released frequently: we have access to global talent.

It’s not designed for all threat environments. It can certainly be better, and I think most people are moving towards better software, most likely because we're getting beaten up by hackers and then our customers, and it’s good business. But there is no commercial market for high-assurance software or hardware, and that’s really important, because there is only so much that you can do to move the market.

So even a standards developer, or a big customer like the U.S. government, is an important player in the market for a lot of people, but they're not big enough to move the marketplace on their own, and so you are limited by the business dynamic.

So that's important: you can get to better. I tell people, "Okay, anybody here have a Volkswagen? Is it an MRAP vehicle? No, it's not, is it? You bought a Volkswagen and you got a Volkswagen. You can't take a Volkswagen, drive it around the streets, and expect it to perform like an MRAP vehicle. Even a system integrator, a good one, cannot sprinkle pixie dust over that Volkswagen and turn it into an MRAP vehicle. Those are very different threat environments."

Why would you think commercial software and hardware are any different? They're not. It's exactly the same thing. You might have a really good Volkswagen, and it's great for commuting, but it is never going to perform in an IED environment. It wasn't designed for that, and there is nothing you can do to make it perform in that environment.

Pragmatism

Pragmatism: I really wish anybody working on any standard would do some economic analysis, because economics rules the world. Even if something is a really good idea, time, money, and people, particularly qualified security people, are constrained resources.

So if you make people do something that looks good on paper but is really time-consuming, the opportunity cost is too high. That is, what is the value of something else you could do with those resources that would either cost less or deliver a higher benefit? If you don't do that analysis, then you have people saying, "Hey, that's a great idea. Wow, that's great too. I'd like that." It's like asking your kid, "Do you want candy? Do you want new toys? Do you want more footballs?" instead of saying, "Hey, you have 50 bucks, what are you going to do with it?"

And then there are unintended consequences, because if you make this too complex, you just have fewer suppliers. People will never say, "I'm just not going to bid, because it's impossible"; they simply won't bid. I'm going to give you three examples, and again, I'm trying to be respectful here. This is not to dis anybody who worked on these; in some cases, these things have been modified in subsequent revisions, which I really appreciate. But they are examples of, when you think about it, what were you asking for in the first place?

I think this was in an early version of NISTIR 7622, and it has since been excised. There was a requirement that the purchaser wanted to be notified of personnel changes involving maintenance. Okay, what does that mean?

I know what I think they wanted, which is, if you are outsourcing the human resources for the Defense Department and you move the whole thing to "Hackistan," obviously they would want to be notified. I got that, but that’s not what it said.

So I look at that and say, we have at least 5,000 products at Oracle, and we have billions and billions of lines of code. Every day, somebody checks out a transaction, gets some code, and does some work on it, and they didn't write it in the first place.

So am I going to tweet all of that to somebody? What's that going to do for you? Plus, you have things like the German Works Council. We're going to tell the US Government that Jurgen worked on this line of code? Oh no, that's not going to happen.

So what was it you were worried about? Because that is not sustainable; tweeting people 10,000 times a day with code changes is just going to consume a lot of resources.

Another one, from an early version of something they were trying to do: they wanted to know, for each phase of development for each project, how many foreigners worked on it. What's a foreigner? Is it a Green Card holder? Is it someone who has a dual passport? What is that going to do for you?

Now again, if you had super-custom code for some intelligence application, I can understand there might be cases in which that would matter, but general-purpose software is not one of them. As I said, I can give you that information; we're a big company and we've got lots of resources. A smaller company probably can't. Again, what will it do for you? Because I am taking resources I could be using on something much more valuable and putting them on something really silly.

Last, but not least, and again, with respect, I think I know why this was in there. It might have been the secure engineering draft standard that you came up with that has many good parts to it.

Root cause analysis

I think vendors will probably understand this pretty quickly: Root Cause Analysis. The idea was that if you have a vulnerability, one of the first things you should do is Root Cause Analysis. But if you're a vendor and you have a CVSS 10 security vulnerability in a product that's being exploited, what do you think the first thing you are going to do is?

Get a patch into your customers' hands, or a workaround? Yeah, that's probably the number one priority. Root Cause Analysis, particularly for really nasty security bugs, is also really important. CVSS 0? Who cares. But for a 9 or 10, you should be doing that kind of analysis.

I've got a better one. We have a technology called Java; maybe you've heard of it. We put a lot of work into fixing Java. One of the things we did was not only Root Cause Analysis: for CVSS 9 and higher, they have to go in front of my boss, and every Java developer had to sit through that briefing. How did this happen?

Last but not least, we look for other similar instances: not just the root cause, how did that get in there and how do we avoid it, but where else does this problem exist? I'm not saying this to make us look good; I'm saying it for the analytics. What are you really trying to solve here? Root Cause Analysis is important, but it's important in context. If I have to do it for everything, it's probably not the best use of a scarce resource.
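To illustrate the kind of severity-gated triage being described, here is a minimal sketch in Python. It is not Oracle's actual process; the thresholds, field names, and actions are assumptions made only for the example.

```python
# Illustrative sketch only -- not Oracle's actual process. The thresholds,
# field names, and workflow steps are assumptions for the sake of example.
from dataclasses import dataclass
from typing import List

@dataclass
class VulnReport:
    cve_id: str
    cvss_score: float        # CVSS base score, 0.0-10.0
    exploited_in_wild: bool  # is this being actively exploited?

def triage(report: VulnReport) -> List[str]:
    """Return an ordered list of response actions for a vulnerability report."""
    actions = []

    # First priority is always getting a fix or workaround to customers.
    if report.exploited_in_wild or report.cvss_score >= 9.0:
        actions.append("ship emergency patch or publish workaround")
    else:
        actions.append("schedule fix in next regular patch cycle")

    # Reserve the expensive analysis for the nasty bugs, not every CVSS 0.
    if report.cvss_score >= 9.0:
        actions.append("perform root cause analysis")
        actions.append("search codebase for similar instances (variant analysis)")
        actions.append("brief development teams on how the bug got in")

    return actions

if __name__ == "__main__":
    print(triage(VulnReport("CVE-0000-00000", 9.8, exploited_in_wild=True)))
```

The point of the gate is simply that the expensive analysis is reserved for the bugs where it pays for itself.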
If you mandate too much, it will stifle innovation and it won’t work for people.

My last point is to minimize prescriptiveness, within limits. For example, some people in here probably know how to bake, or maybe you've made a pie. There is no one right way to bake a cherry pie. Some people go down to Ralphs, get a frozen Marie Callender's out of the freezer, stick it in the oven, and they've got a pretty good cherry pie.

Some people make everything from scratch. Some people use a prepared pie crust and they do something special with the cherries they picked off their tree, but there is no one way to do that that is going to work for everybody.

There are best practices for some things. For example, I can say truthfully that a good development practice would not be: number one, just start coding; number two, it compiles without too many errors on the base platform; ship it. That is not good development practice.

If you mandate too much, it will stifle innovation and it won't work for people. Plus, as I mentioned, you will have an opportunity cost if I'm doing something that somebody says I have to do when there is a more innovative way of doing it.

We don't have a single development methodology in Oracle, mostly because of acquisitions. We buy a great company, and we don't tell them, "You know, that agile thing you are doing? That's so last year. You have to do waterfall." That's not going to work very well, but there are good practices even within those different methodologies.

Allowing for different hows is really important. Static analysis is one of them. I think static analysis is pretty much industry practice now, and people should be doing it. Third-party static analysis, though, is really bad. I have been opining about this this morning.

Third-party analysis

Let me just say, I have a large customer, whom I won't name, who used a third-party static analysis service. They broke their license agreement with us, and they're hearing a lot about that from us. Worse, they gave us a report that included vulnerabilities from one of our competitors' products. I don't want to know about those, right? I can't fix them. I did tell my competitor, "You should know this report exists, because I'm sure you want to analyze it."

Here's the worst part. How many of the vulnerabilities the third party found do you think had any merit? Running the tool is nothing; analyzing the results is everything. That customer and that vendor wasted the time of one of our best security leads trying to make sure there was no there there, and there wasn't.
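As a purely illustrative aside on what "analyzing the results" might look like in practice, here is a small Python sketch of a triage pass over raw findings before anyone is asked to act on them. The field names and the confirmed flag are assumptions, not any particular tool's output format.

```python
# Illustrative sketch only: a minimal triage pass over raw static analysis
# findings before anyone is asked to act on them. Field names and the
# "confirmed" flag are assumptions, not any particular tool's output format.
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class Finding:
    rule_id: str
    file_path: str
    line: int
    confirmed: bool  # has a human verified this is a real issue?

def triage_findings(findings: Iterable[Finding], owned_paths: List[str]) -> List[Finding]:
    """Keep only confirmed, deduplicated findings in code we actually own and can fix."""
    seen = set()
    kept = []
    for f in findings:
        # Drop findings in third-party or competitor code we cannot fix.
        if not any(f.file_path.startswith(p) for p in owned_paths):
            continue
        # Drop raw tool output nobody has verified; unanalyzed results waste reviewers' time.
        if not f.confirmed:
            continue
        # Deduplicate repeated reports of the same issue.
        key = (f.rule_id, f.file_path, f.line)
        if key in seen:
            continue
        seen.add(key)
        kept.append(f)
    return kept
```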

So again, last but not least, governments can use their purchasing power in a lot of very good ways, but realize that regulatory things are probably going to lag actual practice. You could be specifying buggy-whip standards when the reality is that nobody uses buggy whips anymore. It's not always about the standard, particularly if you are using resources in a less-than-optimal way.

One of the things I like about The Open Group is that here we have actual practitioners. This is one of the best forums I have seen, because there are people who have actual subject matter expertise to bring to the table, which is so important in saying what is going to work and can be effective.

The last thing I am going to say is a nice thank you to the people in the Trusted Technology Forum, because I appreciate the caliber of my colleagues, and also to Sally Long. They talk about this type of effort as herding cats, and at least for me, it's probably more like herding a snarly cat. I can be very snarly; I'm sure you can pick up on that.

So I truly appreciate the professionalism, the focus, and the targeting: targeting a good slice of the supply-chain problem to make better, not boiling the ocean, but staying very focused and targeted, with very high-caliber participation. So thank you to my colleagues, and particularly thank you to Sally. That's it; I will turn it over to others.

Jim Hietala: We do; we have a few questions from the audience. For the first one, both of you should feel free to chime in. It's something you brought up, Dr. Ross: building security in, looking at software and systems engineering processes. How do you bring industry along in terms of commercial off-the-shelf products and services, especially when you look at things like the IoT, where we have IP interfaces grafted onto all sorts of devices?

Ross: As Mary Ann was saying before, the strength of any standard is really its implementability out there. When we talk about, in particular, the engineering standard, the 15288 extension, if we do that correctly, then for every organization out there that's already using, let's say, a security development lifecycle like the 27034 -- you can pick your favorite standard -- we should be able to reflect those activities in the different lanes of the 15288 processes.

This is a very important point that I got from Mary Ann's discussion. We have to win the hearts and minds and be able to reflect things in a disciplined and structured process that doesn't take people off their current game. If they're doing good work, we should be able to reflect that good work and say, "I'm doing these activities, whether it's the SDL or something else, and this is how they map to the activities we are trying to define in the 15288."

And that can apply to the IoT. Again, it goes back to the computer, whether it's an Oracle database or a Microsoft operating system. It's all about the code, and the discipline and structure of building that software and integrating it into a system. This is where we can really bring together industry, academia, and government and actually do something that we all agree on.

Different take

Davidson: I would have a slightly different take on this, and I know I'm something of a voice crying in the wilderness here. My concern about the IoT goes back to things I learned in business school about financial market theory, which unfortunately was borne out in 2008.

There are certain types of risk you can mitigate. If I cross a busy street, I'm worried about getting hit by a car; I can look both ways and mitigate that. You can't mitigate systemic risk. It means that you have created a fragile system. That is the problem with the IoT, and it is a problem that no amount of engineering will solve.

If it's not a problem, why aren't we giving nuclear weapons IP addresses? Okay, I am not making this up; the Air Force thought about that at one point. You're laughing. Okay, Armageddon: there's an app for that.

That's the problem. I know this is going to happen anyway, whether or not I approve of it, but I really wish that people could look at this not just in terms of how many of these devices there will be and what a great opportunity it is, but in terms of what systemic risk we are creating by doing this.

My house is not connected to the Internet directly, and I do not want somebody to shut my appliances off, shut down my refrigerator, lock it so that I can't get into it, or use it for launching an attack. Those are the discussions we should be having -- at least as much as how we make sure that the people designing these things have a clue.

Hietala: The next question is, how do customers and practitioners value the cost of security? And there's a related question: what can global companies do to get C-suite attention and investment in cybersecurity -- that whole ROI and value discussion?

Davidson: I know they value it, because nobody calls me up and says, "I am bored this week. Don't you have more security patches for me to apply?" That's actually true. We know what it costs us to produce a lot of these patches, and given the amount of resources we spend on that, I would much rather be putting them into building something new and innovative, where we could charge money for it and provide more value to customers.

So it's cost avoidance, number one. Number two, more people have an IT backbone, and they understand the value of having it be reliable. Probably one of the reasons people are moving to clouds is that it's hard to maintain all these systems and hard to find the right people to maintain them. But I also have more customers asking us now about our security practices, which is a case of be careful what you wish for.

I said this 10 years ago: people should be demanding to know what we're doing. Now I am going to spend a lot of time answering RFPs, but that's good. These people are aware of this. They're running their businesses on our stuff, and they want to know what kind of care we're taking to make sure we're protecting their data and their mission-critical applications as if they were ours.

Difficult question

Ross: The ROI question is very difficult with regard to security. I think this goes back to what I said earlier: the sooner we get security out of its stovepipe and integrated as just part of the best practices that we do every day -- whether it's in the development work at a company, in our enterprises as part of mainstream organizational management things like the SDLC, in any engineering work within the organization, or with the Enterprise Architecture group involved -- the better. That integration makes security less of a "hey, I am special" thing and more of just a part of the way we do business.

So customers are looking for reliability and dependability. They rely on this great bed of IT products, systems, and services, and they're not always focused on the security aspects. They just want to make sure it works, and that if there is an attack and malware goes creeping through their system, they are as protected as they need to be. Sometimes that flies way below their radar.

So it's got to be a systemic process and an organizational transformation. I think we have to go through it, and we are not quite there just yet.

Davidson: Yeah, and you really do have to bake it in. I have a team of 45 people -- I've got three more headcount, hoo-hoo -- but we have about 1,600 people in development whose jobs are to be security points of contact and security leads. They're the boots on the ground who implement our program, because I don't want an organization that peers over everybody's shoulder to make sure they are writing good code. That's not cost-effective and not a good way to do it. It's cultural.

One of the ways that you do that is seeding those people in the organization, so they become the boots on the ground and they have authority to do things, because you’re not going to succeed otherwise.

Going back to Java, that was the first discussion I had with one of the executives: this is a cultural thing. Everybody needs to feel that he or she is personally responsible for security, not just those 10 or 20 people, whoever the security weenies are. It's got to be everybody, and when you can do that, you really start to see change in how things happen. Not everybody is going to be a security expert, but everybody has some responsibility for security.

This has been a special BriefingsDirect presentation and panel discussion from The Open Group San Diego 2015. Download a copy of the transcript. This follows an earlier discussion from the event on synergies among major Enterprise Architecture frameworks with The Open Group.
