Thursday, February 24, 2011

Open Group cloud panel forecasts cloud as spurring useful transition phase for enterprise architecture

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: The Open Group.

Welcome to a special discussion on predicting how cloud computing will actually unfold for enterprises and their core applications and services in the next few years. Part of The Open Group 2011 Conference in San Diego the week of Feb. 7, a live, on-stage panel examined the expectations of new types of cloud models -- and perhaps cloud specialization requirements -- emerging quite soon.

By now, we're all familiar with the taxonomy around public cloud, private cloud, software as a service (SaaS), platform as a service (PaaS), and my favorite, infrastructure as a service (IaaS). But we thought we would do you all an additional service and examine, firstly, where these general types of cloud models are actually gaining use and allegiance, and look at vertical industries and types of companies that are leaping ahead with cloud. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Then, second, we're going to look at why one-size-fits-all cloud services may not fit so well in a highly fragmented, customized, heterogeneous, and specialized IT world -- which is, of course, the world most of us live in.

How many of the cloud services that come with a true price benefit -- usually cheap and at scale -- will be able to replace what is actually on the ground in many complex and unique enterprise IT organizations? Can a few types of cloud work for all of them?

Here to help us better understand the quest for "fit for purpose" cloud balance and to predict, at least for some time, the considerable mismatch between enterprise cloud wants and cloud provider offerings, is our panel: Penelope Gordon, co-founder of 1Plug Corp., based in San Francisco; Mark Skilton, Director of Portfolio and Solutions in the Global Infrastructure Services with Capgemini in London; Ed Harrington, Principal Consultant in Virginia for the UK-based Architecting the Enterprise organization; Tom Plunkett, Senior Solution Consultant with Oracle in Huntsville, Alabama, and TJ Virdi, Computing Architect in the CAS IT System Architecture Group at Boeing based in Seattle. The discussion was moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Gordon: A lot of companies don’t even necessarily realize that they're using cloud services, particularly when you talk about SaaS. There are a number of SaaS solutions that are becoming more and more ubiquitous.

I see a lot more of the buying of cloud moving out to the non-IT line-of-business executives. If that accelerates, there is going to be less and less central IT focus on those purchases. Companies are really separating now what is differentiating and core to their business from the rest of it.

There's going to be less emphasis on, "Let’s do our scale development on a platform level" and more, "Let’s really seek out those vendors that are going to enable us to effectively integrate, so we don’t have to do double entry of data between different solutions. Let's look out for the solutions that allow us to apply the governance and that effectively let us tailor our experience with these solutions in a way that doesn’t impinge upon the provider’s ability to deliver in a cost effective fashion."

That’s going to become much more important. So, a lot of the development onus is going to be on the providers, rather than on the actual buyers.

Enterprise architects need to break out of the idea of focusing on how to address the boundary between IT and the business and talk to the business in business terms.

One way of doing that that I have seen as effective is to look at it from the standpoint of portfolio management. Where you were once familiar with financial portfolio management, you are now looking at a service portfolio, as well as at your overall business and all of your business processes as a portfolio. How can you optimize at a macro level for your portfolio of all the investment decisions you're making, and how the various processes and services are enabled? Then, it comes down to a money issue.
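
To make the portfolio idea concrete, here is a minimal sketch, not from the panel, of ranking a service portfolio so investment follows differentiation; the services, costs, and scores are invented:

```python
# Illustrative only: score each service by how much it differentiates the
# business, then rank the portfolio. Low-differentiation services become
# candidates for commodity cloud sourcing.

services = [
    # name, annual cost ($), differentiation (0-10), strategic value (0-10)
    ("payroll",            250_000, 1, 3),
    ("email",              120_000, 1, 2),
    ("supply-chain-optim", 900_000, 9, 9),
    ("crm",                300_000, 5, 7),
]

def sourcing_hint(differentiation: int) -> str:
    """Low-differentiation services are candidates for commodity cloud."""
    return "cloud candidate" if differentiation <= 3 else "keep differentiated"

# Rank so investment flows to what differentiates the business.
for name, cost, diff, value in sorted(services, key=lambda s: -(s[2] * s[3])):
    print(f"{name:20} ${cost:>9,}  {sourcing_hint(diff)}")
```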

Shadow IT

Harrington: We're seeing a lot of cloud uptake in the small businesses. I work for a 50-person company. We have one "sort of" IT person and we do virtually everything in the cloud. We have people in Australia and Canada, here in the States, headquartered in the UK, and we use cloud services for virtually everything across that. I'm associated with a number of other small companies and we are seeing big uptake of cloud services.

We talked about line management IT getting involved in acquiring cloud services. If you think we've got this thing called "shadow IT" today, wait a few years. We're going to have a huge problem with shadow IT.

From the architect’s perspective, there's a lot to be involved with and a lot to play with. There's an awful lot of analysis to be done -- what value is the proposed cloud solution going to supply to the organization in business terms, versus the risk associated with it? Enterprise architects deal with change, and that’s what we're talking about. We're talking about change, and change will inherently involve risk.

The federal government has done some analysis. In particular, the General Services Administration (GSA), has done some considerable analysis on what they think they can save by going to, in their case, a public cloud model for email and collaboration services. They've issued a $6.7 million contract to Unisys as the systems integrator, with Google being the cloud services supplier.

So, the debate over the benefits of cloud, versus the risks associated with cloud, is still going on quite heatedly.

Skilton: From personal experience, there are probably three areas of adoption of cloud in businesses. For sure, there are horizontal common services to which what you'd call a homogeneous cloud solution could be applied, common to a number of business units or operations across a market.

But we're increasingly starting to see the need for customization to meet the vertical competitive needs of a company or the decisions within that large company. So, differentiation and business models are still there; they are still in platform cloud as they were in the pre-cloud era.

But the key thing is that we're seeing a different kind of potential in what a business can do now with cloud -- a more elastic, explosive expansion and contraction of a business model. We're seeing, fundamentally, the operating model of the business changing, and the industry can change, using cloud technology.

So, there are two things going on: the business and the technologies are both changing because of the cloud.

... There are two more key points. There's a missing architecture practice that needs to be there, which is workload analysis, so that you design applications to fit specific infrastructure containers and you've got a bridge between the application service and the infrastructure service. There needs to be a piece of work by enterprise architects (EAs) that starts to bring that together as a deliberate design for applications to be able to operate in the cloud. And the PaaS platform is a perfect environment.
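
As a rough illustration of the workload-analysis practice Skilton describes, here is a minimal sketch that characterizes an application's workload and matches it to an assumed infrastructure "container." The container profiles and thresholds are hypothetical, not an Open Group method:

```python
# Hedged sketch: map a workload profile to an infrastructure "container".
# The two container profiles and the variability threshold are invented.

CONTAINERS = {
    "burst":  {"max_cpu": 64, "elastic": True},   # scale-out PaaS pool
    "steady": {"max_cpu": 16, "elastic": False},  # fixed virtual cluster
}

def fit_container(peak_cpu: int, variability: float) -> str:
    """Pick a container; variability is the peak-to-average load ratio."""
    if variability > 3.0:                # spiky loads want elastic capacity
        return "burst"
    if peak_cpu <= CONTAINERS["steady"]["max_cpu"]:
        return "steady"
    return "burst"                       # too big for the fixed cluster

print(fit_container(peak_cpu=40, variability=5.2))  # -> burst
print(fit_container(peak_cpu=8,  variability=1.4))  # -> steady
```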

The second thing is that there's a lack of policy management in terms of technical governance, partly because of a lack of understanding. There needs to be more of a matching exercise going on. The key thing is that that needs to evolve.

Part of the work we're doing in The Open Group with the Cloud Computing Work Group is to develop new standards and methodologies that bridge those gaps between infrastructure, PaaS, platform development, and SaaS.

Plunkett: Another place we're seeing a lot of growth with regard to private clouds is actually on the defense side. The U.S. Defense Department is looking at private clouds, but they also have to deal with this core and context issue. The requirements for a [Navy] shipboard system are very different from the land-based systems.

Ships have to deal with narrow bandwidth and going disconnected. They also have to deal with coalition partners, or perhaps they're providing humanitarian assistance and are dealing even with organizations we wouldn’t normally consider military. So they have to deal with lots of information assurance issues and have completely different governance concerns than we normally think about for public clouds.

We talked about the importance of governance increasing as the IT industry went into SOA. Well, cloud is going to make it even more important. Governance throughout the lifecycle, not just at the end, not just at deployment, but from the very beginning.

You mentioned variable workloads. Another place where we are seeing a lot of customers approach cloud is when they are starting a new project. Because then, they don’t have to migrate from the existing infrastructure. Instead everything is brand new. That’s the other place where we see a lot of customers looking at cloud, your greenfields.

Virdi: I think what we are really looking [to cloud] for is speed to put new products into the market, or to evolve the products that we already have, and to optimize business operations while reducing cost. These may be parallel across vertical industries, where all these things are probably going to be working as a cloud solution.

How to measure and create a new product or solution is the really cool thing you would be looking for in the cloud. And it has proven pretty easy to put a new solution into the market. So, speed is also the big thing there.

All these business decisions are going to be coming upstream, and business executives need to be more aware of how cloud could be utilized as a delivery model. Enterprise architects and others with a technical background need to educate them or drive them to make the right decisions and choose the proper solutions.

It has an impact on how you want to use the cloud, as well as on how you get out of it, in case you want to move to a different cloud vendor or provider. All those things come into play upstream, rather than downstream.

You probably also have to figure out how you plan to adopt the cloud. You don’t want to start with a Big Bang. You want to start in incremental, small steps and test out what you really want to do. If that works, then go do the other things after that.

Gordon: One example in talking about core and context is when you look in retail. You can have two retailers like a Walmart or a Costco, where they're competing in the same general space, but are differentiating in different areas.

Walmart is really differentiating on the supply chain, and so it’s not a good candidate for public cloud computing solutions. That might possibly be a candidate for private cloud computing.

But that’s really where they're going to invest in the differentiating, as opposed to a Costco, where it makes more sense for them to invest in their relationship with their customers and their relationship with their employees. They're going to put more emphasis on those business processes, and they might be more inclined to outsource some of the aspects of their supply chain.

Hard to do it alone

Skilton: The lessons we're learning in running private clouds for our clients center on the need to have much more of a running-IT-as-a-business ethos and approach. We find that if customers try to do it themselves, they may find it difficult, because they're used to buying that as a service, or they have to change their enterprise architecture and support service disciplines to operate the cloud.

Also, fundamentally the data center and network strategies need to be in place to adopt cloud. From my experience, the data center transformation or refurbishment strategies or next generation networks tend to be done as a separate exercise from the applications area. So a strong, strong recommendation from me would be to drive a clear cloud route map to your data center.

Harrington: Again, we're back to the governance, and certification of some sort. I'm not in favor of regulation, but I am in favor of some sort of third-party certification of services that consumers can rely upon safely. But, I will go back to what I said earlier. It's a combination of governance, treating the cloud services as services per se, and enterprise architecture.

Plunkett: What we're seeing with private cloud is that it’s actually impacting governance, because one of the things that you look at with private cloud is charge-back between different internal customers. This is forcing these organizations to deal with complexity, money, and business issues that they don't really like to deal with.

Nowadays, it's mostly vertical applications, where you've got one owner who is paying for everything. Now, we're actually going back to dealing with some of the tricky issues of SOA, as we were talking about earlier.
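
As one concrete illustration of the charge-back mechanics Plunkett mentions, here is a hedged sketch that meters usage per internal customer and multiplies it by unit rates; the rates and usage records are invented:

```python
# Toy charge-back: bill internal departments for metered private-cloud use.

RATES = {"cpu_hours": 0.08, "gb_stored": 0.10}  # $ per unit, hypothetical

usage = [
    {"dept": "finance",   "cpu_hours": 12_000, "gb_stored": 500},
    {"dept": "logistics", "cpu_hours": 40_000, "gb_stored": 2_000},
]

for record in usage:
    # Sum rate * quantity over every metered dimension for this department.
    charge = sum(rate * record[unit] for unit, rate in RATES.items())
    print(f"{record['dept']:10} owes ${charge:,.2f} this period")
```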

Securing your data

Virdi: Private clouds actually allow you to make the business more modular. Your capabilities are going to be a little bit more modular, and interoperability testing could happen in the private cloud. Then you can actually use those same kinds of modular functions, utilize the public cloud, and work with other commercial off-the-shelf (COTS) vendors that can package these as new holistic solutions.

Configuration and change management -- how we're adapting to them in the private cloud and supporting different customer segments -- is really the key. This could be utilized in the public cloud too, as well as how you're really securing your information, data, or business knowledge. How you want to secure that is key, and that's why the private cloud is there. If we can adapt to or mimic the same kind of controls in the public cloud, maybe we'll have more adoption in the public cloud too.

Gordon: I also look at it in a little different way. For example, in the U.S., you have the National Security Agency (NSA). For a lot of what you would think of as their non-differentiating processes, for example payroll, they can't use ADP. They can't use that SaaS for payroll, because they can't allow the identities of their employees to become publicly known.

Anything that involves their employee data and all the rest of the information within the agency has to be kept within a private cloud. But, they're actively looking at private cloud solutions for some of the other benefits of cloud.

In one sense, I look at it and say that private cloud adoption to me tells a provider that this is an area that's not a candidate for a public-cloud solution. But, private clouds could also be another channel for public cloud providers to be able to better monetize what they're doing, rather than just focusing on public cloud solutions.

Impact on mergers and acquisitions

Plunkett: Not to speak on behalf of Oracle, but we've gone through a few mergers and acquisitions recently, and I do believe that having a cloud environment internally helps quite a bit. Specifically, TJ made the earlier point about modularity. Well, when we're looking at modules, they're easier to integrate. It’s easier to recompose services, and all the benefits of SOA really.

Gordon: If you're going to effectively consume and provide cloud services, you become much more rigorous about your change management and your configuration management, and you then apply that out at a larger process level. Some of this comes back to some of the discussions we were having about the extra discipline that comes into play.

So, if you define certain capabilities within the business in a much more modular fashion, then, when you go through that growth and add on people, you have documented procedures and processes. It’s much easier to bring someone in and say, "You're going to be a product manager, and that job role is fungible across the business."

That kind of thinking, the cloud constructs applied up at a business architecture level, enables the kind of business expansion that we are looking at.

Harrington: [As for M&As], it depends a lot on how close the organizations are, how close their service portfolios are, to what degree each of the organizations has adopted the cloud, and whether that is going to cause conflict as well. So I think there is potential.

Skilton: Right now, I'm involved in merging in a cloud company that we bought last year in May ... It’s kind of a mixed blessing with cloud. With our own cloud services, we acquire these new companies, but we still have the same IT integration problem to then exploit that capability we've acquired.

Each organization in the commercial sector can have different standards, and then you still have that interoperability problem that we have to translate into a benefit -- the post-merger integration issue. It’s not plug-and-play yet, unfortunately.

But the upside is that I can bundle the service that we acquired, because we wanted to get that additional capability and rewrite design techniques for cloud computing. We can then launch that bundle of new services faster into the market.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: The Open Group.


Monday, February 21, 2011

The Open Trusted Technology Provider Framework Aims at Nothing Less Than Securing the Global IT Supply Chain

This guest post is courtesy of Andras Szakal, IBM Distinguished Engineer and Director of IBM's Federal Software Architecture team.

By Andras Szakal

Nearly two months ago, we announced the formation of The Open Group Trusted Technology Forum (OTTF), a global standards initiative among technology companies, customers, government and supplier organizations to create and promote guidelines for manufacturing, sourcing, and integrating trusted, secure technologies.

The OTTF’s purpose is to shape global procurement strategies and best practices to help reduce threats and vulnerabilities in the global supply chain. I’m proud to say that we have just completed our first deliverable toward achieving our goal: The Open Trusted Technology Provider Framework (O-TTPF) whitepaper.

The framework outlines industry best practices that contribute to the secure and trusted development, manufacture, delivery and ongoing operation of commercial software and hardware products. Even though the OTTF has only recently been announced to the public, the framework and the work that led to this whitepaper have been in development for more than a year: first as a project of the Acquisition Cybersecurity Initiative, a collaborative effort facilitated by The Open Group between government and industry verticals under the sponsorship of the U.S. Department of Defense (OUSD (AT&L)/DDR&E).

The framework is intended to benefit technology buyers and providers across all industries and across the globe concerned with secure development practices and supply chain management. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

More than 15 member organizations joined efforts to form the OTTF as a proactive response to the changing cyber security threat landscape, which has forced governments and larger enterprises to take a more comprehensive view of risk management and product assurance. Current members of the OTTF include Atsec, Boeing, Carnegie Mellon SEI, CA Technologies, Cisco Systems, EMC, Hewlett-Packard, IBM, IDA, Kingdee, Microsoft, MITRE, NASA, Oracle, and the U.S. Department of Defense (OUSD(AT&L)/DDR&E), with the forum operating under the stewardship and guidance of The Open Group.

Over the past year, OTTF member organizations have been hard at work collaborating, sharing and identifying secure engineering and supply chain integrity best practices that currently exist. These best practices have been compiled from a number of sources throughout the industry including cues taken from industry associations, coalitions, traditional standards bodies and through existing vendor practices. OTTF member representatives have also shared best practices from within their own organizations.

From there, the OTTF created a common set of best practices distilled into categories and eventually categorized into the O-TTPF whitepaper. All this was done with a goal of ensuring that the practices are practical, outcome-based, aren’t unnecessarily prescriptive and don’t favor any particular vendor.

The framework

The diagram below outlines the structure of the framework, divided into categories that show the hierarchy of how the OTTF arrived at the best practices it created.

[Figure: O-TTPF framework category hierarchy]
Trusted technology provider categories

Best practices were grouped by category because the types of technology development, manufacturing or integration activities conducted by a supplier are usually tailored to suit the type of product being produced, whether it is hardware, firmware, or software-based. Categories may also be aligned by manufacturing or development phase so that, for example, a supplier can implement a secure engineering/development method if necessary.

Provider categories outlined in the framework include:
  • Product engineering/development method
  • Secure engineering/development method
  • Supply chain integrity method
  • Product evaluation method
  • Establishing conformance and determining accreditation
In order for the best practices set forth in the O-TTPF to have a long-lasting effect on securing product development and the supply chain, the OTTF will define an accreditation process. Without an accreditation process, there can be no assurance that a practitioner has implemented practices according to the approved framework.
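
Purely as an illustration of what a conformance self-check might eventually look like (the O-TTPF accreditation process had not yet been defined when this was written), here is a toy sketch that marks which framework categories an organization can evidence; the category names come from the whitepaper, everything else is assumed:

```python
# Hypothetical self-assessment against the O-TTPF provider categories.

CATEGORIES = [
    "product engineering/development method",
    "secure engineering/development method",
    "supply chain integrity method",
    "product evaluation method",
]

evidence = {  # invented provider's evidence on file
    "product engineering/development method": True,
    "secure engineering/development method": True,
    "supply chain integrity method": False,
    "product evaluation method": True,
}

gaps = [c for c in CATEGORIES if not evidence.get(c)]
print("ready for assessment" if not gaps else f"gaps to close: {gaps}")
```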

After the framework is formally adopted as a specification, The Open Group will establish conformance criteria and design an accreditation program for the O-TTPF. The Open Group currently manages multiple industry certification and accreditation programs, operating some independently and some in conjunction with third party validation labs. The Open Group is uniquely positioned to provide the foundation for creating standards and accreditation programs. Since trusted technology providers could be either software or hardware vendors, conformance will be applicable to each technology supplier based on the appropriate product architecture.

At this point, the OTTF envisions a multi-tiered accreditation scheme, which would allow for many levels of accreditation, including enterprise-wide accreditation or accreditation of a specific division. An accreditation program of this nature could provide alternative routes to claiming conformity to the O-TTPF.

Over the long-term, the OTTF is expected to evolve the framework to make sure its industry best practices continue to ensure the integrity of the global supply chain. Since the O-TTPF is a framework, the authors fully expect that it will evolve to help augment existing manufacturing processes rather than replace existing organizational practices or policies.

There is much left to do, but we’re already well on the way to ensuring the technology supply chain stays safe and secure. If you’re interested in shaping the Trusted Technology Provider Framework best practices and accreditation program, please join us in the OTTF.

Download the O-TTPF paper, or read the O-TTPF in full here.

This guest post is courtesy of Andras Szakal, IBM Distinguished Engineer and Director of IBM's Federal Software Architecture team.


Friday, February 18, 2011

Explore the role and impact of the Open Trusted Technology Forum to help ensure secure IT products in global supply chains

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Get the free white paper. Sponsor: The Open Group.

Join a panel discussion that examines The Open Group’s new Open Trusted Technology Forum (OTTF), established in December to find ways to safely conduct global procurement and supply-chain commerce among and between technology suppliers and buyers.

The forum is tasked with providing transparency, collaboration, innovation, and more trust in the partners and market participants in an IT supplier environment. The goal is for the OTTF to reduce business risk in global supply activities in the IT field. [Get the new free OTTF white paper.]

Presented in conjunction with The Open Group Conference held in San Diego, the week of Feb. 7, the panel of experts examines how the OTTF will function, what its new framework will be charged with providing, and ways that participants in the global IT commerce ecosystem can become involved with and perhaps use the OTTF’s work to their advantage.

Here with us to delve into the mandate and impact of the Open Trusted Technology Forum is the panel: Dave Lounsbury, Chief Technology Officer for The Open Group; Steve Lipner, Senior Director of Security Engineering Strategy in Microsoft’s Trustworthy Computing Group; Andras Szakal, Chief Architect in IBM’s Federal Software Group and an IBM distinguished engineer, and Carrie Gates, Vice President and Research Staff Member at CA Labs. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Lounsbury: The OTTF is a group that came together under the umbrella of The Open Group to identify and develop standards and best practices for trusting supply chains. It's about how one consumer in a supply chain could trust their partners and how they will be able to indicate their use of best practices in the market, so that people who are buying from the supply chain or buying from a specific vendor will be able to know that they can procure this with a high level of confidence.

This actually started a while ago at The Open Group by a question from the U.S. Department of Defense (DoD), which faced the challenge of buying commercial off-the-shelf product. Obviously, they wanted to take advantage of the economies of scale and the pace of technology in the commercial supply chain, but realized that means they're not going to get purpose-built equipment, that they are going to buy things from a global supply chain.

If you pick up any piece of technology, for example, it will be designed in the US, assembled in Mexico, and built in China. So we need that international and global dimension in production of this set of standards as well.

They asked, "What would we look for in these things that we are buying to know that people have used good engineering practices and good supply chain management practices? Do they have a good software development methodology? What would be those indicators?"

Now, that was a question from the DoD, but everybody is on somebody’s supply chain. People buy components. The big vendors buy components from smaller vendors. Integrators bring multiple systems together.

So, this is a really broad question in the industry. Because of that, we felt the best way to address it was to bring together a broad spectrum of industry to come in, identify the practices that they have been using -- their real, practical experience -- and bring that together within a framework to create a standard for how we would do that.

Szakal: In today’s environment, we're seeing a bit of a paradigm shift. We're seeing technology move out of the traditional enterprise infrastructure. We're seeing these very complex value chains be created. We're seeing cloud computing.

Smarter infrastructures

We're actually working to create smarter infrastructures that are becoming more intelligent, automated, and instrumented, and they are very much becoming open-loop systems.

As technology becomes more pervasive and gets integrated into these environments, into the critical infrastructure, we have to consider whether they are vulnerable and how the components that have gone into these solutions are trustworthy.

Governments worldwide are asking that question. They're worried about critical infrastructure and the risk of using commercial, off-the-shelf technology -- software and hardware -- in a myriad of ways, as it gets integrated into these more complex solutions.

Part of our focus here is to help our constituents, government customers and critical infrastructure customers, understand how the commercial technology manufacturers, the software development manufactures, go about engineering and managing their supply chain integrity.

[The OTTF] is about all types of technology. Software obviously is a particularly important focus, because it’s at the center of most technology anyway. Even if you're developing a chip, a chip has some sort of firmware, which is ultimately software. So that perception is valid to a certain extent, but no, not just software, hardware as well.

Our vision is that we want to leverage some of the capability that's already out there. Most of us go through Common Criteria evaluations, and that is actually listed as a best practice for validating security functions and products.

Where we are focused, from a [forthcoming] accreditation point of view, affects more than just security products. That's important to know. However, we definitely believe that the community of assessment labs that exists out there that already conducts security evaluations, whether they be country-specific or that they be common criteria, needs to be leveraged. We'll endeavor to do that and integrate them into both the membership and the thinking of the accreditation process.

Lounsbury: The Open Group provides the framework under which both buyers and suppliers at any scale could come together to solve a common problem -- in this case, the question of providing trusted technology best practices and standards. We operate a set of proven processes that ensure that everyone has a voice and that all these standards go forward in an orderly manner.

The new OTTF white paper actually lays out the framework. The work of forum is to turn that framework into an Open Group standard and populate it. That will provide the standards and best practice foundation for this conformance program. [Get the new free OTTF white paper.]

We're just getting started on the vision for a conformance program. One of the challenges here is that first, not only do we have to come up with the standard and then come up with the criteria by which people would submit evidence, but you also have to deal with the problem of scale.

If we really want to address this problem of global supply chains, we're talking about a very large number of companies around the world. It’s a part of the challenge that the forum faces.

Accrediting vendors

Part of the work they've embarked on is, in fact, to figure out how we wouldn't necessarily do that kind of conformance one-on-one, but how we would accredit either vendors themselves, who have their own quality processes as a big vendor would, or third parties who can do assessments and then help provide the evidence for that conformance.

We're getting ahead of ourselves here, but there would be a certification authority that would verify that all the evidence is correct and grant some certificate that says that they have met some or all of the standards.

We provide infrastructure for doing that ... The Open Group operates industry-based conformance programs, the certification programs, that allow someone who is not a member to come in and indicate their conformance standard and give evidence that they're using the best practices there.

Lipner: Build with integrity really means that the developer who is building a technology product, whether it be hardware or software, applies best practices and understood techniques to prevent the inclusion of security problems, holes, bugs, in the product -- whether those problems arise from some malicious act in the supply chain or whether they arise from inadvertent errors. With the complexity of modern software, it’s likely that security vulnerabilities can creep in.

So, what build with integrity really means is that the developer applies best practices to reduce the likelihood of security problems arising, as much as commercially feasible.

And not only that, but any given supplier has processes for convincing himself that upstream suppliers, component suppliers, and people or organizations that he relies on, do the same, so that ultimately he delivers as secure a product as possible.

Creating a discipline

One of the things we think that the forum can contribute is a discipline that governments and potentially other customers can use to say, "What is my supplier actually doing? What assurance do I have? What confidence do I have?"

To the extent that the process is successful, customers will really value the certification. That will open markets, or create preferences in markets, for organizations that have sought and achieved the certification.

Obviously, there will be effort involved in achieving [pending OTTF] certification, but that will be related to real value, more trust, more security, and the ability of customers to buy with confidence.

The challenge that we'll face as a forum going forward is to make the processes deterministic and cost-effective. I can understand what I have to do. I can understand what it will cost me. I won't get surprised in the certification process and I can understand that value equation. Here's what I'm going to have to do and then here are the markets and the customer sets, and the supply chains it's going to open up to me.

International trust

Gates: This all helps tremendously in improving trust internationally. We're looking at developing a framework that can be applied regardless of which country you're coming from. So, it is not a US-centric framework that we'll be using and adhering to.

We're looking for a framework so that each country, regardless of its government, regardless of the consumers within that country, all of them have confidence in what it is that we're building, that we're building with integrity, that we are concerned about both malicious acts or inadvertent errors.

And each country has its own bad guys, so by adhering to an international standard we can say we're looking out for the bad guys of every country and ensuring that what we provide is the best possible software.

If you refer to our white paper, we start to address that there. We were looking at a number of different metrics across the board. For example, what do you have for documentation practices? Do you do code reviews? There are a number of different best practices already in the field that people are using. Anyone who wants to be certified can go and look at this document and say, "Yes, we are following these best practices," or "No, we are missing this. Is it something that we really need to add? What kind of benefit will it provide to us beyond the certification?"

Lounsbury: The white paper’s name, "The Open Trusted Technology Provider Framework" was quite deliberately chosen. There are a lot of practices out there that talk about how you would establish specific security criteria or specific security practices for products. The Open Trusted Technology Provider Forum wants to take a step up and not look at the products, but actually look at the practices that the providers employ to do that. So it's bringing together those best practices.

Now, good technology providers will use good practices, when they're looking at their products, but we want to make sure that they're doing all of the necessary standards and best practices across the spectrum, not just, "Oh, I did this in this product."

Again, I refer everybody to the white paper, which is available on The Open Group website. You'll see there that we've divided these kinds of best practices into four broad categories: product engineering and development methods, secure engineering development methods, supply chain integrity methods, and product evaluation methods.

Those are the categories; under them, we'll be looking at the attributes necessary to each, and then identifying the underlying standards or bits of evidence, so people can submit them to indicate their conformance.

I want to underscore this point about the question of the cost to a vendor. The objective here is to raise best practices across the industry and make the best practice commonplace. One of the great things about an industry-based conformance program is that it gives you the opportunity to take the standards and those categories that we've talked about as they are developed by OTTF and incorporate those in your engineering and development processes.

So you're baking in the quality as you go along, and not trying to have an expensive thing going on at the end.

Szakal: [To attain such quality and lowered risk] we have three broad categories here and we've broken each of the categories into a set of principles, what we call best practice attributes. One of those is secure engineering. Within secure engineering, for example, one of the attributes is threat assessment and threat modeling. Another would be to focus on lineage of open-source. So, these are some of the attributes that go into these large-grained categories.
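
As one hedged illustration of the open-source lineage attribute (not an O-TTPF-specified mechanism; the names and bytes are invented), a provider might verify that every third-party artifact matches a vetted digest before it enters a build:

```python
import hashlib

# Register the vetted artifact's digest (here computed from sample bytes).
good_bytes = b"release tarball contents"
APPROVED = {"libfoo-1.2.tar.gz": hashlib.sha256(good_bytes).hexdigest()}

def verify_lineage(name: str, payload: bytes) -> bool:
    """True only if the artifact matches its approved digest."""
    return APPROVED.get(name) == hashlib.sha256(payload).hexdigest()

print(verify_lineage("libfoo-1.2.tar.gz", good_bytes))   # True
print(verify_lineage("libfoo-1.2.tar.gz", b"tampered"))  # False
```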

Unpublished best practices

Steve and I have talked a lot about this. We've worked on his secure engineering initiative, his SDLC initiative, within Microsoft. I worked on and was co-author of the IBM Secure Engineering Framework. So, these are living, published examples of some of the best practices out there, though they are proprietary. There are others, and in many cases, most companies have addressed this internally, as part of their practices, without having to publish them.

Part of the challenge that we're seeing, and part of the reason that Microsoft and IBM went to the length of publishing theirs, is that government customers and critical infrastructure providers were asking what the industry practices and best practices were.

What we've done here is take the best practices in the industry and bring them together in a way that's non-vendor-specific. So you're not looking to IBM, you're not having to look at the other vendors' methods of implementing these practices, and it gives you a non-specific way of addressing them based on outcome.

We believe that this is going to actually help vendors mature in these specific areas. Governments recognize that, to a certain degree, the industry is not just drunk and disorderly -- we do actually have a view on what it means to develop products in a secure engineering manner, and we have supply chain integrity initiatives out there. So, those are very important.

We're not simply focused on a bunch of security controls here. These are industry practices for supply chain integrity, as well as our internal manufacturing practices around the actual practice and process of engineering or software development.

That's a very important point to be made. This is not a traditional security standard, insomuch as that we've got a hundred security controls that you should always go out and implement. You're going to have certain practices that make sense in certain situations, depending on the context of the product you're manufacturing.

Gates: In terms of getting started, the white paper is an excellent resource for understanding how the OTTF is thinking about the problem. How are we structuring things? What are the high-level attributes that we're looking at? Then, digging down further and saying, "How are we actually addressing the problem?"

We had mentioned threat modeling, which for some -- if you're not security-focused -- might be a new thing to think about, as an example, in terms of your supply chain. What are the threats to your supply chain? Who might be interested, if you're looking at malicious attack, in inserting something into your code? Who are your customers and who might be interested in potentially compromising them? How might you go about protecting them?
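
A minimal sketch of that kind of supply-chain threat enumeration might look like the following; the actors, assets, and scores are hypothetical, and real threat modeling goes much deeper:

```python
# Toy threat model: rank (actor, asset) pairs by likelihood x impact.

threats = [
    # actor, targeted supply-chain asset, likelihood (1-5), impact (1-5)
    ("malicious insider", "build pipeline",       2, 5),
    ("nation state",      "source repository",    3, 5),
    ("criminal group",    "customer update feed", 4, 4),
]

for actor, asset, likelihood, impact in sorted(
        threats, key=lambda t: -(t[2] * t[3])):
    print(f"risk {likelihood * impact:>2}: {actor} -> {asset}")
```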

The security mindset is a little bit different, in that you tend to be thinking about who is it that would be interested in doing harm and how do you prevent that.

It's not a normal way of thinking about problems. Usually, people have a problem, they want to solve it, and security is an add-on afterward. We're asking that they start that thinking as part of their process now and then start including that as part of their process.

Lipner: We talk about security assurance, and assurance is really what the OTTF is about, providing developers and suppliers with ways to achieve that assurance in providing their customers ways to know that they have done that. This is really not about adding some security band-aid onto a technology or a product. It's really about the fundamental attributes or assurance of the product or technology that’s being produced.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Get the free white paper. Sponsor: The Open Group.


Wednesday, February 16, 2011

Expert panel: As cyber security risks grow, architected protection and best practices must keep pace

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.

Looking back over the past few years, it seems like cyber security and warfare threats are only getting worse. We've had the Stuxnet Worm, the WikiLeaks affair, China-originating attacks against Google and others, and the recent Egypt Internet blackout.

But, are cyber security dangers, in fact, getting that much worse? And are perceptions at odds with what is really important in terms of security protection? How can businesses best protect themselves from the next round of risks, especially as cloud, mobile, and social media and networking activities increase? How can architecting for security become effective and pervasive?

We posed these and other serious questions to a panel of security experts at the recent The Open Group Conference, held in San Diego the week of Feb. 7, to examine the coming cyber security business risks, and ways to head them off.

The panel: Jim Hietala, the Vice President of Security at The Open Group; Mary Ann Mezzapelle, Chief Technologist in the CTO's Office at HP, and Jim Stikeleather, Chief Innovation Officer at Dell Services. The discussion was moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Stikeleather: The only secure computer in the world right now is the one that's turned off in a closet, and that's the nature of things. You have to make decisions about what you're putting on and where you're putting it. It's a big concern that if we don't get better with security, we run the risk of people losing trust in the Internet and trust in the web.

When that happens, we're going to see some really significant global economic concerns. If you think about our economy, it's structured around the way the Internet operates today. If people lose trust in the transactions that are flying across it, then we're all going to be in a pretty bad world of hurt.

One of the things that you're seeing now is a combination of security factors. When people are talking about the break-ins, you're seeing more people actually having discussions of what's happened and what's not happening. You're seeing a new variety of the types of break-ins, the type of exposures that people are experiencing. You're also seeing more organization and sophistication on the part of the people who are actually breaking in.

The other piece of the puzzle has been that legal and regulatory bodies step in and say, "You are now responsible for it." Therefore, people are paying a lot more attention to it. So, it's a combination of all these factors that are keeping people up at night.

A major issue in cyber security right now is that we've never been able to construct an intelligent return on investment (ROI) for cyber security.

There are two parts to that. One, we've never been truly able to gauge how big the risk really is. For one person it may be a 2, for most people it's probably a 5 or a 6, and some people may be sitting there at a 10. But you need to be able to gauge the magnitude of the risk. And we've never done a good job of saying what exactly the exposure is if the actual event took place. It's the calculation of those two that tells you how much you should be able to invest in order to protect yourself.
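
Stikeleather's two factors -- the magnitude of the risk and the exposure if the event occurs -- combine in the classic annualized loss expectancy (ALE) form. Here is a worked sketch with invented figures; the panel did not supply these numbers:

```python
# ALE sketch: expected yearly loss bounds rational security spending.

single_loss_expectancy = 2_000_000  # $ lost if the breach occurs (assumed)
annual_rate_of_occurrence = 0.15    # expected breaches per year (assumed)

ale = single_loss_expectancy * annual_rate_of_occurrence
print(f"Annualized loss expectancy: ${ale:,.0f}")  # $300,000

# A control costing less than $300,000/yr that prevents this event has a
# positive expected return; one costing more does not.
```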

We're starting to see a little bit of a sea change, because starting with HIPAA-HITECH in 2009, for the first time, regulatory bodies and legislatures have put criminal penalties on companies who have exposures and break-ins associated with them.

So we're no longer talking about ROI. We're starting to talk about risk of incarceration, and that changes the game a little bit. You're beginning to see more and more companies do more in the security space.

Mezzapelle: First of all, we need to make sure that they have a comprehensive view. In some cases, it might be a portfolio approach, which is new to most people in the security area. Some of my enterprise customers have more than 150 different security products that they're trying to integrate.

Their issue is around complexity, integration, and just knowing their environment -- what levels they are at, what they are protecting and not, and how does that tie to the business? Are you protecting the most important asset? Is it your intellectual property (IP)? Is it your secret sauce recipe? Is it your financial data? Is it your transactions being available 24/7?

It takes some discipline to go back to that InfoSec framework and make sure that you have that foundation in place, to make sure you're putting your investments in the right way.

... It's about empowering the business, and each business is going to be different. If you're talking about a Department of Defense (DoD) military implementation, that's going to be different than a manufacturing concern. So it's important that you balance the risk, the cost, and the usability to make sure it empowers the business.

Hietala: One of the big things that's changed that I've observed is if you go back a number of years, the sorts of cyber threats that were out there were curious teenagers and things like that. Today, you've got profit-motivated individuals who have perpetrated distributed denial of service attacks to extort money.

Now, they've gotten more sophisticated and are dropping Trojan horses on CFOs' machines, and they can try to exfiltrate passwords and logins to the bank accounts.

We had a case that popped up in our newspaper in Colorado, where a title company lost a million dollars' worth of mortgage money -- loans that were in the process of funding. All of a sudden, five homeowners were faced with paying two mortgages, because there was no insurance against that.

When you read through the details of what happened, it was clearly a Trojan horse that had been put on this company's system. Somebody was able to walk off with a million dollars' worth of these people's money.

State-sponsored acts

So you've got profit-motivated individuals on the one side, and you've also got some things happening from another part of the world that look like they're state-sponsored, grabbing corporate IP and defense industry and government sites. So, the motivation of the attackers has fundamentally changed and the threat really seems pretty pervasive at this point.

Complexity is a big part of the challenge, with changes like you have mentioned on the client side, with mobile devices gaining more power, more ability to access information and store information, and cloud. On the other side, we’ve got a lot more complexity in the IT environment, and much bigger challenges for the folks who are tasked for securing things.

Stikeleather: One other piece of it is that it requires an increased amount of business knowledge on the part of the IT group and the security group, to be able to assess where my IP is, which data is most valuable, and where to put the emphasis.

One of the things that people get confused about is, depending upon which analyst report you read, most data is lost by insiders, most data is lost from external hacking, or most data is lost through email. It really depends. Most IP is lost through email and social media activities. Most data, based upon a recent Verizon study, is being lost by external break-ins.

We've kind of always had the one-size-fits-all mindset about security. When you move from just "I'm doing security" to "I'm doing risk mitigation and risk management," then you have to start doing portfolio and investment analysis in making those kinds of trade-offs.

... At the end of the day it's the incorporation of everything into enterprise architecture, because you can't bolt on security. It just doesn't work. That’s the situation we're in now. You have to think in terms of the framework of the information that the company is going to use, how it's going to use it, the value that’s associated with it, and that's the definition of EA.

... It's one of the reasons we have so much complexity in the environment: every time something happens, we go out and buy a tool to protect against that one thing, as opposed to trying to say, "Here are my staggered defenses, here's how I'm going to protect what is important to me, and I accept the fact that nothing is perfect and some things I'm going to lose."

Mezzapelle: It comes back to one of the bottom lines about empowering the business. It means that not only do the IT people need to know more about the business, but the business needs to start taking ownership for the security of its own assets, because they are the ones that are going to have to bear the loss, whether it's data, financial, or whatever.

They need to really understand what that means, but we as IT professionals need to be able to explain what it means, because it's not common sense. We need to connect the dots and we need to have metrics. We need to look at it from an overall threat point of view, and it will be different based on what your company is about.

You need to have your own threat model, who you think the major actors would be and how you prioritize your money, because it's an unending bucket that you can pour money into. You need to prioritize.

The way that we've done that is with a multi-pronged approach. We communicate with and educate the software developers, so that they start taking ownership for security in their software products, and we make sure that that gets integrated into every part of the portfolio.

The other part is to have that reference architecture, so that there are common services available to the other services as they're being delivered, and so that we can, if not control it, at least manage it from a central place.

Stikeleather: The starting point is really architecture. We're actually at a tipping point in the security space, and it comes from what's taking place in the legal and regulatory environments, with more and more laws being applied to privacy, IP, jurisdictional data location, and a whole series of things that the regulators and the lawyers are putting on us.

One of the things I ask people, when we talk to them, is what is the one application everybody in the world, every company in the world has outsourced. They think about it for a minute, and they all go payroll. Nobody does their own payroll any more. Even the largest companies don't do their own payroll. It's not because it's difficult to run payroll. It's because you can’t afford all of the lawyers and accountants necessary to keep up with all of the jurisdictional rules and regulations for every place that you operate in.

Data itself is beginning to fall under those types of constraints. In a lot of cases, it's medical data. For example, Massachusetts just passed a major privacy law. PCI is being extended to anybody who takes credit cards.

The security issue is now also a data governance and compliance issue as well. So, because all these adjacencies are coming together, it's a good opportunity to sit down and architect with a risk management framework. How am I going to deal with all of this information?

Risk management

Hietala: I go back to the risk management issue. That's something that I think organizations frequently miss. There tends to be a lot of tactical security spending based upon the latest widget, the latest perceived threat -- buy something, implement it, and solve the problem.

Taking a step back from that and really understanding what the risks are to your business, and what the impacts of bad things happening would be, is doing a proper risk analysis. Risk assessment is what ought to drive decision-making around security. That's a fundamental thing that gets lost a lot in organizations that are trying to grapple with security problems.

Stikeleather: I can argue both sides of the [cloud security] equation. On one side, I've argued that cloud can be much more secure. If you think about it, and I will pick on Google, Google can expend a lot more on security than any other company in the world, probably more than the federal government will spend on security. The amount of investment does not necessarily tie to a quality of investment, but one would hope that they will have a more secure environment than a regular company will have.

On the flip side, there are more tantalizing targets, and therefore they're going to draw more sophisticated attacks. I've also argued that you have a statistical probability of break-in. If somebody is trying to break into Google, and you're on Google running Google Apps or something like that, the probability of them getting your specific information is much less than if they attack XYZ enterprise. If they break in there, they're going to get your stuff.
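
A back-of-envelope sketch of that statistical argument, with invented probabilities, is below; it deliberately ignores attacks that target a specific tenant:

```python
# If a breached shared provider yields only one tenant's data, the chance
# that it is *your* data can be far lower than under a direct attack.

p_breach_big_provider = 0.10        # assumed yearly breach probability
your_share_of_tenants = 1 / 50_000  # attacker walks off with one tenant

p_breach_your_company = 0.02        # assumed direct-attack probability

p_loss_via_provider = p_breach_big_provider * your_share_of_tenants
print(f"via shared provider: {p_loss_via_provider:.6%}")    # tiny
print(f"via direct attack:   {p_breach_your_company:.2%}")  # far larger
```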

Recently I was meeting with a lot of NASA CIOs and they think that the cloud is actually probably a little bit more secure than what they can do individually. On the other side of the coin it depends on the vendor. You have to do your due diligence, like with everything else in the world. I believe, as we move forward, cloud is going to give us an opportunity to reinvent how we do security.

I've often argued that a lot of what we're doing in security today is fighting the last war, as opposed to fighting the current war. Cloud is going to introduce some new techniques and new capabilities. You'll see more systemic approaches, because somebody like Google can't afford to put in 150 different types of security. They will put in one that's more integrated. They will put in, to Mary Ann’s point, the control panels and everything that we haven't seen before.

So you'll see better security there. In the interim, however, a lot of the software-as-a-service (SaaS) providers and some of the simpler platform-as-a-service (PaaS) providers haven't made that kind of investment, and you're probably not as secure in those environments.

Lowers the barrier

Mezzapelle: For small and medium-size businesses, cloud computing offers the opportunity to be more secure, because they don't necessarily have the mature processes and tools to address those kinds of things themselves. So it lowers the barrier to entry for being secure.

For enterprise customers, cloud solutions need to develop and mature more. They may want to go with a hybrid solution right now, where they retain more control, the ability to audit, and more influence through specialized contracts, which are not usually the business model for cloud providers.

I would disagree with Jim Stikeleather in some respects. Just because a large provider on the Internet is creating a cloud service doesn't mean that security was a key guiding principle in developing a low-cost or free product. Size doesn't always mean secure.

You have to know about it, and that's where the sophistication of the business user comes in, because cloud is being bought by the business user, not by the IT people. That's another component that we need to make sure gets incorporated into the thinking.

Stikeleather: I'm going to reinforce what Mary Ann said. What's going on in the cloud space is almost a recreation of the late '70s and early '80s, when PCs came into organizations. It's the businesspeople who are acquiring the cloud services, which again reinforces the need for governance and education. They need to know what it is that they're buying.

I absolutely agree with Mary Ann. I didn't mean to imply that size means more security, but I do think the expectation, especially for small and medium-size businesses, is that they'll get a more secure environment than they can produce for themselves.

Hietala: There are a number of different groups within The Open Group doing work to ensure better security in various areas. The Jericho Forum is tackling identity issues as they relate to cloud computing. There will be some new work coming out of them over the next few months that lays out some of the tough issues and presents approaches to those problems.

We also have the Open Trusted Technology Forum (OTTF) and the Trusted Technology Provider Framework (TTPF), which are being announced here at this conference. They're looking at supply chain issues related to IT hardware and software products at the vendor level. It's very much an industry-driven initiative and will benefit government buyers, as well as large enterprises, by providing some assurance that the products they're procuring are secure, good commercial products.

Also in the Security Forum, we have a lot of work going on in security architecture and information security management. A number of projects are aimed at practitioners, giving them the guidance they need to do a better job of securing a traditional enterprise IT environment, a cloud environment, and so forth. Our Cloud Computing Work Group is doing work on a cloud security reference architecture. So there are a number of different security activities going on in The Open Group related to all of this.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.


Tuesday, February 15, 2011

HP offers framework for one-stop data center transformation

As more companies look toward building or expanding data centers, HP today announced a comprehensive service that simplifies the process of designing and building data centers by offering design, construction, and project management from a single vendor.

The new HP Critical Facilities Implementation service (CFI) enables clients to realize faster time-to-innovation and lower cost of ownership by providing a single integrator that delivers all the elements of a data center design-build project from start to finish. An extension of the HP Converged Infrastructure strategy, HP CFI is an architectural blueprint that allows clients to align and share pools of interoperable resources. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

A recent Gartner survey indicated that 46 percent of respondents will build one or more new data centers in the next two years, and 54 percent expect that they will need to expand an existing data center in that time frame.

“Constructing a data center is an enormous undertaking for any business, and taking an integrated approach with a single vendor will help maximize cost and efficiency, while reducing headaches,” said Dave Cappuccio, research vice president, Gartner. “As customers’ data center computing requirements add complexity to the design-build process, comprehensive solutions that provide clients with an end-to-end experience will allow them to realize their plans within the required timeframe and constraints.”

Extensive experience

Based on its experience in “greenfield” and retrofit construction, HP is delivering CFI for increased efficiency when designing and building data centers. The company draws on its experience in designing more than 50 million square feet of raised-floor data center space and its innovations in design engineering to create fully integrated facility and IT solutions.

Benefits of CFI include:
  • HP’s management of all elements of the design-build project, with a vision of integrating facilities development with IT strategy.
  • A customized data center implementation plan that is scalable and flexible enough to accommodate clients’ existing and future data center needs.
  • Access to experience based on a track record of delivering successful customer projects around data center planning and design-build. These projects include the world’s first LEED-certified data center, the first LEED GOLD-certified data center, India’s first Uptime Institute Tier III-rated data center as well as more than 60 “greenfield” sites, including 100-megawatt facilities.
HP CFI is available through HP Critical Facilities Services. Pricing varies according to location and implementation. More information is available at www.hp.com/services/cfi.


Adaptive Computing beefs up data center and private cloud management with Moab 6.0

Adaptive Computing has announced a major upgrade to its data-center and private-cloud management solution. The new version addresses growing enterprise demand to quickly deploy infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS) or software-as-a-service (SaaS) from a centralized and intelligent data center infrastructure.

The Provo, Utah-based company's Moab Adaptive Computing Suite 6.0 provides increased agility and automation to decrease the time to value for new cloud services, reduce IT cost and complexity, and optimize overall cloud resource utilization.

New capabilities in Moab enhance its ability to automate the process of identifying and allocating available resources based on business policies, compliance, cost, and performance goals. Ensuring that applications and infrastructure are automatically deployed properly the first time avoids service failures, optimizes resource usage, and reduces cost, the company said.

“Cloud solutions ultimately need to evolve in complexity from rapid provisioning to an automated, self-optimizing environment,” said Michael Jackson, president and COO of Adaptive Computing. “With Moab Adaptive Computing Suite 6.0, enterprises can create an optimized cloud environment that has the agility to respond to business requests via sophisticated automation. Innovative partners such as HP recognize this need for intelligent resource management and rely on Moab technology to help their customers further extend the value and return on investment of their cloud infrastructures.” [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Self-service projects

Moab Adaptive Computing Suite 6.0 enables organizations to evolve self-service cloud projects into rich, self-optimizing or intelligent workload-driven clouds. Its policy-based cloud intelligence engine works in tandem with existing data center management and resource investments to create an agile cloud environment that responds faster to business requests and automates across IT processes.

Product enhancements include:
  • Automatically understanding and managing a mix of application workloads. In addition, the solution automatically initiates live migration of virtual machine workloads to meet service needs and maximize utilization.
  • Rapidly automating resource allocation, provisioning, and de-provisioning of hardware resources via policies to reduce IT cost and management complexity.
  • Providing billing or “showback” of resource usage costs to ensure that services are delivered within standards and compliance requirements.
  • Aggregating data for rich context to automate initial resource allocation decisions and policy-based actions through integration with other resource managers, data repositories, and identity management systems.
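
The brief doesn't describe the engine's internals, but as a rough, hypothetical sketch of what policy-based allocation like this means in practice (none of these names reflect Moab's actual API), a placement engine can simply pick the cheapest resource pool that satisfies a workload's policy tags and capacity needs:

    # Hypothetical sketch of policy-based placement; not Adaptive Computing's API.
    pools = [
        {"name": "onsite-secure", "cost": 1.00, "tags": {"pci", "high-perf"}, "free_cores": 64},
        {"name": "shared-cloud",  "cost": 0.40, "tags": {"high-perf"},        "free_cores": 512},
    ]

    def place(workload):
        """Return the cheapest pool meeting the workload's policy tags and size."""
        eligible = [p for p in pools
                    if workload["tags"] <= p["tags"] and p["free_cores"] >= workload["cores"]]
        return min(eligible, key=lambda p: p["cost"], default=None)

    # A PCI-constrained workload lands on the pricier on-site pool; an
    # unconstrained one goes to the cheaper shared cloud.
    print(place({"tags": {"pci"}, "cores": 16})["name"])         # onsite-secure
    print(place({"tags": {"high-perf"}, "cores": 128})["name"])  # shared-cloud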