Tuesday, July 24, 2012

Summer in the Capital -- Looking back at The Open Group Conference in Washington, D.C.

This guest post comes courtesy of Jim Hietala, Vice President of Security at The Open Group.

By Jim Hietala

This past week in Washington D.C., The Open Group held our Q3 conference. The theme for the event was "Cybersecurity – Defend Critical Assets and Secure the Global Supply Chain," and the conference featured a number of thought-provoking speakers and presentations.

Cybersecurity is at a critical juncture, and conference speakers highlighted the threat and attack reality and described industry efforts to move forward in important areas. The conference also featured a new capability, as several of the events were livestreamed to the Internet.

For those who did not make the event, here's a summary of a few of the key presentations, as well as what The Open Group is doing in these areas. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Joel Brenner, attorney with Cooley, was our first keynote. Joel's presentation was titled, “Turning Us Inside-Out: Crime and Economic Espionage on our Networks.” The talk mirrored his recent book, “America the Vulnerable: Inside the New Threat Matrix of Digital Espionage, Crime, and Warfare,” and Joel talked about current threats to critical infrastructure, attack trends, and challenges in securing information.

Joel's presentation was a wakeup call to the very real issues of IP theft and identity theft. Beyond describing the threat and attack landscape, Joel discussed some of the management challenges related to ownership of the problem, namely that the different stakeholders in addressing cybersecurity in companies, including legal, technical, management, and HR, all tend to think that this is someone else's problem. Joel stated the need for policy spanning the entire organization to fully address the problem.

Kristin Baldwin, principal deputy for systems engineering, Office of the Assistant Secretary of Defense for Research and Engineering, described the U.S. Department of Defense (DoD) Trusted Defense Systems Strategy and its challenges, including requirements to secure the DoD's multi-tiered supply chain. She also talked about how the acquisition landscape has changed over the past few years.

In addition, for all programs, the DoD now requires the creation of a program protection plan, which is the single focal point for security activities on the program. Kristin's takeaways included needing a holistic approach to security, focusing attention on the threat, and avoiding risk exposure from gaps and seams.

Overarching framework

DoD’s Trusted Defense Systems Strategy provides an overarching framework for trusted systems. Stakeholder integration with acquisition, intelligence, engineering, industry, and research communities is key to success. Systems engineering brings these stakeholders, risk trades, policy, and design decisions together. Kristin also stressed the importance of informing leadership early and providing programs with risk-based options.

Dr. Ron Ross of NIST described a perfect storm: a proliferation of information systems and networks, combined with increasingly sophisticated threats, resulting in a growing number of penetrations of information systems in the public and private sectors that potentially affect security and privacy. He proposed an integrated project team approach to information security.

Dr. Ross also provided an overview of the changes coming in NIST SP 800-53, revision 4, which is presently available in draft form. He also advocated a dual protection strategy involving traditional controls at network perimeters, which assume attackers are outside organizational networks, as well as agile defenses, which assume attackers are already inside the perimeter.

The objective of agile defenses is to enable operation while under attack and to minimize response times to ongoing attacks. This new approach mirrors thinking from the Jericho Forum and others on de-perimeterization and security and is very welcome.

The Open Group Trusted Technology Forum provided a panel discussion on supply chain security issues and the approach that the forum is taking towards addressing issues relating to taint and counterfeit in products.

The panel included Andras Szakal of IBM, Edna Conway of Cisco, and Dan Reddy of EMC, as well as Dave Lounsbury, CTO of The Open Group. OTTF continues to make great progress in the area of supply chain security, having published a snapshot of the Open Trusted Technology Provider Framework, working to create a conformance program, and working to harmonize with other standards activities.

Dave Hornford, partner at Conexiam and chair of The Open Group Architecture Forum, provided a thought-provoking presentation titled, "Secure Business Architecture, or just Security Architecture?" Dave's talk described the problems in approaches that are purely focused on securing against threats and brought forth the idea that focusing on secure business architecture was a better methodology for ensuring that stakeholders had visibility into risks and benefits.

Positive and negative

Geoff Besko, CEO of Seccuris and co-leader of the security integration project for the next version of TOGAF, delivered a presentation that looked at risk from both positive and negative views. He noted that senior management frequently view risk as something to embrace, taking risks with an eye on business gains in revenue, market share, or profitability, while security practitioners tend to focus on risk as something to be mitigated. Finding common ground is key here.

Katie Lewin, who is responsible for the GSA FedRAMP program, provided an overview of the program, and how it is helping raise the bar for federal agency use of secure cloud computing.

The conference also featured a workshop on security automation, with presentations on a number of standards efforts in this area, including SCAP, O-ACEML from The Open Group, MILE, NEA, AVOS, and SACM. One conclusion from the workshop was that there is presently a gap, and a need for a higher-level security automation architecture encompassing the many lower-level protocols and standards that exist in the security automation area.
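
To make the workshop's conclusion a bit more concrete, here is a minimal, purely illustrative Python sketch of what a higher-level layer sitting above those protocols might do: normalize findings reported by several lower-level tools into one consolidated view. The tool names, input format, and severity scheme are invented for illustration and are not part of SCAP, O-ACEML, or any of the other standards named above.

```python
# Illustrative only: a thin "higher-level" layer that consolidates findings
# reported by lower-level security-automation tools into one normalized view.
# The input format and tool names below are hypothetical, not SCAP/O-ACEML APIs.

from collections import Counter
from typing import Dict, List

def normalize_finding(source: str, raw: Dict) -> Dict:
    """Map a tool-specific finding into a common schema."""
    return {
        "source": source,                      # e.g. a SCAP-style scanner
        "host": raw.get("host", "unknown"),
        "rule": raw.get("id", "unspecified"),
        "severity": raw.get("severity", "info").lower(),
    }

def consolidate(feeds: Dict[str, List[Dict]]) -> List[Dict]:
    """Merge findings from all sources into a single list."""
    return [normalize_finding(src, f) for src, findings in feeds.items() for f in findings]

if __name__ == "__main__":
    # Hypothetical outputs from two lower-level tools.
    feeds = {
        "config-scanner": [{"host": "web01", "id": "xccdf_rule_42", "severity": "High"}],
        "patch-checker":  [{"host": "web01", "id": "missing_kb_123", "severity": "Medium"}],
    }
    findings = consolidate(feeds)
    print(Counter(f["severity"] for f in findings))   # e.g. Counter({'high': 1, 'medium': 1})
```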

In addition to the public conference, a number of The Open Group's forums met in working sessions in the capital to advance their work.

All in all, the conference clarified the magnitude of the cybersecurity threat and the importance of initiatives from The Open Group and elsewhere to make progress on real solutions.

Join us at our next conference in Barcelona on October 22-25!

This guest post comes courtesy of Jim Hietala, Vice President of Security at The Open Group. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2012. All rights reserved.

You may also be interested in:

Monday, July 23, 2012

With CMS 10, HP puts workload configuration data newly in the hands of those who can best use it to manage services delivery

HP today introduced HP Configuration Management System (CMS) 10, a broad update designed to give more types of IT leaders better insight and control over everything from discrete IT devices to complete services-enabled business processes.

Especially important for the operational control of hybrid services delivery and converged cloud implementations, CMS 10 gathers and shares the configuration patterns and characteristics of highly virtualized workloads. The update helps manage dynamic virtualized applications both inside enterprise data centers and in leading clouds.

"CMS 10 improves control of converged clouds," said Jimmy Augustine, product marketing manager at HP Software. "It sees the virtual machines and updates the Universal Configuration Management Data Base (UCMDB) with the dynamic information from public and private clouds."

With the new software, HP says clients can reduce costs and risks associated with service disruptions while reducing the time spent on manual discovery by more than 50 percent thanks to automated discovery capabilities. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

With the growing adoption of cloud computing, organizations are under increased pressure to deliver new services and scale existing ones. The complexities of cloud-based infrastructures coupled with a lack of visibility have hampered organizations’ ability to efficiently and predictably manage IT performance.

“Service disruptions within complex cloud and virtualized environments are difficult to identify and resolve,” said Shane Pearson, vice president, Product Marketing, Operations, Software, HP. “With the new enhancements to HP Configuration Management System, IT executives now have the configuration intelligence they need at their fingertips to make rapid decisions to ensure consistent business service availability.”

CMS 10 also introduces new capabilities specifically for service lifecycle design and operations, notably within both business service management (BSM) and IT service management (ITSM).

I was especially impressed by the ability of CMS 10 users to extend the view of operations to business process analysts, enterprise architects and DevOps managers -- all provided by a new browser-based access and query capability. These business-function-focused leaders can seek out the information they need to cut through the complexity of systems data and measure and react to how an entire application or process is behaving systemically.

What's more, CMS 10-level insights can be extended to security professionals and business architects to gather data on compliance and performance, and even to better architect the next process or hybrid services mix. The fact that CMS 10 already provides support across many VM and cloud types shows the importance of ensuring configuration conformity as a baseline capability for hybrid cloud use.

The CMS update broadly supports virtual machines better, has multi-tenancy support to appeal to service providers, and delivers its outputs via web browsers and search interfaces. "You can see the full applications support infrastructure, and discover out of the box the whole workload support," says Augustine.
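
As a rough illustration of the kind of question such browser-based query access answers -- which infrastructure ultimately supports a given business service? -- here is a small, hypothetical Python sketch of a configuration-item dependency walk. The data model and names are invented; this is not HP's UCMDB schema or API.

```python
# Toy model of a CMDB dependency walk: from a business service down to the
# infrastructure that supports it. Names and relationships are hypothetical.

DEPENDS_ON = {
    "order-processing-service": ["app-server-01", "app-server-02"],
    "app-server-01": ["vm-1138", "oracle-db-7"],
    "app-server-02": ["vm-2047", "oracle-db-7"],
    "vm-1138": ["esx-host-03"],
    "vm-2047": ["esx-host-04"],
    "oracle-db-7": ["san-array-2"],
}

def supporting_infrastructure(ci: str) -> set:
    """Return every configuration item the given CI transitively depends on."""
    seen, stack = set(), [ci]
    while stack:
        current = stack.pop()
        for dep in DEPENDS_ON.get(current, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

print(sorted(supporting_infrastructure("order-processing-service")))
```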

More specifically, the new HP CMS 10 includes HP Universal Discovery with Content Pack 11, HP Universal Configuration Management Database (UCMDB), HP UCMDB Configuration Manager, and HP UCMDB Browser. With the new solution, enterprises, governments and managed service providers (MSPs) can now:
  • Quickly discover software and hardware inventory, as well as associated dependencies in a single unified discovery solution

  • Speed time to value with the product’s simplified user interface and enhanced scalability, allowing all IT teams to consume as well as use rich intelligence hosted in the HP CMS
  • More easily manage multiple client environments within a single UCMDB with improved security, automation and scalability
  • Automatically locate and catalog new technologies related to network hardware, open source middleware, storage, ERP, and infrastructure software providers
  • Introduce new server compliance thresholds.
HP CMS 10 is a key component of the HP IT Performance Suite, an enterprise performance software platform designed to improve performance with operational intelligence for many types of users and uses.

HP CMS, currently available worldwide in 10 languages, is also available through HP channel partners. More information about CMS 10 is available at www.hp.com/go/CMS.

You may also be interested in:

Wednesday, July 18, 2012

User behavior data open to misuse without privacy and identification standards, says Open Group tweet jam community

The uncharted territory of user behavior data based on what users do in such web walled gardens as Facebook was the focus of a "tweet jam" last week organized by The Open Group.

Some of the many notable participants in the tweet jam around the hash tag #ogChat on July 11 worried about the prospect of misuse of the user identity and behavior data, but were more mixed about what to do about it. I was the moderator of the tweet jam. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

With hundreds of tweets flying at breakneck pace, the July 11 #ogChat saw a very spirited discussion on the Internet's movement toward a walled garden model. In case you missed the conversation, you're now in luck! Here's a recap of the #ogChat:

The full list of participants included:
Here is a high-level snapshot of the #ogChat:

Shift from open Internet

Q1 In the context of #WWW, why has there been a shift from the open Internet to portals, apps and walled environs? #ogChat

Participants generally agreed that the impetus behind the walled garden trend was led by two factors: companies and developers wanting more control, and a desire by users to feel "safer."

  • @charleneli: Q1 Peeps & developers like order, structure, certainty. Control can provide that. But too much and they leave. #ogChat.
  • @Technodad: User info & contributions are raw material of walled sites-"If you're not paying for the service, the product being sold is you". #ogChat
  • @Dana_Gardner: @JohnFontana What about the meta data that they can own by registering you? #ogChat

    • In response to: @JohnFontana Q1 Eyeballs proved worthless; souls can make you some real money. #ogChat

    • @charleneli: @Dana_Gardner re: Meta data -- once you join a community, there has to be a level of trust. If they respect data, people will trust. #ogChat
  • @AlanWebber #ogChat Q1 - People feel safer inside the "Walls" but don't realize what they are losing
Privacy/control

Q2 How has this trend affected privacy/control? Do users have enough control over their IDs/content within #walledgarden networks? #ogChat


This was a hot topic as participants debated the tradeoffs between great content and privacy controls. Questions of where data was used and leaked to also emerged, as walled gardens are known to have backdoors.
  • @AlanWebber: But do people understand what they are giving up inside the walls? #ogChat
  • @TheTonyBradley: Q2 -- Yes and no. Users have more control than they're aware of, but for many its too complex and cumbersome to manage properly.#ogchat
  • @jim_hietala: #ogChat Q2 privacy and control trade offs need to be made more obvious, visible

  • @zdFYRashid: Q2 users assume that #walledgarden means nothing leaves, so they think privacy is implied. They don't realize that isn't the case#ogchat
  • @JohnFontana: Q2 Notion is wall and gate is at the front of garden where users enter. It's the back that is open and leaking their data #ogchat
  • @subreyes94: #ogchat .@DanaGardner More walls coming down through integration. FB and Twitter are becoming de facto login credentials for other sites
Social and mobile

Q3 What has been the role of social and #mobile in developing #walledgardens? Have they accelerated this trend? #ogChat


Everyone agreed that social and mobile catalyzed the formation of walled garden networks. Many also gave a nod to location as a nascent driver.
  • @jaycross: Q3 Mobile adds your location to potential violations of privacy. It's like being under surveillance. Not very far along yet. #ogChat
  • @charleneli: Q3: Mobile apps make it easier to access, reinforcing behavior. But also enables new connections a la Zynga that can escape #ogChat

  • @subreyes94: #ogChatQ3 They have accelerated the always-inside the club. The walls have risen to keep info inside not keep people out.

    • @Technodad: @subreyes94 Humans are social, want to belong to community & be in touch with others "in the group". Will pay admission fee of info. #ogChat

Current web

Q4 Can people use the internet today without joining a walled garden network? What does this say about the current web? #ogChat


There were a lot of parallels drawn between the real and virtual worlds. It was interesting to see that walled gardens provide a sense of exclusivity that humans seek out by nature. It was also interesting to see a generational gap emerge, as many participants cited their parents as not being part of a walled garden network.
  • @TheTonyBradley: Q4 -- You can, the question is "would you want to?" You can still shop Amazon or get directions from Mapquest. #ogchat
  • @zdFYRashid: Q4 people can use the internet without joining a walled garden, but they don't want to play where no one is. #ogchat

  • @JohnFontana: Q4 I believe we are headed to a time when people will buy back their anonymity. That is the next social biz. #ogchat
Owning information

Q5 Is there any way to reconcile the ideals of the early web with the need for companies to own information about users? #ogChat


While walled gardens have started to emerge, the consumerization of the Internet and social media has really driven user participation and empowered users to create content within these walled gardens.
  • @JohnFontana: Q5 - It is going to take identity, personal data lockers, etc. to reconcile the two. Wall-garden greed heads can't police themselves#ogchat
  • @charleneli:Q5: Early Web optimism was less about being open more about participation. B4 you needed to know HTML. Now it's fill in a box. #ogChat

  • @Dana_Gardner: Q5 Early web was more a one-way street, info to a user. Now it's a mix-master of social goo. No one knows what the goo is, tho. #ogChat
  • @AlanWebber: Q5, Once there are too many walls, people will begin to look on to the next (virtual) world. Happening already #ogChat
Next iteration

Q6 What #Web2.0 lessons learned should be implemented into the next iteration of the web? How to fix this? #ogChat


Identity was the most common topic with the sixth and final question. Single sign-on, personal identities on mobile phones/passports and privacy seemed to be the biggest issues facing the next iteration of the web.
  • @Technodad: Q6 Common identity is a key - need portable, mutually-recognized IDs that can be used for access control of shared info. #ogChat
  • @JohnFontana: Q6 Users want to be digital. Give them ways to do that safely and privately if so desired. #ogChat

  • @TheTonyBradley: Q6 -- Single ID has pros and cons. Convenient to login everywhere with FB credentials, but also a security Achilles heel.#ogchat

Thank you to all the participants who made this such a great discussion!

Incidentally, the model of a tweet jam or tweetup on IT subjects of interest is a great way to gather insights and make a social splash too. This #ogChat was a top trending subject on Twitter during and after the online event. I'd be happy to do more of these as a moderator or participant on a subject near and dear to you and your community.

You may also be interested in:

Counting the cost of cloud

This guest post comes courtesy of Chris Harding, Forum Director for SOA and Semantic Interoperability at The Open Group.

By Chris Harding

IT costs were always a worry, but only an occasional one. Cloud computing has changed that.

Here's how it used to be. The New System was proposed. Costs were estimated, more or less accurately, for computing resources, staff increases, maintenance contracts, consultants and outsourcing. The battle was fought, the New System was approved, the checks were signed, and everyone could forget about costs for a while and concentrate on other issues, such as making the New System actually work.

One of the essential characteristics of cloud computing is "measured service." Resource usage is measured by the byte transmitted, the byte stored, and the millisecond of processing time. Charges are broken down by the hour, and billed by the month. This can change the way people take decisions. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]
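
To see what "measured service" means in money terms, here is a back-of-the-envelope sketch in Python. All rates and volumes are invented; the point is simply that the bill is an arithmetic function of metered usage.

```python
# Back-of-the-envelope metered-service bill: usage is measured per byte
# transferred, per byte-month stored, and per unit of compute time, then
# rolled up into a monthly charge. All rates and volumes are invented.

GB = 10**9

rates = {
    "transfer_per_gb": 0.09,       # $ per GB transmitted
    "storage_per_gb_month": 0.10,  # $ per GB stored for a month
    "compute_per_hour": 0.12,      # $ per instance-hour
}

usage = {
    "transfer_bytes": 450 * GB,
    "storage_bytes": 2_000 * GB,
    "compute_hours": 3 * 24 * 30,  # three instances running all month
}

bill = (
    usage["transfer_bytes"] / GB * rates["transfer_per_gb"]
    + usage["storage_bytes"] / GB * rates["storage_per_gb_month"]
    + usage["compute_hours"] * rates["compute_per_hour"]
)
print(f"Monthly charge: ${bill:,.2f}")   # Monthly charge: $499.70
```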

"The New System is really popular. It's being used much more than expected."

"Hey, that's great!"

Then, you might have heard,

"But this means we are running out of capacity. Performance is degrading. Users are starting to complain."

"There's no budget for an upgrade. The users will have to lump it."


Now the conversation goes down a slightly different path.

"Our monthly compute costs are twice what we budgeted."

"We can't afford that. You must do something!"


Possible and necessary

And something will be done, either to tune the running of the system, or to pass the costs on to the users. Cloud computing is making professional day-to-day cost control of IT resource use both possible and necessary.

This starts at the planning stage. For a new cloud system, estimates should include models of how costs and revenue relate to usage. Approval is then based on an understanding of the returns on investment in likely usage scenarios. And the models form the basis of day-to-day cost control during the system's life.
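
Here is a minimal sketch, with invented figures, of the kind of usage model this implies: cost and revenue each expressed as a function of usage, so approval can be weighed against ROI in several likely scenarios and the same model can later be checked against the metered bill.

```python
# Illustrative usage-scenario model: costs and revenue as functions of usage.
# All figures are invented; the point is the shape of the model, not the values.

def monthly_cost(requests: int) -> float:
    fixed = 2_000.0                  # platform/licensing baseline
    return fixed + 0.004 * requests  # metered cost per request

def monthly_revenue(requests: int) -> float:
    return 0.007 * requests          # value attributed per request served

for scenario, requests in {"low": 300_000, "expected": 900_000, "viral": 3_000_000}.items():
    cost, revenue = monthly_cost(requests), monthly_revenue(requests)
    roi = (revenue - cost) / cost
    print(f"{scenario:>8}: cost ${cost:>9,.0f}  revenue ${revenue:>9,.0f}  ROI {roi:+.0%}")
```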

Last year's Open Group “State of the Industry” cloud survey found that 55 percent of respondents thought that cloud return on investment (ROI) addressing business requirements in their organizations would be easy to evaluate and justify, but only 35 percent of respondents' organizations had mechanisms in place to do this. Clearly, the need for cost control based on an understanding of the return was not widely appreciated in the industry at that time.

We are repeating the survey this year. It will be very interesting to see whether the picture has changed.

Participation in the survey is still open. To add your experience and help improve industry understanding of the use of cloud computing, visit: http://www.surveymonkey.com/s/TheOpenGroup_2012CloudROI

This guest post comes courtesy of Chris Harding, Forum Director for SOA and Semantic Interoperability at The Open Group. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2012. All rights reserved.

You may also be interested in:

Tuesday, July 17, 2012

Where cloud computing takes us: Hybrid services delivery of essential information across all types of applications

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: HP.

The next edition of the HP Discover Performance podcast series brings together two top cloud evangelists from the recent HP Discover 2012 Conference to discuss the specific concepts around converged cloud, information clouds, and hybrid services delivery.

We’re joined by Paul Muller, the Chief Software Evangelist at HP, and Christian Verstraete, Chief Technologist for Cloud Strategy at HP. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: You’ve separated the notion of hybrid computing and hybrid delivery. Can you help me understand better why they're different, and what HP means by hybrid delivery?

Verstraete: Hybrid computing typically is combining private and public clouds. We feel that many of our customers still have a traditional environment, and that traditional environment will not go away anytime soon. However, they're actually looking at combining that traditional environment, the data that’s in that traditional environment and some of the functionality that's out there, with the public cloud and the private cloud.

The whole concept of hybrid delivery is tying that together. It goes beyond hybrid computing or hybrid cloud. It adds the whole dimension of the traditional environment. And, to our mind, the traditional environment isn't going to go away anytime soon.

Gardner: Paul, how has the traditional understanding of cloud computing as segments of infrastructure services changed?

Muller: From that perspective, the converged cloud is really about three things for us. The first is having greater levels of choice. The key point that Christian just made is that you can't afford to live in the world of, "It’s just public; it's just private; or I can ignore my traditional investments and infrastructure." Choice is critical, choice in terms of platform and application.

The second thing, though, is that in order to get great choices, you need consistency as an underlying platform to ensure that you're able to scale your people, your processes, and more importantly, your investments across those different environments.

Consistent confidence


The last one is probably the biggest area of passion for me -- confidence. We spoke a little bit earlier about how so many clients, as they move to cloud, are concerned about the arm's-length relationship they have with that provider. How can I get back the confidence in security and service levels, and make sure that that confidence is consistent across both my on-premises and off-premises environments?

Verstraete: People have started looking at cloud from pure infrastructure, reuse, and putting workflows in some particular places in infrastructure. The world is moving beyond that at the moment. On one end, you have software as a service (SaaS) starting to play and getting integrated in a complete cloud environment and a complete cloud function.

We also have to realize that, in 2011, the world created about 1.8 zettabytes of data, and that data has a heck of a lot of information that enterprises actually need. And as enterprises understand what they can get out of the data, they want that data right there at their fingertips. What makes it even more interesting is that 90 percent of that data is unstructured.

We've been working for the last 30 years with structured data. We know all about databases and everything, but we have no clue about unstructured data. How do I know the sentiments that people have compared to my brand, my business, my product? That's the sort of question that's becoming important, because if you want to do warranty management or anything else, you want to understand how your users feel. Hence, the importance of all of this data.

Muller: I’d add something else. We were here with the Customer Advisory Board. We had a pre-meeting prior to the actual conference, and one of them said something I thought was kind of interesting, remarkable actually.

He said, "If I think back 30 years, my chief concern was making sure the infrastructure was functioning as we expected it to. As I moved forward, my focus was on differentiating applications." He said, "Now that I'm moving more and more of the first two into the cloud, my focus really needs to be on harnessing the information and insight. That’s got to become the core competency and priority of my team."

Verstraete: There's one element to add, and that is the end-user. When you start talking about converged clouds -- we're not there yet, but we're getting there -- it's really about having one, single user experience. Your end-user doesn't need to know that this function runs in a public cloud, that function runs in a private cloud, or that function runs in the traditional environment.

No. He just wants to get there and use whatever it is. It's up to IT to define where they put it, but he or she just wants to have to go one way, with one approach -- and that's where you get this concept of a unique user experience. In converged cloud that’s absolutely critical.

Composite hybrids

Gardner: Another term that was a bit fresh for me here was this notion of composite hybrid applications. This was brought up by Biri Singh in his discussion. It sounds as if more and more combinations of SaaS, on-premises, virtualized, physical, and applications need to come together. In addition to that, we're going to be seeing systems of record moving to some variety of cloud or combination of cloud resources.

The question then is how can we get to the data within all of those applications to create those business processes that need to cut across them? Is that what you're talking about with Autonomy and IDOL? Is that the capability we are really moving toward, combining data and information from a variety of sources, but in a productive and useful way?

Verstraete: Absolutely. You got it spot on, Dana. It's really about using all of the information sources that you have. It's using your own private information sources, but combining them with the public information sources. Don’t forget about those. Out of that, it's gathering the information that's relevant to the particular thing that you're trying to achieve, be it compliance, understanding how people think about you, or anything else.

The result is one piece of information, but it may come from multiple sources, and you need an environment that pulls all of that data and gets at that data in a useful form, so you can start doing the analysis and then portraying the information, as you said, in a way that is useful for you. That's what IDOL and Autonomy does for us in this environment.

Muller: This has to be not yesterday, not today, but in real-time. One of the critical elements to that is being able to access that information in real-time. All of us are active in social media, and that literally reflects your customer’s attitudes from minute to minute.

Let me give you a use-case of how the two come together. Imagine that you have a customer on a phone call with a customer service operator. You could use Autonomy technology to detect, for example, the sound of their voice, which indicates that they're stressed or that they're not happy.

You can flag that and then very quickly go out to your real-time structured systems and ask, "How much of an investment has this client made in us? Are they are high net worth customer to us or are they a first-time transactor? Are they active in the social media environment? What are they saying about us right now?"

If the pattern is one that may be disadvantageous to the company, you can flag that very quickly and say, "We want to escalate this really quickly to a manager to take control of the situation, because maybe that particular customer service rep needs some coaching or needs some help." Again, not in a week’s time, not in a month’s time, but right there, right now. That’s a really important point.
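
As a purely hypothetical sketch of the decision flow Paul describes -- none of these function names or thresholds are Autonomy, Vertica, or IDOL APIs -- the escalation logic might look something like this:

```python
# Hypothetical decision flow for the call-escalation scenario described above.
# In a real deployment the three inputs would come from voice analytics,
# a structured customer-value system, and social-media monitoring respectively.

def should_escalate(voice_stress: float, customer_value: float, negative_mentions: int) -> bool:
    """Escalate when a stressed caller is high-value or publicly vocal."""
    stressed = voice_stress > 0.7           # assumed threshold on a 0-1 stress score
    high_value = customer_value > 50_000    # assumed lifetime-value threshold ($)
    vocal = negative_mentions >= 3          # assumed recent negative social posts
    return stressed and (high_value or vocal)

# Example: an unhappy, high-value customer on the line right now.
if should_escalate(voice_stress=0.85, customer_value=120_000, negative_mentions=1):
    print("Escalate to a manager and coach the rep in real time.")
else:
    print("Continue normal handling.")
```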

Gardner: This is a good vision, but if I am a developer, a business analyst, or a leader in a company and I want a dashboard that gets me this information, how do we take this fire hose of information and make it manageable and actionable?

Verstraete: There are two different elements in this. The first thing is that we’re using IDOL 10, which is basically the combination, on one hand, of Autonomy and, on the other hand, of Vertica. Autonomy is for unstructured data, and Vertica for structured data, so you get the two coming together.

We’re using that as the backbone for gathering and analyzing the whole of that information. We've made available to developers a number of APIs, so that they can tap into this in real-time, as Paul said, and then start using that information and doing whatever they want with it.

Obviously, Autonomy and Vertica will give you the appropriate information, the sentiment, and the human information, as we talked about. Now, it's up to you to decide what you want to do with that, what you want to do with the signals that you receive. And that's what the developer can do in real-time, at the moment.

Gardner: Paul, any thoughts in making this fire hose of data actionable?

Muller: Just one simple thought, which is meaning. The great challenge is not lack of data or information, but it's the sheer volume as you pointed out, when a developer thinks about taking all of the information that's available. A simple Google query or a Bing query will yield hundreds, even millions of results. Type in the words "Great Lakes," and what are you going to get back? You'll get all sorts of information about lakes.

But if you’re looking, for example, for information about depth of lakes, where the lakes are, where are lakes with holiday destinations, it's the meaning of the query that's going to help you reduce that information and help you sort the wheat from the chaff. It's meaning that's going to help developers be more effective, and that's one of the reasons why we focus so heavily on that with IDOL 10.
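
A toy illustration of the difference meaning makes: a bare keyword match returns everything about lakes, while a query constrained by the user's intent (depth of lakes, in this invented example) narrows it to what was actually wanted.

```python
# Toy contrast between keyword matching and intent-constrained matching.
docs = [
    "Lake Superior reaches a maximum depth of 406 metres.",
    "Great Lakes holiday packages: lakeside cabins and boat rentals.",
    "The Great Lakes hold about 21% of the world's surface fresh water.",
]

keyword_hits = [d for d in docs if "lake" in d.lower()]

# Intent: the user wants facts about lake depth, not holidays or trivia.
depth_hits = [d for d in keyword_hits if any(w in d.lower() for w in ("depth", "metres", "deep"))]

print(len(keyword_hits), "keyword matches;", len(depth_hits), "match the intended meaning")
```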

Gardner: And just to quickly follow up on that, who decides the meaning? Is this the end user who can take action against this data, or does it have to go through IT and a developer and a business analyst? How close can we get to those people at an individual level so that they can ascertain the meaning and then act on it?

Muller: It's a brilliant question, because meaning in the old sense of the term -- assigning meaning is a better way of putting it -- was ascribed to the developer. Think about tagging a blog, for example. What is this blog about? Well, this blog might be about something as you’re writing it, but as time goes on, it might be seen as some sort of historic record of the sentiment of the times.

So it moves from being a statement of fact to a statement of sentiment. The meaning of the information will change, depending on its time, its purpose, and its use. You can't foresee it, you can't predict it, and you certainly can't entrust a human with the task of specifically documenting the meaning for each of those elements.

Appropriate meaning

What we focus on is allowing the information itself to ascribe its own meaning and the user to find the information that has the appropriate meaning at the time that they need it. That's the big difference.

Gardner: So the power of the cloud and the power of an engine like IDOL and Vertica brought to bear is to be able to serve up the right information to the right person at the right time -- rather than them having to find it and know what they want.

Verstraete: Exactly, that's exactly what it is. With that information they can then start doing whatever they want to do in their particular application and what they want to deliver to their end-user. You’re absolutely spot-on with that.

Gardner: Let's go to a different concept around the HP Converged Cloud. It seems as if we’re moving toward a cloud of clouds. You don’t seem to want to put other public cloud providers out of business.

You seem to say, "Let them do what they do. We want to get in front of them and add value, so that those coming in through our [HP] cloud, and accessing their services vis-à-vis other clouds, can get better data and analysis, security, and perhaps even some other value-added services." Or am I reading this wrong?

Verstraete: No, you’re actually reading this right. One of the issues that you have with public clouds today isn't a question of whether public cloud is secure or not secure or whether it's compliant or not compliant. Many customers don’t have the transparency to understand what is really happening, and with transparency comes trust.

A lot of our customers tell us, "For certain particular workloads, we don't really trust this or that cloud, because we don't really know what they do. So give us a cloud or something that delivers the same type of functionality, but where I can understand what is done from a security perspective, a process perspective, a compliance perspective, an SLA perspective, and so on."

They ask: "Where can I have a proper contract, not these little Ts and Cs that I tick in the box? Where can I have the real proper contract and understand what I'm getting into, so that I can analyze my potential risk and decide what security I want to have, and what risk I'm prepared to take?"

Gardner: So the way in which I would interface with the HP managed services cloud of clouds would be through SLAs and key performance indicators (KPIs), and the language of business risk, rather than an engineer’s check list.

Muller: Absolutely, exactly right. That's the important point. Christian talks about this all the time. It’s not about cloud; it’s about the services, and it’s about describing those services in terms of what a businessperson can understand. What am I going to get, what cost, at what quality, at what time, at what level of risk and security? And can I find the right solution at the right time?

Wisdom of the crowds

Gardner: You've been talking with CIOs and leaders within business. Christian, first with you, does anything jump out as interesting from the marketplace that perhaps you didn’t anticipate? Where are they interested most in this notion of the HP Converged Cloud?

Verstraete: A lot of customers, at least the ones that I talk to, are interested in how they can start taking advantage of this whole brand-new way with existing applications. A number of them are not ready to say, "I'm going to ditch what I have, and I am going to do something else." They just say, "I'm confident with and comfortable with this, but can I take advantage of this new functionality, this new environment? How do I transform my applications to be in this type of a world?" That's one of the elements that I keep hearing quite a lot.

Gardner: So a crawl-walk-run, a transition, a journey. This isn’t a switch you flip; this is really a progression.

Verstraete: That is why the presence of the traditional environment, as we said at the beginning, is so important. You don't just take the 3,000 applications you have, move them around, have them all work, and forget about the traditional environment. That's not how it works. It's really a period of starting to move, and slowly but surely taking full advantage of what this converged cloud really delivers to you.

Gardner: Paul, what is that community here telling you about their interests in the cloud?

Muller: A number of things, but I think the primary one is just getting ahead of this consumerization trend and being able to treat the internal IT organization and almost transforming it into something that looks and feels like an external service provider.

So the simplicity, ease of consumption, transparency of cost, the choice, but also the confidence that comes from dealing with that sort of consumerized service, is there, whether it's bringing your own device or bringing your own service or combining it on- and off-premises together.

Verstraete: Chris Anderson in his HP Discover keynote said something that resonated quite a lot with me. If you, as a CIO, want to remain competitive, you'd better get quick, and you'd better start transforming and move. I very much believe that, and I think that's something that we need, that our CIOs actually need to understand.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Saturday, July 14, 2012

Here's how to better leverage TOGAF to deliver DoDAF capabilities and benefits

Register for The Open Group Conference
July 16-18 in Washington, D.C. Watch the live stream.

This guest post comes courtesy of Chris Armstrong, President of Armstrong Process Group, Inc.

By Chris Armstrong

In today’s environment of competing priorities and constrained resources, companies and government agencies are in even greater need to understand how to balance those priorities, leverage existing investments and align their critical resources to realize their business strategy. Sound appealing?

It turns out that this is the fundamental goal of establishing an Enterprise Architecture (EA) capability. In fact, we have seen some of our clients position EA as the Enterprise Decision Support capability – that is, providing an architecture-grounded, fact-based approach to making business and IT decisions.

Many government agencies and contractors have been playing the EA game for some time -- often in the context of mandatory compliance with architecture frameworks, such as the Federal Enterprise Architecture (FEA) and the Department of Defense Architecture Framework (DoDAF).

We’re seeing a new breed of organizations that are looking past contractual compliance and want to exploit the business transformation dimension of EA.



These frameworks often focus significantly on taxonomies and reference models that organizations are required to use when describing their current state and their vision of a future state. We’re seeing a new breed of organizations that are looking past contractual compliance and want to exploit the business transformation dimension of EA.

In the Department of Defense (DoD) world, this is in part due to the new “capability driven” aspect of DoDAF version 2.0, where an organization aligns its architecture to a set of capabilities that are relevant to its mission.

The addition of the Capability Viewpoint (CV) in DoDAF 2 enables organizations to describe their capability requirements and how their organization supports and delivers those capabilities. The CV also provides models for representing capability gaps and how new capabilities are going to be deployed over time and managed in the context of an overall capability portfolio.

Critical difference

Another critical difference in DoDAF 2 is the principle of “fit-for-purpose,” which allows organizations to select which architecture viewpoints and models to develop based on mission/program requirements and organizational context. One fundamental consequence of this is that an organization is no longer required to create all the models for each DoDAF viewpoint. They are to select the models and viewpoints that are relevant to developing and deploying their new, evolved capabilities.

While DoDAF 2 does provide some brief guidance on how to build architecture descriptions and subsequently leverage them for capability deployment and management, many organizations are seeking a more well-defined set of techniques and methods based on industry standard best practices.

This is where the effectiveness of DoDAF 2 can be significantly enhanced by integrating it with The Open Group Architecture Framework (TOGAF) version 9.1, in particular the TOGAF Architecture Development Method (ADM). The ADM not only describes how to develop descriptions of the baseline and target architectures, but also provides considerable guidance on how to establish an EA capability and how to perform architecture roadmapping and migration planning.

Most important, the TOGAF ADM describes how to drive the realization of the target architecture through integration with the systems engineering and solution delivery lifecycles. Lastly, TOGAF describes how to sustain an EA capability through the operation of a governance framework to manage the evolution of the architecture. In a nutshell, DoDAF 2 provides a common vocabulary for architecture content, while TOGAF provides a common vocabulary for developing and using that content.

I hope that those of you in the Washington, D.C. area will join me at The Open Group Conference beginning July 16, where we’ll continue the discussion of how to deliver DoDAF capabilities using TOGAF. For those of you who can’t make it, I’m pleased to announce that The Open Group will also be delivering a livestream of my presentation (free of charge) on Monday, July 16 at 2:45 p.m. ET.

Hope to see you there!

This guest post comes courtesy of Chris Armstrong, President of Armstrong Process Group, Inc. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2012. All rights reserved.

Register for The Open Group Conference
July 16-18 in Washington, D.C. Watch the live stream.

You may also be interested in:

Friday, July 13, 2012

The Open Group Trusted Technology Forum is leading the way to securing global IT supply chains

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: The Open Group.
 
This BriefingsDirect thought leadership interview comes in conjunction with The Open Group Conference in Washington, D.C., beginning July 16. The conference will focus on enterprise architecture (EA), enterprise transformation, and securing global supply chains.

We're joined in advance by some of the main speakers at the conference to examine the latest efforts to make global supply chains for technology providers more secure, verified, and therefore trusted. We'll examine the advancement of The Open Group Trusted Technology Forum (OTTF) to gain an update on the effort's achievements, and to learn more about how technology suppliers and buyers can expect to benefit.

The expert panel consists of Dave Lounsbury, Chief Technical Officer at The Open Group; Dan Reddy, Senior Consultant Product Manager in the Product Security Office at EMC Corp.; Andras Szakal, Vice President and Chief Technology Officer at IBM's U.S. Federal Group, and also the Chair of the OTTF, and Edna Conway, Chief Security Strategist for Global Supply Chain at Cisco. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:

Gardner: Why this is an important issue, and why is there a sense of urgency in the markets?

Lounsbury: The Open Group has a vision of boundaryless information flow, and that necessarily involves interoperability. But interoperability doesn't have the effect that you want, unless you can also trust the information that you're getting, as it flows through the system.

Therefore, it’s necessary that you be able to trust all of the links in the chain that you use to deliver your information. One thing that everybody who watches the news would acknowledge is that the threat landscape has changed. As systems become more and more interoperable, we get more and more attacks on the system.

As the value that flows through the system increases, there’s a lot more interest in cyber crime. Unfortunately, in our world, there's now the issue of state-sponsored incursions in cyberspace, whether officially state-sponsored or not, but politically motivated ones certainly.

So there is an increasing awareness on the part of government and industry that we must protect the supply chain, both through increasing technical security measures, which are handled in lots of places, and in making sure that the vendors and consumers of components in the supply chain are using proper methodologies to make sure that there are no vulnerabilities in their components.

I'll note that the demand we're hearing is increasingly for work on standards in security. That’s top of everybody's mind these days.

Reddy: One of the things that we're addressing is the supply chain item that was part of the Comprehensive National Cybersecurity Initiative (CNCI), which spans the work of two presidents. Initiative 11 was to develop a multi-pronged approach to global supply chain risk management. That really started the conversation, especially in the federal government as to how private industry and government should work together to address the risks there.

In the OTTF, we've tried to create a clear, measurable way to address supply-chain risk. It's been really hard to even talk about supply chain risk, because you have to start with getting a common agreement about what the supply chain is, and then talk about how to deal with risk by following best practices.

Szakal: One of the observations that I've made over the last couple of years is that this group of individuals, who are now part of this standards forum, have grown in their ability to collaborate, define, and rise to the challenges, and work together to solve the problem.

Standards process

Technology supply chain security and integrity are not necessarily a set of requirements or an initiative that has been taken on by the standards committee or standards groups up to this point. The people who are participating in this aren't your traditional IT standards gurus. They had to learn the standards process. They had to understand how to approach the standardization of best practices, which is how we approach solving this problem.

It’s sharing information. It’s opening up across the industry to share best practices on how to secure the supply chain and how to ensure its overall integrity. Our goal has been to develop a framework of best practices and then ultimately take those codified best practices and instantiate them into a standard, which we can then assess providers against. It’s a big effort, but I think we’re making tremendous progress.

Gardner: Because The Open Group Conference is taking place in Washington, D.C., what’s the current perception in the U.S. Government about this in terms of its role?

Szakal: The government has always taken a prominent role, at least to help focus the attention of the industry.

Now that they’ve corralled the industry and they’ve got us moving in the right direction, in many ways, we’ve fought through many of the intricate complex technology supply chain issues and we’re ahead of some of the thinking of folks outside of this group because the industry lives these challenges and understands the state of the art. Some of the best minds in the industry are focused on this, and we’ve applied some significant internal resources across our membership to work on this challenge.

So the government is very interested in it. We’ve had collaborations all the way from the White House across the Department of Defense (DoD) and within the Department of Homeland Security (DHS), and we have members from the government space in NASA and DoD.

It’s very much a collaborative effort, and I'm hoping that it can continue to be so and be utilized as a standard that the government can point to, instead of coming up with their own policies and practices that may actually not work as well as those defined by the industry.

Conway: Our colleagues on the public side of the public-private partnership that is addressing supply-chain integrity have recognized that we need to do it together.

More importantly, you need only to listen to a statement, which I know has often been quoted, but it’s worth noting again from EU Commissioner Algirdas Semeta. He recently said that in a globalized world, no country can secure the supply chain in isolation. He recognized that, again quoting, national supply chains are ineffective and too costly unless they’re supported by enhanced international cooperation.

Mindful focus

The one thing that we bring to bear here is a mindful focus on the fact that we need a public-private partnership to comprehensively address supply chain integrity in the information and communications technology industry internationally. That has been very important in our focus. We want to be a one-stop shop of best practices that the world can look at, so that we continue to benefit from commercial technology which sells globally and is frequently built once or on a limited basis.

Combining that international focus and the public-private partnership is something that's really coming home to roost in everyone’s minds right now, as we see security value migrating away from an end point and looking comprehensively at the product lifecycle or the global supply chain.

Lounsbury: I had the honor of testifying before the U.S. House Energy and Commerce Committee on Oversight Investigations, on the view from within the U.S. Government on IT security.

It was very gratifying to see that the government does recognize this problem. We had witnesses in from the DoD and Department of Energy (DoE). I was there, because I was one of the two voices on industry that the government wants to tap into to get the industry’s best practices into the government.

It was even more gratifying to see that the concerns that were raised in the hearings were exactly the ones that the OTTF is pursuing. How do you validate a long and complex global supply chain in the face of a very wide threat environment, recognizing that it can’t be any single country? Also, it really does need to be not a process that you apply to a point, but something where you have a standard that raises the bar for our security for all the participants in your supply chain.

So it was really good to know that we were on track and that the government, and certainly the U.S. Government, as we’ve heard from Edna, the European governments, and I suspect all world governments are looking at exactly how to tap into this industry activity.

Gardner: Where we are in the progression of OTTF?

Lounsbury: In the last 18 months, there has been a tremendous amount of progress. The thing that I'll highlight is that early in 2012, the OTTF published a snapshot of the standard. A snapshot is what The Open Group uses to give a preview of what we expect the final standard will contain. It has fleshed out two areas, one on tainted products and one on counterfeit products, the standards and best practices needed to secure a supply chain against those two vulnerabilities.

So that’s out there. People can take a look at that document. Of course, we would welcome their feedback on it. We think other people have good answers too. Also, if they want to start using that as guidance for how they should shape their own practices, then that would be available to them.

Normative guidance

That’s the top development topic inside the OTTF itself. Of course, in parallel with that, we're continuing to engage in an outreach process and talking to government agencies that have a stake in securing the supply chain, whether it's part of government policy or other forms of steering the government to making sure they are making the right decisions. In terms of exactly where we are, I'll defer to Edna and Andras on the top priority in the group.

Gardner: Edna, what’s been going on at OTTF and where do things stand?

Conway: We decided that this was, in fact, a comprehensive effort that was going to grow over time and change as the challenges change. We began by looking at two primary areas, which were counterfeit and taint in that communications technology arena. In doing so, we first identified a set of best practices, which you referenced briefly inside of that snapshot.

Where we are today is adding the diligence, and extracting the knowledge and experience from the broad spectrum of participants in the OTTF to establish a set of rigorous conformance criteria that allow a balance between flexibility and how one goes about showing compliance to those best practices, while also assuring the end customer that there is rigor sufficient to ensure that certain requirements are met meticulously, but most importantly comprehensively.

We have a practice right now where we're going through each and every requirement or best practice and thinking through the broad spectrum of the development stage of the lifecycle, as well as the end-to-end nodes of the supply chain itself.

This is to ensure that there are conformance requirements that can be pointed to, both by those who seek accreditation to this international standard and by those who rely on that accreditation as the imprimatur of a higher degree of trustworthiness in the products and solutions afforded to them when they select an OTTF-accredited provider.

Gardner: Andras, I'm curious where in an organization like IBM these issues are most enforceable. Where within the private sector does the knowledge and the expertise reside?

Szakal: Speaking for IBM, we recently celebrated our 100th anniversary in 2011. We’ve had a little more time than some folks to come up with a robust engineering and development process, which harkens back to the IBM 701 and the beginning of the modern computing era.

Integrated process

We have what we call the integrated product development process (IPD), which all products, hardware and software alike, follow. And we have a very robust quality assurance team, the QSE team, which ensures that folks are following the practices that are called out. Within each line of business there are specific requirements that apply more directly to the architecture of a particular product offering.

For example, the hardware group obviously has additional standards that they have to follow during the course of development that are specific to hardware development and the associated supply chain, and the same is true for the software team.

The product development teams are integrated with the supply chain folks, and we have what we call the Secure Engineering Framework, of which I was an author, and the Secure Engineering Initiative, which we have continued to evolve for quite some time now, to ensure that we're effectively engineering and sourcing components and following these Open Trusted Technology Provider Standard (O-TTPS) best practices.

In fact, the work we've done here in the OTTF has helped ensure that we're focused on all of the same areas as Edna's team at Cisco, because we've shared our best practices across all of the members. It gives us a great view into what others are doing and helps us ensure that we're following the most effective industry best practices.

Gardner: Dan, is EMC's Product Security Office similar to what Andras described at IBM? Perhaps you could give us a sense of how it's done there.

Reddy: At EMC, our Product Security Office houses the enabling expertise to define how to build our products securely. We're interested in building that in as early as possible throughout the entire lifecycle. We work with all of our product teams to measure where they are and to help them define their path forward as they look at each release of their products. And we've done a lot of work in sharing our practices within the industry.

One of the things this standard does for us, especially in the area of the supply chain, is give us a way to communicate what our practices are to our customers. Customers are looking for that kind of assurance, and rather than having a one-by-one conversation with each customer about what our practices are, this allows us to demonstrate measurement and conformance against a standard to our own customers.

Also, as we flip it around and take a look at our own suppliers, we want to be able to encourage suppliers, which may be small suppliers, to conform to a standard, as we go and select who will be our authorized suppliers.

Gardner: Dave, what would you suggest for those various suppliers around the globe to begin the process?

Publications catalog


Lounsbury: Obviously, the thing I would recommend right off is to go to The Open Group website, go to the publications catalog, and download the snapshot of the OTTF standard. That gives a good overview of the two areas of best practices for protection from tainted and counterfeit products we’ve mentioned on the call here.

That's the starting point, but of course the reason it's so important for the commercial world to lead this is that commercial vendors face market pressures and have to respond to threats quickly. So the other part of this is how to stay involved and how to stay up to date.

And of course, The Open Group offers two ways to do that. The first is to come to our quarterly conferences, where we do regular presentations on this topic. In fact, the Washington meeting is themed on supply chain security.

The second, and best, way is to actually be in the room as these standards evolve to meet the current and changing threat environment. Joining The Open Group and the OTTF is absolutely the best way to be on the cutting edge of what's happening, and to take advantage of the great information from the companies represented on this call, which, as Andras said, have invested years and years in developing their own best practices and learning from them.

Gardner: Edna, what's on the short list of next OTTF priorities?

Conway: You’ve heard us talk about CNCI, and the fact that cybersecurity is on everyone’s minds today. So while taint embodies that to some degree, we probably need to think about partnering in a more comprehensive way under the resiliency and risk umbrella that you heard Dan talk about and really think about embedding security into a resilient supply chain or a resilient enterprise approach.

In fact, to give that some forethought, we've invited a colleague I've worked with for a number of years, a leading expert in enterprise resiliency and supply chain resiliency, to join us at the upcoming conference and share his thoughts.

He is a professor at MIT, and his name is Yossi Sheffi. Dr. Sheffi will be with us. It's from that kind of information sharing, as we think in a more comprehensive way, that we begin to gather the expertise that today resides globally in different pockets, whether in academia, government, or private enterprise, and to think about what the next generation is going to look like.

Resiliency, as it was known five years ago, is nothing like supply chain resiliency today, or where we want to take it in the future. You need only look at the U.S. National Strategy for Global Supply Chain Security to understand that. When it was announced in January of this year at Davos by Secretary Napolitano of the DHS, she made it quite clear that we're now putting security at the forefront, and resiliency is a part of that security endeavor.

So that mindset is a change, given our ubiquitous reliance on communications for everything, everywhere, at all times, not only in critical infrastructure but in private enterprise, and for all of us on a daily basis. Our communications infrastructure is essential to us.

Thinking about resiliency

Given that security has taken top ranking, we're probably at the beginning of this stage of thinking about resiliency. It's not just about continuity of supply, and not just about preventing the kinds of cyber incidents we're worried about, but also about being cognizant of the nation-state or personal concerns that arise from parties engaging in malicious activity, whether for political, religious, or other reasons.

Or, as you know, some of them are just interested in seeing whether or not they can challenge the system, and that causes loss of productivity and a loss of time. In some cases, there are devastating negative impacts to infrastructure.

Szakal: There's another area that I'm highly focused on but have kind of set aside, and that's the continued development and formalization of the framework itself: that is, continuing to collect best practices from the industry and providing some method by which vendors can submit and externalize those best practices. So those are a couple of areas that I think would keep me busy for the next 12 months, easily.

Gardner: What do IT vendor companies gain if they do this properly?

Secure by Design

Szakal: Especially in this day and age, any time you approach security as part of the lifecycle, what we at IBM call Secure by Design, you're going to be ahead of the market in some ways. You're going to be in a better place. All of the best practices that we've defined are additive in effect. However, the very nature of technology as it exists today is that it will probably be another 50 or so years before we see a perfect security paradigm in the way we all think about it.

So the researchers are going to be ahead of all of the providers in many ways in identifying security flaws and helping us remediate them. That's part of what we're doing here, trying to make sure that we continue to keep these practices up to date and relevant to the entire lifecycle of commercial off-the-shelf technology (COTS) development.

So that’s important, but you also have to be realistic about the best practices as they exist today. The bar is going to move as we address future challenges.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: The Open Group.

You may also be interested in: