Saturday, February 6, 2010

ISM3 brings greater standardization to security measurement across enterprise IT

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.

Security may be the hottest topic in IT. But it's also one of the least understood.

So BriefingsDirect assembled a panel this week to examine the need for IT security to run more like a data-driven science, rather than a mysterious art form.

Rigorously applying data and metrics to security can dramatically improve IT results and reduce overall risk to the business. By applying more metrics and standards to security, the protection of IT improves, and known threats can be evaluated uniformly.

Standards like the Information Security Management Maturity Model (ISM3) are helping not only to gain greater visibility, but also to let IT leaders scale security best practices repeatably and reliably.

With standards and greater reliance on data, security practitioners can better understand what they are up against, perhaps gaining close to real-time responses. They can know what is working -- or is not -- both inside and outside of their organizations.

The security metrics panel and sponsored podcast discussion are coming to you from The Open Group’s Enterprise Architecture Practitioners Conference in Seattle on Feb. 2, 2010. The goal is to determine the strategic imperatives for security metrics, and to discuss how to use them to change the outcomes in terms of IT’s value to the business.

Our panel consists of a security executive from The Open Group, as well as two experts on security who are presenting at the consortium's Security Practitioners Conference: Jim Hietala, Vice President for Security at The Open Group; Adam Shostack, co-author of The New School of Information Security; and Vicente Aceituno, director of the ISM3 Consortium. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Hietala: We think there's a contribution to make from The Open Group, in terms of developing the ISM3 standard and getting it out there more widely. [Being a data-driven security organization means] using information to make decisions, as opposed to what vendors are pitching at you, or your gut reaction. It's getting a little more scientific about gathering data on the kinds of attacks you're seeing and the kinds of threats that you face, and using that data to inform the decisions around the right set of controls to put in place to effectively secure the organization.

A presentation we had today from an analyst firm talked about people being all over the map [on security practices]. I wouldn't say there's a lot of rigor and standardization around the kinds of data that are being collected to inform decisions, but there is some of that work going on in very large organizations. There, you typically see a little more mature metrics program. In smaller organizations, not so much. It's a little all over the map.

... The important outputs of a good metrics program can be that it gives you a different way to talk to your senior management about the progress that you're making against the business objectives and security objectives.

That's been an area of enormous disconnect. Security professionals have tended to talk about viruses, worms, and other relatively technical things, but haven't been able to show a trend to senior management that justifies the kind of spending they have been doing and the kind of spending they need to do in the future. Business language is needed in this area.

Shostack: We have an opportunity to be a heck of a lot more effective than we have been. We can say, "This control that we all thought was a really good idea -- well, everyone is doing it, and it's not having the impact that we would like." So, we can reassess how we're getting real, where we're putting our dollars.

The big change we've seen is that people have started to talk about the problems that they are having, as a result of laws passed in California and elsewhere that require them to say, "We made a mistake with data that we hold about you," and to tell their customers.

We've seen that a lot of the things we feared would happen haven't come to pass. We used to say that your company would go out of business and your customers would all flee. It's not happening that way. So, we're getting an opportunity today to share data in a way that’s never been possible before.

Aceituno: The top priority should be to make sure that the things you measure are contributing positively to the value that you're bringing to the business as an information security management (ISM) practitioner. That's the focus. Are you measuring things that are actually bringing value, or are you measuring things that are fancy or look good?

Because metrics are all about controlling what you do and being able to manage the outputs that you produce and that contribute value to the business ... you can use metrics to manage internal factors.

I don't think it brings a bigger return on investment (ROI) to collect metrics on external things that you can't control. It's like hearing the news. What can you do about it? You're not the government, and you're not directly involved. It's only the internal metrics that really make sense.

Basically, we link business goals, business objectives, and security objectives in a way that’s never been done before, because we are painfully detailed when we express the outcomes that you are supposed to get from your ISM system. That will make it far easier for practitioners to actually measure the things that matter.

Business value approach

Shostack: Vicente's point about measuring the things you can control is critical. Oftentimes in security, we don't like to admit that we've made mistakes, and we conceal some of the issues that are happening. A metrics initiative gives you the opportunity to get out there and talk about what's going on, not in a finger-pointing way, which has happened so often in the past, but in an objective and numerically centered way. That gives us an opportunity to improve.

Hietala: There's some taxonomy work to be done. One of the real issues in security is that when I say "threat," do other people have the same understanding? Risk management is rife with different terms that mean different things to different people. So getting a common taxonomy is something that makes sense.

The kinds of metrics we're collecting can be all over the map, but generally they're the things that would guide the right kind of decision making within an IT security organization around the question, "Are we doing the right things?"

Today, Vicente used an example of looking at vulnerabilities that are found in web applications. A critical metric was how long those vulnerabilities are out there before they get fixed, by different lines of business and different parts of the business, looking at how the organization is responding. We're trying to drive that metric toward the vulnerabilities being open for less time and getting fixed more quickly.
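As a purely illustrative aside, the metric Hietala describes -- how long vulnerabilities stay open, broken down by line of business -- is straightforward to compute once the remediation dates are captured. The record layout, dates, and names below are hypothetical, a minimal sketch rather than anything from the standard itself:

```python
from datetime import date
from collections import defaultdict

# Hypothetical vulnerability records: (line_of_business, date_found, date_fixed)
findings = [
    ("retail",  date(2010, 1, 4),  date(2010, 1, 18)),
    ("retail",  date(2010, 1, 11), date(2010, 1, 15)),
    ("banking", date(2010, 1, 5),  date(2010, 2, 1)),
]

def mean_days_open(records):
    """Average number of days vulnerabilities stayed open, per line of business."""
    buckets = defaultdict(list)
    for lob, found, fixed in records:
        buckets[lob].append((fixed - found).days)
    return {lob: sum(days) / len(days) for lob, days in buckets.items()}

print(mean_days_open(findings))
# retail averages (14 + 4) / 2 days open; banking a single 27-day window
```

Trending this figure per reporting period is what lets management see whether each line of business is actually closing vulnerabilities faster over time.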

Shostack: We've seen over the last few years that those security programs that succeed are the ones that talk to the business needs and talk to the executive suite in language that the executives understand.

The real success here and the real step with ISM3 is that it gives people a prescriptive way to get started on building those metrics.

You can pick it up and look at it and say, "Okay, I'm going to measure these things. I'm going to trend on them. And I'm going to report on them."

As we get toward a place where more people are talking about those things, we'll start to see an expectation that security is a little bit different. There is a risk environment that's largely outside of people's control, but this gives people a way to get a handle on it.

Aceituno: The main task of the ISM3 Consortium so far was to manage the ISM3 standard. I'm very happy to say that The Open Group and ISM3 Consortium reached an agreement and, with this agreement, The Open Group will be managing ISM3 from here on in. We'll be devoting our time to other things, like teaching and consulting services in Spain, which is our main market. I can't think of anything better than for ISM3 to be managed from The Open Group.

Hietala: You have metrics and control approaches in various areas, and you can pick a starting point. You can come at this top-down, if you're trying to implement a big program. Or, you can come at it bottom-up and pick a niche where you know you are not doing well and want to establish some rigor around what you are doing. You can do a smaller implementation and get some benefit out of it. It's approachable either way.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.

Thursday, February 4, 2010

ArchiMate gives business leaders and IT architects a common language to describe how the enterprise works best

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.

Our next podcast discussion looks at ArchiMate, a way of conceptualizing, modeling, and controlling enterprise architecture (EA) and business architecture.

ArchiMate provides ways to develop visualizations and control of IT architecture to more swiftly obtain business benefits. To learn more, we interview an expert on this, Dr. Harmen van den Berg, partner and co-founder at BiZZdesign.

This podcast was recorded Feb. 2 at The Open Group’s Enterprise Architecture Practitioners Conference in Seattle the week of Feb. 1, 2010. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Gardner: I really enjoyed your presentation on ArchiMate. How did the standard come about?

Dr. Harmen van den Berg: ArchiMate was developed in the Netherlands by a number of companies and research institutes. They developed it, because there was a lack of a language for describing EA. After it was completed, they offered it to The Open Group as a standard.

Gardner: What problems does it solve?

Van den Berg: The problem that it solves is that you need a language to express yourself, just like normal communication. If you want to talk about the enterprise and the important assets in the enterprise, the language supports that conversation.

Gardner: We are talking about more and more angles on this conversation, now that we talk about cloud computing and hybrid computing. It seems as if the complexity of EA and the ability to bring in the business side, provide them with a sense of trust in the IT department, and allow the IT department to better understand the requirements of the business, all need a new language. Do you think it can live up to that?

Van den Berg: Yes, because if you look at other languages, like UML, which is for system development and is a very detailed language, it only covers a very limited part of the complete enterprise. ArchiMate is focused on giving you a language for describing the complete enterprise, from all different angles, not on a detailed level, but on a more global level, which is understandable to the business as well.

Gardner: So more stakeholders can become involved with something like ArchiMate. I guess that's an important benefit here.

Van den Berg: Yes, because the language is not focused only on IT, but on the business as well and on all kinds of stakeholders in your organization.

Gardner: How would someone get started, if they were interested in using ArchiMate to solve some of these problems? What is the typical way in which this becomes actually pragmatic and useful?

Van den Berg: The easiest way is just to start describing your enterprise in terms of ArchiMate. The language forces you to describe it on a certain global level, which gives you direct insight into the coherence within your enterprise.

Gardner: So, this allows you to get a meta-view of processes and assets that are fundamentally in IT, but have implications for and reverberate around the business.

Don't have to start in IT

Van den Berg: You don't have to start in IT. You can just start at the business side: What are the products? What are the services? And, how are they supported by IT? That's a very useful way to start, not from the IT side, but from the business side.

Gardner: Are there certain benefits or capabilities in ArchiMate that would, in fact, allow it to do a good job at defining and capturing what goes on across an extended enterprise, perhaps hybrid sourcing or multiple sourcing of business processes and services?

Van den Berg: It's often used, for example, when you have an outsourcing project to describe not only your internal affairs, but also your relation with other companies and other organizations.

Gardner: What are some next steps with ArchiMate within The Open Group as a standard? Tell us what it might be maturing into or what some of the future steps are.

Van den Berg: The future steps are to align it more with TOGAF, which is the process for EA, and also extending it to cover more elements that are useful to describe an EA.

Gardner: And for those folks who would like to learn more about ArchiMate and how to get this very interesting view of their processes, business activities, and IT architecture variables, where would you go?

Van den Berg: The best place to go is The Open Group website. There is a section on ArchiMate and it gives you all the information.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.

You may also be interested in:

'Business architecture' helps business and IT leaders decide on and communicate changes at the new speed of business

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.

What's the difference between enterprise architecture (EA) and business architecture (BA)? We pose the question to Tim Westbrock, Managing Director of EAdirections, as part of a sponsored podcast discussion coming to you from The Open Group’s Enterprise Architecture Practitioners Conference in Seattle, the week of Feb. 1, 2010.

The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Gardner: I really enjoyed your presentation today. Can you tell us a little bit about some of the high-level takeaways. Principally, how do you define BA?

Westbrock: Well, the premise of my discussion today is that, in order for EA to endure and continue to evolve, we have to go outside the domain of IT. Hence, the conversation about BA. To me, BA is an intrinsic component of EA, but what most people really perform in most organizations that I see is IT architecture.

A real business-owned enterprise business architecture and enterprise information architecture are really the differentiating factors for me. I'm not one of these guys who is strict about definitions. You've got to get a sense from the words that you use.

To me enterprise business architecture is a set of artifacts and methods that helps business leaders make decisions about direction and communicate the changes that are required in order to achieve that vision.

Gardner: How do we get here? What's been the progression? And, why has there been such a gulf between what the IT people eat, sleep, and drink, and what the business people expect?

Westbrock: There are a lot of factors in that. Back in the late '80s and early '90s, we got really good at providing solutions really quickly in isolated spots. What happened in most organizations is that you had really good isolated solutions all over the place. Integrated? No. Was there a need to integrate? Eventually. And, that's when we began really piling up the complexity.

We went from an environment, where we had one main vendor or two main vendors, to every specific solution having multiple vendors contributing to the software and the hardware environment.

That complexity is something that the business doesn’t really understand, and we haven’t done a real good job of getting the business to understand the implications of that complexity. But, it's not something they should really be worried about. It's our excuse sometimes that it's too complex to change quickly.

Focus on capabilities

We really need to focus the conversation on capabilities. Part of my presentation talked about deriving capabilities as the next layer of abstraction down from business strategy, business outcomes, and business objectives. It's a more finite discussion of the real changes that have to happen in an organization, to the channel, to the marketing approach, to the skill mix, and to the compensation. They're real things that have to change for an organization to achieve its strategies.

In IT architecture, we talk about the changes in the systems. What are the changes in the data? What are the changes in the infrastructure? Those are capabilities that need to change as well. But, we don't need to talk about the details of that. We need to understand the capabilities that the business requires. So, we talk to folks a lot about understanding capabilities and deriving them from business direction.

Gardner: It seems to me that, over the past 20 or 30 years, the pace of IT technological change was very rapid -- business change, not so much. But now, it seems as if the technology change is not quite as fast, but the business change is. Is that a fair characterization?

Westbrock: It's unbelievably fast now. It amazes me when I come across an organization now that's surviving and they can't get a new product out the door in less than a year -- 18 months, 24 months. How in the world are they responding to what their customers are looking for, if it takes that long to get system changes and products out the door?

We're looking at organizations trying monthly, every six weeks, every two months, quarterly to get significant product system changes out the door in production. You've got to be able to respond that quickly.

Gardner: So, in the past, the IT people had to really adapt and change to the technology that was so rapidly shifting around them, but now the IT people need to think about the rapidly shifting business environment around them.

Westbrock: "Think about," yes, but not "figure out." That's the whole point. BA is a means by which we can engage as IT professionals with the business leadership, the business decision-makers who are really deciding how the business is going to change.

Some of that change is a natural response to government regulations, competitive pressures, political pressures, and demographics, but some of it is strategic, conscious decisions, and there are implications and dependencies that come along with that.

Sometimes, the businesses are aware of them and sometimes they're not. Sometimes, we understand as IT professionals -- some not all -- about those dependencies and those implications. By having that meaningful dialogue on an ongoing basis, not just as a result of the big implementation, we can start to shorten that time to market.

Gardner: So, the folks who are practitioners of BA, rather than more narrowly EA, have to fill this role of Rosetta Stone in the organization. They have to translate cultural frames of mind and ideas about the priorities between that IT side and the business side.

Understanding your audience

Westbrock: This isn't a technical skill, but understanding your audience is a big part of doing this. We like to joke about executives being ADD and not really being into the details, but you know what, some are. We've got to figure out the right way to communicate with this set of leadership that's really steering the course for our enterprise.

That's why there's no, "This is the artifact to create." There's no, "This is the type of information that they require." There is no, "This is the specific set of requirements to discuss."

That's why we like to start broad. Can you build the picture of the enterprise on one page and have conversations maybe that zero in on a particular part of that? Then, you go down to other levels of detail. But, you don't know that until you start having the conversation.

Gardner: Okay, as we close out, you mentioned something called "strategic capability changes." Explain that for us.

Westbrock: To me, so many organizations have great vision and strategy. It comes from their leadership. They understand it. They think about it. But, there's a missing linkage between that vision, that strategy, that direction, and the actual activities that are going on in an organization. Decisions are being made about who to hire, the kinds of projects we decide to invest in, and where we're going to build our next manufacturing facility. All those are real decisions and real activities that are going on on a daily basis.

This jump from high-level strategy down to tactical daily decision-making and activities is too broad of a gap. So, we talk about strategic capability changes as being the vehicle that folks can use to have that conversation and to bring that discussion down to another level.

When we talk about strategic capability changes, it's the answer to the question, "What capabilities do we need to change about our enterprise in order to achieve our strategy?" But, that's a little bit too high level still. So, we help people carve out the specific questions that you would ask about business capability changes, about information capability changes, system, and technology.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.

You may also be interested in:

The Open Group SOA Work Group making strides to deepen link between TOGAF and SOA

This guest post comes courtesy of Mats Gejnevall, a global enterprise architect from Capgemini.

By Mats Gejnevall

As an attendee at this week's Open Group Seattle Conference 2010, I met with an international group of enterprise architecture (EA) thought-leaders, including Dave Hornford, the new chair of The Open Group Architecture Forum; Tony Carrato from IBM; Steve Bennett from Oracle; and Chris Greenslade from CLARS, to enhance the TOGAF practical guide for doing service-oriented architectures.

In case you're wondering, the practical guide is a best-practice tool for EA practitioners to speed up and unify the way the industry creates service-oriented architectures (SOA). One item we hope to make clear to the industry is that service orientation is not only about producing some web services and hoping that will improve the agility and cost structures of the organization.

The evolution to service orientation should be a carefully orchestrated process that includes everything from assessing an enterprise’s ability to change, to identifying the areas that really need service orientation properties, to creating the strategic, segment and capability architectures for those areas and finally, to defining the transition roadmap to implement the SOA strategy.

Our discussions focused on creating a guide that is easy to understand and use, but that would also serve as a complete description of how the different phases of TOGAF should be adapted by an enterprise. The result is a user-friendly path through the TOGAF framework with continuous references to the TOGAF content meta-model.

Our work validates the claim that TOGAF is valid for all types of architecture styles, while also proving that there are many “ifs” and “buts” during an organization’s adoption path.

One important issue to always keep in mind is that enterprise architecture (EA) is not about doing low-level IT design, but about creating structures in your organization that fulfill your long-term business goals and ambitions. The low-level design activities will be performed during the actual implementation of the project.

Additionally, we discussed the practical guide's relationship with other Open Group SOA projects (such as SOA Governance and SOA Reference Architecture) in great detail to ensure that the input from the meta-model objects to these projects was properly included and identified.

The resulting practical guide is due the first part of this year. More information on The Open Group SOA Work Group can be found here: http://www.opengroup.org/projects/soa

This guest post comes courtesy of Mats Gejnevall, a global enterprise architect from Capgemini.

The Open Group seeks to spur evolution of security management from an art to a science

This guest post comes courtesy of Jim Hietala, Vice President of Security for The Open Group.

By Jim Hietala

As we wrapped up day one of the Security Practitioners Conference Plenary here at The Open Group Seattle Conference this week, I must say we heard excellent presentations on security management and metrics from Adam Shostack at Microsoft, Vicente Aceituno from ISM3 Consortium, Mike Jerbic at Trusted Systems Consulting, Phil Schacter from The Burton Group, and Kip Boyle from Pemco Insurance.

Some of the key takeaways included:
  • There is a real need for better external, big-picture data about attacks, the controls that are in place, and their effectiveness. Without objective data of this sort, it's difficult to have an intelligent discussion as to whether things are getting better or worse, to develop an understanding of attacks and threat vectors, and to establish what really constitutes best-practice controls. Data from sources such as the Verizon Data Breach Investigations Report and DataLossDB are highly valuable, but more data (and more detailed data) is needed.

  • There’s also a clear need to instrument our security programs, being careful to measure the right things. Security metrics are best when they directly support decision-making supporting business goals. Put another way, for an e-commerce company, a security metric that informs management as to how many viruses are scrubbed from desktops is not really relevant to the mission. A metric that measures the mean time to remediate web application vulnerabilities is directly relevant, as reducing this is very consequential to the overall business goal.

  • Adding a maturity-level approach to information security management (as is done in ISM3, a new security management project in The Open Group Security Forum) makes this method a lot more approachable for more kinds of businesses. After all, a higher maturity level that might be appropriate for a Fortune 100 company or a defense firm is unattainable for a typical small- to medium-sized business.

  • Continuous improvement in managing information security depends on effective, relevant metrics.

It's clear that security management is steadily moving from art to science. Effective metrics and a maturity-model approach are critical to helping this transition happen.
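To make the second takeaway above concrete: the mean-time-to-remediate metric for web application vulnerabilities lends itself naturally to trending per reporting period. The figures, period labels, and function names below are hypothetical, a minimal sketch of how such a trend might be computed:

```python
from statistics import mean

# Hypothetical remediation times (days) for web-app vulnerabilities, by quarter
remediation_days = {
    "2009-Q3": [30, 45, 21, 60],
    "2009-Q4": [25, 18, 40],
    "2010-Q1": [14, 22, 9],
}

def mttr_trend(history):
    """Mean time to remediate per period, keyed in chronological order."""
    return {period: round(mean(days), 1) for period, days in sorted(history.items())}

for period, mttr in mttr_trend(remediation_days).items():
    print(f"{period}: {mttr} days")
```

A declining series here is exactly the kind of trend a security team can report to senior management in business terms, rather than in counts of viruses or worms.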

For more information about the work The Open Group Security Forum is doing to encourage the evolution of security management, please visit: http://www.opengroup.org/security/.

This guest post comes courtesy of Jim Hietala, Vice President of Security for The Open Group.

New definition of enterprise architecture emphasizes 'fit for purpose' across IT undertakings

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.

This live event podcast discussion comes to you from The Open Group’s Enterprise Architecture Practitioners Conference in Seattle, the week of Feb. 1, 2010.

We examine the definition of enterprise architecture (EA), the role of the architect, and how that might be shifting, with an expert from The Open Group, Len Fehskens, Vice President of Skills and Capabilities. The interview is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Gardner: I was really intrigued by your presentation, talking, with a great deal of forethought obviously, about the whole notion of EA, the role of the architect, this notion of "fit for purpose." We want to have the fit-for-purpose discussion about EA. What are the essential characteristics of this new definition?

Fehskens: You'll remember that one of the things I hoped to do with this definition was understand the architecture of architecture, and that the definition would basically be the architecture of architecture. The meme, so to speak, for this definition is the idea that architecture is about three things: mission, solution, and environment. Both the mission and the solution exist in the environment, and the purpose of the architecture is to specify essentials that address fitness for purpose.

There are basically five words or phrases: mission, solution, environment, fitness for purpose, and essentials. Those capture all the ideas behind the definition of architecture.
Also from the conference: Learn how The Open Group's Cloud Work Group is progressing.
Gardner: The whole notion of EA has been in the works for 30 years, as you pointed out. What is it about right now, in the maturity of IT and the importance of IT in modern business, that makes this concept of enterprise architect so important?

Fehskens: A lot of practicing enterprise architects have realized that they can't do enterprise IT architecture in isolation anymore. The constant mantra is "business-IT alignment." In order to achieve business-IT alignment, architects need some way of understanding what the business is really about. So, coming from an architectural perspective, it becomes natural to think of specifying the business in architectural terms.

Enterprise architects are now talking more frequently about the idea of "business architecture." The question becomes, "What do we really mean by business architecture?" We keep saying that it's the stakeholders who really define what's going on. We need to talk to business people to understand what the business architecture is, but the business people don't want to talk tech-speak.

We need to be able to talk to them in their language, but addressing an architectural end. What I tried to do was come up with a definition of architecture and EA that wasn't in tech-speak. That would allow business people to relate to concepts that make sense in their domain. At the same time, it would provide the kind of information that architects are looking for in understanding what the architecture of the business is, so that they can develop an EA that supports the needs of the business.

Gardner: So, in addition to defining EA properly for this time and place, and with the hindsight of the legacy, development, and history of IT and now business, what is the special sauce for a person to be able to fill that role? It’s not just about the definition, but it's also about the pragmatic analog world, day-in and day-out skills and capabilities.

Borrowed skills

Fehskens: That's a really good question. I've had this conversation with a lot of architects, and we all pretty much agree that maybe 90 percent of what an architect does involves skills that are borrowed from other disciplines -- program management, project management, governance, risk management, all the technology stuff, social skills, consulting skills, presentation skills, communication skills, and all of that stuff.

But, even if you’ve assembled all of those skills in a single individual, there is still something that an architect has to be able to do to take advantage of those capabilities and actually do architecture and deliver on the needs of their clients or their stakeholders.

I don't think we really understand yet exactly what that thing is. We’ve been okay so far, because people who entered the discipline have been largely self-selecting. I got into it because I wanted to solve problems bigger than I could solve by myself by writing all the code. I was interested in having a larger impact than I could have by just writing a single program or doing something that I could do all by myself.

That self-selection filters the people who try to become architects. Then, there's a second filter that applies: if you don't do it well, people don't let you do it. We're now at the point where people are saying, "That model for finding, selecting, and growing architects isn't going to work anymore, and we need to be more proactive in producing and grooming architects." So, what is it that distinguishes the people who have that skill from the people who don't?

If you go back to the definition of architecture that I articulated in this talk, one of the things that becomes clear is that an architect not only has to have good design skills. An architect also has to be almost Sherlock Holmes-like in his ability to infer, from all kinds of subtle signals, what really matters, what's really important to the stakeholders, and how to balance all of these different things in a way that ends up focusing on an answer to this very squishy, ill-defined statement of the problem.

This person, this individual, needs to have that sense of the big picture -- all of the moving parts -- but also needs to be able to drill in on both the technical detail and the human detail.

In fact, this notion of fitness for purpose comes back in. As I said before, an architect has to be able to figure out what matters, not only in the development of an architectural solution to a problem, but in the process of discerning that architecture. There's an old saw about a sculptor. Somebody asked him, "How did you design this beautiful sculpture," and he says, "I didn't. I just released it from the stone."

What a good architect does is very similar to that. The answer is in there. All you have to do is find it. In some respects, it's not so much a creative discipline, as much as it's an exploratory or searching kind of discipline. You have to know where to look. You have to know which questions to ask and how to interpret the answers to them.

Rarely done

Gardner: One of the things that came out early in your presentation was this notion that architecture is talked about and focused on, but very rarely actually done. If it's the case in the real world that there is less architecture being done than we would think is necessary, why do it at all?

Fehskens: There's a lot of stuff being done that is called architecture. A lot of that work, even if it's not purely architecture in the sense that I've defined architecture, is still a good enough approximation so that people are getting their problems solved.

What we're looking for now, as we aspire to professionalize the discipline, is to get to the point where we can do that more efficiently, more effectively, get there faster, and not waste time on stuff that doesn't really matter.

I'm reminded of the place medicine was 100 or 150 years ago. I hate to give leeches a bad name, because we’ve actually discovered that they're really useful in some medical situations. But, there was trepanning, where they cut holes in a person's skull to release vapors, and things like that. A lot of what we are doing in architecture is similar.

We do stuff because it's the state of the art and other people have tried it. Sometimes, it works and sometimes, it doesn't. What we want to do is get better at that, so that we pick the right things to do in the right situations, and the odds of them actually working are much better than chance.

Gardner: Okay, a last question. Is there anything about this economic environment and the interest in cloud computing and various sourcing options and alternatives that make the architecture role all the more important?

Fehskens: I hate to give you the typical architect signature which is, "Yes, but." Yes, but I don't think that's a causal relationship. It's sort of a coincidence. In many respects, architecture is the last frontier. It's the thing that's ultimately going to determine whether or not an organization will survive in an extremely dynamic environment. New technologies like cloud are just the latest example of that environment changing radically.

It isn't so much that cloud computing makes good EA necessary, as much as cloud computing is just the latest example of changes in the external environment that require organizations to have enterprise architects to make sure that the organization is always fit for purpose in an extremely dynamically changing environment.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.


Part 4 of 4: Real-time web data services in action at Deutsche Boerse

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download the transcript. Learn more. Sponsor: Kapow Technologies.


Welcome to a special BriefingsDirect dual webinar and podcast presentation, Real-Time Web Data Services in Action at Deutsche Börse.

As the culmination of a four-part series on web data services (WDS), we examine a fascinating use-case for data services with Deutsche Börse Group in Frankfurt, Germany. An innovative information service recently created there highlights how real-time content and data assembled from various online sources scattered across the Web provides a valuable analysis service.

The offering supports energy traders seeking to track global fluctuations and micro trends in oil and other related markets. But, the need for real-time and precise data affects more than energy traders and financial professionals. More than ever, all sorts of businesses need to know what's going on in and what's being said about their respective markets, products, and services.

In this series with Kapow Technologies, we've examined the need for WDS and ways that WDS and related tools can be used broadly to solve these problems. Now, we are going to learn the full story of how Deutsche Börse took web data resources and not only efficiently assembled knowledge from automated robots, cleansing tools, and analytics management, but from these capabilities also created high-value, focused WDS offerings in their own right.

Thanks for joining us, as we take an in-depth look at how the market for WDS has shaped up and then hear directly from the leader of the Deutsche Börse project, as well as from a key supplier that supported them in accomplishing their web services goal.

So, to learn more about WDS as a business, please welcome our guests, Mario Schultz, director of Energy Facts at Deutsche Börse Group, and Stefan Andreasen, CTO at Kapow Technologies. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Gardner: It's interesting to me that we've moved beyond a level of static information to dynamic information and yet we still haven’t taken full advantage of everything that’s being developed and created across the Web.

But today’s market turbulence demands that we do that. We have to move into an era where we can take quality data and provide agility into how we can consume and distribute it. We're dealing with more diverse data sources. That means we need to have completeness and we need to be comprehensive, in order to accomplish the business information challenges each business faces.

The need now is for flexible, agile, and mixed sourcing of services and data together. The content is often portable. That means it's ubiquitous across mobile devices and social networks in such a way that real-time analytics becomes extremely important.

The use of data as a business is now coming to the fore. We're beginning to see value, not from just the assimilation of data for use internally, but as more and more businesses are starting to take advantage of the data that they create and have access to. They share that with their partners, create ecosystems of value, and then even perhaps sell outright the information, as well as insights and analysis from that information.

Schultz: Deutsche Börse is the German stock exchange in Frankfurt, Germany, and we offer all kinds of products and services around on-exchange trading and the adjacent processes. For several years now, I've been responsible for developing new products and services around information for on-exchange or off-exchange trading. This is why we've invented and developed the Energy Facts service.

We developed new products and services where we could transform our know-how and this real-time connection, aggregation, and dissemination of data to other business lines. This is why we looked into the energy trading sector, mainly focused on the power trading here in Europe.

I began by working on the exchange of information that we have in our own systems. We were proceeding with our ideas of enhancing our services and designing new products and services. We were then looking into the Web and trying to get more information from the data that we gather from websites -- or somewhere else on the global Web -- and to integrate this with our own company's internal information.

Everything we do focuses on the real-time aspect. Our use of web data services is always focusing on the real-time aspects of this.

At Deutsche Börse, we have something that’s called Xetra, our electronic trading system for cash products. We have Eurex, our derivatives business line, which is worldwide and well-known, where you can trade derivatives on that platform.

We have a main system called CEF. It is our backbone IT solution for delivering data in real-time with milliseconds optimization. The data is mainly coming from our internal IT systems, like Xetra and Eurex, and we deliver this data to the outside world.


In addition, we calculate all the relevant indices, like the DAX, the flagship index for the German markets with 30 instruments, and more than 2,000 -- or nearly 3,000 -- indices that are distributed over the well-known data vendors, for example, Bloomberg or Reuters. They are our main distribution networks, where we are delivering all our information.

Germany is currently the most important market for energy and power trading in the middle of Europe.



By talking to well-known players in the market, we quickly recognized that we could build up a very powerful fundamental data model. You have to collect all the relevant information to get an overview and an estimate of the price -- in this case, where the power price could develop and in which direction.

Traders are looking into the fundamental factors that affect the price of the energy or the power that you trade, whether it’s oil or whatever. That’s how we started with power trading. You have the wind and other weather factors. You have temperature. You have the availability of power plants. So, you try to categorize and summarize these sectors. It's called the supply and the demand side regarding this energy trading.

Fundamental data models

The main issue and main task in the beginning was to collect the relevant data. Quite quickly, we were able to set up a big list of all relevant data sets or sources, especially for Germany and some adjacent countries. We came up with something around 70, 80, or even 100 different sources on the Web to grab information from. So, the main issue was how to collect and grab all this data in a manageable way into one data base. That was the first step.

In the second step, Kapow came into this play. We recognized that it’s really important to have a one-stop shopping inbound channel that collects all the information from these sources, so that you don’t have to have several IT systems, or your own program, JavaScript, or whatever to get the information.

I was the responsible product manager for this project, for this new product. From the beginning, I had to have a good technology in place that would be able to handle all these kinds of sources from the Web.

We recognized that there are so many different data formats that we had to grab. There are all these different providers of information in Germany and other European countries. They have their own websites. Some give the data in HTML format. Others use XLS, CSV, or even PDFs.

Kapow gives us a manageable way to get this information from these different sources, in quite different formats, with a process-driven, graphical user interface (GUI) driven tool that reduces the manual effort -- the manpower -- needed to collect and grab the data.
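To make that one-stop ingestion idea concrete, here is a rough sketch of normalizing two differently formatted sources into one common record format. The source formats, field names, and values are invented for illustration; Kapow's actual tooling is GUI-driven rather than hand-coded like this.

```python
import csv
import io
from html.parser import HTMLParser

# Two hypothetical sources publishing the same kind of figure,
# one as CSV and one as an HTML table.
CSV_SOURCE = "plant,available_mw\nPlantA,750\nPlantB,1200\n"
HTML_SOURCE = "<table><tr><td>PlantC</td><td>980</td></tr></table>"

def from_csv(text):
    """Normalize CSV rows into common records."""
    return [{"plant": r["plant"], "available_mw": int(r["available_mw"])}
            for r in csv.DictReader(io.StringIO(text))]

class _TableParser(HTMLParser):
    """Collect the text of each <td> cell, grouped by table row."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_td = [], [], False
    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_td = True
    def handle_endtag(self, tag):
        if tag == "tr":
            self.rows.append(self._row)
        elif tag == "td":
            self._in_td = False
    def handle_data(self, data):
        if self._in_td:
            self._row.append(data.strip())

def from_html(text):
    """Normalize HTML table rows into the same record shape."""
    p = _TableParser()
    p.feed(text)
    return [{"plant": r[0], "available_mw": int(r[1])} for r in p.rows]

# One common database-ready list, regardless of source format.
records = from_csv(CSV_SOURCE) + from_html(HTML_SOURCE)
```

The point of the sketch is the shape of the pipeline, not the parsing: every source-specific adapter emits the same record schema, so everything downstream sees a single channel.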

Not only websites

Currently, we have 70 or 80 sources that we're grabbing. It's not only websites, but we have some third-party providers that are delivering information, for example, weather, temperature, and things like that. We have providers giving data via FTP service, and we even use Kapow for grabbing data from these third-party players. As I said, it's a one-stop shopping solution to get everything via one channel.

The value-add was to grab all this data into one common data format, one database, so we would be able to deliver this data to the vendors via web tool, web terminal, or even our existing CEF data feeds. A lot of the players in the market are trying to collect this data by themselves, or even manually, to get an overview of where the power price would develop over the next day, hours, weeks, months, whatever.


Andreasen: This is an extremely impressive service that Mario just showed us here, and I'm sure, if you're dealing with buying and selling energy, this is a must for you to be sure you made the right decision.

If these data sources exist somewhere on the Web, we can actually grab them where they are. What you traditionally do with information gathering is that you call every company or every entity that has data and ask them, "Will you please provide the data in this or this format?" But, with Kapow Web Data Services, you can just grab the data, wherever it is on the Web, and assemble this valuable data source much easier and much faster.

Businesses are relying more and more on data to make the right decision, and their focus is on quality, completeness, and agility. Let's be more practical here and ask how you actually get this data.

There is a term, data integration, which is about accessing the data and providing it through a standard API, so that you can actually leverage it in your business applications.

Energy Facts is accessing this data from the 70-80 different data sources, as Mario said, and providing it as a feed whose cadence depends on the volatility of the different data sources. Some of the sources deliver every minute, and some deliver every four hours, etc., based on how quickly the data source changes. WDS is all about getting access to this data where it resides.
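That volatility-driven cadence can be sketched very simply: give each source its own refresh interval and poll only the sources that are due. The source names and intervals below are invented for illustration.

```python
# Hypothetical per-source refresh intervals, in minutes, chosen
# to match how quickly each source actually changes.
poll_interval_min = {
    "wind-forecast": 1,        # highly volatile: refresh every minute
    "plant-availability": 60,  # changes hourly
    "temperature": 240,        # changes every four hours
}

def due_sources(minute, intervals):
    """Return the sources due for a refresh at a given simulated minute."""
    return sorted(s for s, n in intervals.items() if minute % n == 0)

print(due_sources(60, poll_interval_min))
# ['plant-availability', 'wind-forecast']
```

A real scheduler would track per-source timestamps rather than a shared clock, but the design point is the same: cadence is a property of the source, not of the feed as a whole.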

There are really two different kinds of data sources. One set is more like a real-time data source. Let's say you go to a patent directory, and there are probably millions of patents. In that case, you would use Kapow Data Server to wrap that data source in a service layer, and then you would be able to query it in real time and get results back as soon as they are available. So, that's real-time access, where you have a vast amount of information.

The other scenario, and I think that's more what we see in the Energy Facts example here, is where you have a more limited data source, and you are actually trying to do a consolidation of the data into a database, and then you use that database to serve different customers or different applications.

With Kapow, you can actually go in and access the data, if you can see them on your browser. That's one thing. The other thing you need to do to make this data available to your business application is to transform and enrich the data, so that it actually matches the format that you want.

For example, a website might show the date as "2 hours ago" or "3 minutes ago" and so on. That's really not useful. What you really want is a timestamp with the year, the month, the day, the hour, the minute, and the second, so you can actually start comparing these. So, data cleansing is an extremely important part of data extraction and access.
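A minimal illustration of that cleansing step, turning relative phrases into absolute timestamps, might look like the following. This is a sketch of the idea being described, not how Kapow itself implements it; the reference time is fixed so the example is deterministic.

```python
import re
from datetime import datetime, timedelta

# Map the unit word in the phrase to the timedelta keyword.
UNITS = {"minute": "minutes", "hour": "hours", "day": "days"}

def to_timestamp(phrase, now):
    """Convert a phrase like '2 hours ago' into an absolute timestamp."""
    m = re.match(r"(\d+)\s+(minute|hour|day)s?\s+ago", phrase)
    if not m:
        raise ValueError(f"unrecognized phrase: {phrase!r}")
    amount, unit = int(m.group(1)), UNITS[m.group(2)]
    return now - timedelta(**{unit: amount})

now = datetime(2010, 2, 6, 12, 0, 0)
print(to_timestamp("2 hours ago", now))    # 2010-02-06 10:00:00
print(to_timestamp("3 minutes ago", now))  # 2010-02-06 11:57:00
```

Once every source's dates are normalized this way, records from "2 hours ago" on one site and "08:00 UTC" on another become directly comparable.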

The last thing, of course, is serving the data in the format you need. That can be a database, if you're doing consolidation, or it can be as an API, if you are doing more of a federated access to data, and leaving the data where it is.

Actually, all styles exist, but there is a tendency for many companies to actually access the data where it is, rather than trying to consolidate it to a new place.

More examples on data as a service

Go to our website and download a white paper from one of our customers, called Fiserv. It's a large financial services company in the U.S. Fiserv has a lot of business partners -- more than 300 banks in more than 10 countries. Because they're selling services, it's incredibly important for them to also monitor their customers to understand what's happening.

They had a lot of people who logged into these 300 partner banks every day, grabbed some financial information, such as interest rates, into an Excel spreadsheet, put it into a database, and then got it up on a dashboard.

The thing about this is that, first, you have a lot of human labor, which can cause human errors, and so on. You can only do it once a day, and it's a tedious process. So they brought Kapow in and automated the extraction of this data from all their business partners -- 300 banks in more than 10 countries.

They can now get that data in near real-time, so they don’t have to wait for it. They don’t have to go without data on weekends, when people are not working. They get those business-critical insights into the market and their partners instantly through our product.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download the transcript. Learn more. Sponsor: Kapow Technologies.


Wednesday, February 3, 2010

CERN’s evolution toward cloud computing could portend next revolution in extreme IT productivity

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: Platform Computing.

What are the likely directions for cloud computing? Based on the exploration of expected cloud benefits at a cutting edge global IT organization, the future looks extremely productive.

In this podcast we focus on the thinking on how cloud computing -- both the private and public varieties -- might be used at CERN, the European Organization for Nuclear Research in Geneva.

CERN has long been an influential bellwether on how extreme IT problems can be solved. Indeed, the World Wide Web owes a lot of its usefulness to early work done at CERN. Now the focus is on cloud computing. How real is it, and how might an organization like CERN approach cloud?

In many ways CERN is quite possibly the New York of cloud computing. If cloud can make it there, it can probably make it anywhere. That's because CERN deals with fantastically large data sets, massive throughput requirements, a global workforce, finite budgets, and an emphasis on standards and openness.

So please join us, as we track the evolution of high-performance computing (HPC) from clusters to grid to cloud models through the eyes of CERN, and with analysis and perspective from IDC, as well as technical thought leadership from Platform Computing.

Join me in welcoming our panel today: Tony Cass, Group Leader for Fabric Infrastructure and Operations at CERN; Steve Conway, Vice President in the High Performance Computing Group at IDC, and Randy Clark, Chief Marketing Officer at Platform Computing. The discussion is moderated by BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Conway: Private cloud computing is already here, and quite a few companies are exploring it. We already have some early adopters. CERN is one of them. Public clouds are coming. We see a lot of activity there, but it's a little bit further out on the horizon than private or enterprise cloud computing.

Just to give you an example, we at IDC just did a piece of research for one of the major oil and gas companies, and they're actively looking at moving part of their workload out to cloud computing in the next 6-12 months. So, this is really coming up quickly.

CERN is clearly serious about it in their environment. As I said, we're also starting to see activity pick up with cloud computing in the private sector with adoption starting somewhere between six months from now and, for some, more like 12-24 months out.

Clark: At Platform Computing we have formally interviewed over 200 customers out of our installed base of 2,000. A significant portion -- I wouldn’t put an exact number on that, but it's higher than we initially anticipated -- are looking at private-cloud computing and considering how they can leverage external resources such as Amazon, Rackspace and others. So, it's easily one-third and possibly more [evaluating cloud].

Cass: At CERN we're a laboratory that exists to enable, initially Europe’s and now the world’s, physicists to study fundamental questions. Where does mass come from? Why don’t we see anti-matter in large quantities? What's the missing mass in the universe? They're really fundamental questions about where we are and what the universe is.

We do that by operating an accelerator, the Large Hadron Collider, which collides protons thousands of times a second. These collisions take place in certain areas around the accelerator, where huge detectors analyze the collisions and take something like a digital photograph of the collision to understand what's happening. These detectors generate huge amounts of data, which have to be stored and processed at CERN and the collaborating institutes around the world.

We have something like 100,000 processors around the world, 50 petabytes of disk, and over 60 petabytes of tape. The tape is in just a small number of the centers, not all of the hundred centers that we have. We call it "computing at the terra-scale," that's terra with two R's. We’ve developed a worldwide computing grid to coordinate all the resources that we have with the jobs of the many physicists that are working on these detectors.

If you look at the past, in the 1990s, we had people collaborating, but there was no central management. Everybody was based at different institutes, and people had to submit the workloads -- the analysis or the Monte Carlo simulations -- for the experiments they needed.

We realized in 2000-2001 that this wasn’t going to work and also that the scale of resources that we needed was so vast that it couldn’t all be installed at CERN. It had to be shared between CERN, a small number of very reliable centers we call the Tier One centers and then 100 or so Tier Two centers at the universities. We were developing this thinking around the same time as the grid model was becoming popular. So, this is what we’ve done.

Grid sets stage for seeking greater efficiencies

[Our grid] pushes the envelope in terms of scale to make sure that it works for the users. We connect the sites, and we’ve gradually run through a number of exercises to get to the point where we distribute data at gigabytes a second and run tens of thousands of jobs a day across this.

We've progressively deployed grid technology, not developed it. We've looked at things that are going on elsewhere and made them work in our environment.

The grid solves the problem in which we have data distributed around the world, and it will send jobs to the data. But, there are two issues around that. One is that if the grid sends my job to site A, it does so because it thinks that a batch slot will become available at site A first. But, maybe a slot becomes available at site B first, while my job is at site A. Somebody else who comes along later actually gets to run their job first.

Today, the experiment team submits a skeleton job to all of the sites in order to detect which site becomes available first. Then, they pull down my job to this site. You have lots of schedulers involved in this -- in the experiment, the grid, and the site -- and we're looking at simplifying that.
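The skeleton-job pattern Cass describes (often called pilot jobs in grid computing) can be sketched as a toy simulation. The site names, wait times, and payload names are invented; the point is only the late-binding behavior, where work is pulled by whichever site frees a slot first rather than pushed to a predicted site.

```python
import heapq

# Invented per-site queue wait times, in minutes, and a central
# queue of real payloads waiting to be matched to a site.
site_wait = {"A": 40, "B": 15, "C": 25}
payloads = ["analysis-1", "analysis-2", "analysis-3"]

# One skeleton job is queued at every site; order the "slot frees
# up" events by wait time so the earliest site wakes up first.
events = [(wait, site) for site, wait in site_wait.items()]
heapq.heapify(events)

assignments = []
while events and payloads:
    wait, site = heapq.heappop(events)           # first slot to free up
    assignments.append((site, payloads.pop(0)))  # pilot pulls next payload

print(assignments)
# [('B', 'analysis-1'), ('C', 'analysis-2'), ('A', 'analysis-3')]
```

Without the pilot layer, "analysis-1" would sit at whichever site the scheduler guessed in advance; with it, the first free slot (site B here) claims the next payload, so nobody's job is stranded behind a bad prediction.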

We’re now looking at virtualizing the batch workers and dynamically reconfiguring them to meet the changing workload. This is essentially what Amazon does with EC2. When they don’t need the resources, they reconfigure them and sell the cycles to other people. This is how we want to work in virtualization and cloud with the grid, which knows where the data is.

... We’re definitely concentrating for the moment on how we exploit effective resources here. The wider benefits we'll have to discuss with our community.

Conway: CERN's scientists have earned multiple Nobel prizes over the years for their work in particle physics. CERN is where Tim Berners-Lee and his colleagues invented the World Wide Web in the 1980s.

More generally, CERN is a recognized world leader in technology innovation. What’s been driving this, as Tony said, are the massive volumes of data that CERN generates along with the need to make the data available to scientists, not only across Europe, but across the world.

For example, CERN has two major particle detectors. They're called CMS and ATLAS. ATLAS alone generates a petabyte of data per second, when it’s running. Not all that data needs to be distributed, but it gives you an idea of the scale of the challenge that CERN is working with.

In the case of CERN’s and Platform’s collaboration, the idea is not just to distribute the data but also the applications and the capability to run the scientific problem.

Showing a clear path to cloud

CERN is definitely a leader there, and cloud computing is really confined today to early adopters like CERN. Right now, cloud computing services constitute about $16 billion as a market.

That’s just about four percent of mainstream IT spending. By 2012, which is not so far away, we project that spending for cloud computing is going to grow nearly threefold to about $42 billion. That would make it about 9 percent of IT spending. So, we predict it’s going to move along pretty quickly.

... [Being able to manage workloads in a dynamic environment] is the single biggest challenge we see for not only cloud computing, but it has affected the whole idea of managing these increasingly complex environments -- first clusters, then grids, and now clouds. Software has been at the center of that.

That’s one of the reasons we're here today with Platform and CERN, because that’s been Platform’s business from the beginning, creating software to manage clusters, then grids, and now clouds, first for very demanding, HPC sites like CERN and, more recently, also for enterprise clients.

Clark: Historically, clusters and grids have been relatively static, and the workloads have been managed across those. Now, with cloud, we have the ability to have a dynamic set of resources.

The trick is to marry and manage the workloads and the resources in conjunction with each other. Last year, we announced our cloud products -- Platform LSF and Platform ISF Adaptive Cluster -- to address that challenge and to help this evolution.

[Cloud adoption] is being driven by the top of the organization. Tony and Steve laid it out well. They look at the public/private cloud economically, and say, "Architecturally, what does this mean for our business?" Without any particular application in mind they're asking how to evolve to this new model. So, we're seeing it very horizontally in both enterprise and HPC applications.

What Platform sees is the interaction of distributed computing and new technologies like virtualization requiring management. What I mean by that is the ability, in a large farm or shared environment, to share resources and then make those resources dynamic. It's the ability to add virtualization into those on the resource side, and then, on the server side, to make it Internet accessible, have a service catalog, and move from providing IT support to providing IT as a truly competitive service.

The state of the art is that you can get the best of Amazon, ease of use, cost, accessibility with the enterprise configuration, scale, and dependability of the enterprise grid environment.

There isn't one particular technology or implementation that I would point to, to say "That is state of the art," but if you look across the installations we see in our installed base, you can see best practices in different dimensions with each of those customers.

Conway: People who have already stepped through the earlier stages of this evolution, who have gone from clusters to grid computing, are now for the most part contemplating the next move to cloud computing. It's an evolutionary move. It could have some revolutionary implications, but, from a technological standpoint, sometimes evolutionary is much safer and better than revolutionary.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: Platform Computing.