Tuesday, September 1, 2009

XDAS standard aims to unify IT audit trails from across complex environments, even clouds

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: The Open Group.

Welcome to the latest BriefingsDirect podcast discussion, recorded at The Open Group’s 23rd Enterprise Architecture Practitioners Conference and the associated 3rd Security Practitioners Conference in Toronto.

We're going to take a look at an emerging updated standard called XDAS, which looks at audit trail information from a variety of systems and software across the enterprise IT environment.

This is an emerging standard that’s being orchestrated through The Open Group, but it’s an open-source standard that is hopefully going to help in compliance and regulatory issues and in improving automation of events across heterogeneous environments. This could be increasingly important, as we get deeper into virtualization and cloud computing.

Here to help us drill into XDAS (see a demo now), we're joined by Ian Dobson, director of the Security Forum for The Open Group, as well as Joël Winteregg, CEO and co-founder of NetGuardians. The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Dobson: We actually got involved way back in the '90s, in 1998, when we published the Distributed Audit Service (XDAS) standard. It was, in many ways, ahead of its time, but it was a distributed audit services standard. Today’s audit and logging requirements are much more demanding than they were then. There is a heightened awareness of everything to do with audit and logging, and we see a need now to update it to meet today’s needs. So that’s why we've got involved now.

A key part of this is event reporting. Event reports have all sorts of formats today, but that makes them difficult to consume. Of course, we then generate events so that they can be consumed in useful ways. So, we're aiming the new audit standard from XDAS to be something that defines an interoperable event-reporting format, so that they can be consumed equally by everybody who needs to know.

The XDAS standard developers are well aware of, and closely involved in, the related Common Event Expression (CEE) standard development activity in Mitre. Mitre's CEE standard has a broader scope than XDAS, and XDAS will fit very well into the Event Reporting Format part of CEE.

We are therefore also participating in the CEE standard development to achieve this and more, so as to deliver to the audit and logging community an authoritative single open standard that they can adopt with confidence.

Winteregg: My company is working in the area of audit event management. We saw that it was a big issue to collect all these different audit trails from each different IT environment.

We saw that, if it was possible to have a single, standard way to represent all this information, it would be much easier for IT users and for security officers to analyze it, in order to find out what the exact issues are, to troubleshoot issues in the infrastructure, and so on. That’s a good basis for understanding what's going on in the whole infrastructure of the company.

There is no uniform way to represent this information, and we thought that this initiative would be really good, because it will bring something uniform and universal that will help all the IT users to understand what is going on.

In distributed environments, it's really hard to track a transaction, because it starts on a specific component, then it goes through another one, and then to a cloud. You don’t know exactly where everything is happening. So, the only way to track these transactions, or to track accountability in such an environment, would be through some transaction identifiers, and so on.
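The transaction-identifier idea Winteregg describes can be sketched in a few lines. This is not part of XDAS itself; the field names and helper below are purely illustrative, showing how one correlation ID minted by the first component lets later components tie their audit records back to the same business transaction:

```python
import uuid

def new_audit_event(actor, action, target, correlation_id=None):
    """Build a minimal audit record; reuse the correlation ID so one
    business transaction can be traced across components."""
    return {
        "correlation_id": correlation_id or str(uuid.uuid4()),
        "actor": actor,    # the "who"
        "action": action,  # the "what"
        "target": target,  # the component involved
    }

# The first component mints the ID; downstream components reuse it.
first = new_audit_event("alice", "TRANSFER_INITIATED", "payments-gateway")
second = new_audit_event("alice", "TRANSFER_SETTLED", "core-banking",
                         correlation_id=first["correlation_id"])

assert first["correlation_id"] == second["correlation_id"]
```

Any component that receives the ID simply copies it into its own records, so an auditor can later join events from different systems into one end-to-end transaction.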

For auditors or administrators, it is really costly to understand this information and use it in order to get relevant information for management, to have metrics, and to understand what's really happening on the IT infrastructure.

Audit information deals a lot with the accountability of the different transactions in an enterprise IT infrastructure. Raw logs are mostly developer oriented, meant for debugging applications; they may provide the size of buffers or the parameters of an application. Audit trails are much more business oriented. That means that you will have a lot of accountability information. You will be able to track the who, the what, and the when in the whole IT infrastructure, which is really important these days with all these different regulations, like Sarbanes-Oxley (SOX) and the others.
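The contrast between developer-oriented debug output and a business-oriented, who/what/when audit record can be made concrete. The field names here are illustrative only, not the actual XDAS event schema:

```python
from datetime import datetime, timezone

# A developer-oriented debug line: useful only to the developer.
debug_line = "buffer_size=4096 retries=3"

# A business-oriented audit record: accountability questions map
# directly onto fields.
audit_record = {
    "when": datetime.now(timezone.utc).isoformat(),  # the "when"
    "who": "jsmith",                                 # initiating user
    "what": "APPROVE_INVOICE",                       # business action
    "outcome": "SUCCESS",
}

assert audit_record["who"] == "jsmith"
assert audit_record["what"] == "APPROVE_INVOICE"
```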

With a standard like XDAS, it will be much easier for a company to be in compliance with regulations, because there will be really clear and specific interfaces from all the different vendors to these generated audit trails.

The standard will be open, but there is already a Java implementation of that standard, called XDAS4j, which is a Java library. This implementation is open source and business friendly. That means that you can use it in proprietary software without having to then provide your own software as open source. So, it is available for business software too, and all the code is open. You can modify it, look at it, and so on. It’s on the Codehaus platform.

We're waiting for feedback from vendors and users about how easy it is to use, how helpful it is, and whether there are some use cases where the scope is too wide, too narrow, etc. We're open to every comment about the current standard.

Monday, August 31, 2009

HP panel examines proper security hurdles on road to successful enterprise cloud computing adoption

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Hewlett-Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

The latest BriefingsDirect podcast focuses on exercising caution, overcoming fear, and the need for risk reduction on the road to successful cloud computing.

In order to ramp up cloud-computing use and practices, a number of potential security pitfalls need to be identified and mastered. Security, in general, takes on a different emphasis, as services are mixed and matched and come from a variety of internal and external sources.

So, will applying conventional security approaches and best practices be enough for low-risk, high-reward cloud computing adoption? Is there such a significant cost and productivity benefit to cloud computing that being late, or being unable to manage the risk, means being overtaken by competitors that can do cloud successfully? More importantly, how do companies know whether they are prepared to begin adopting cloud practices without undue risks?

To help better understand the perils and promises of adopting cloud approaches securely, I recently moderated a panel of three security experts from Hewlett-Packard (HP): Archie Reed, HP Distinguished Technologist and Chief Technologist for Cloud Security; Tim Van Ash, director of software-as-a-service (SaaS) products at HP Software and Solutions; and David Spinks, security support expert at HP IT Outsourcing.

Here are some excerpts:
Van Ash: Anything associated with the Internet today tends to be described as cloud in an interchangeable way. There's huge confusion in the marketplace, in general, as to what cloud computing is, what benefits it represents, and how to unlock those benefits.

... The [cloud] provider is committing to providing a compute fabric, but they're not committing, for the most part, to provide security, although there are infrastructure as a service (IaaS) offerings emerging today that do wrap aspects of security in there.

You see more responsibility put on the provider in the [platform as a service (PaaS)] environment, but all the classic application security vulnerabilities, very much lie in the hands of the consumer or the customer who is building applications on the cloud platform.

With software-as-a-service (SaaS), more of the responsibility lies with the provider, because SaaS is really delivering capabilities or business processes from the cloud. But, there are a number of areas that the user is still responsible for, such as user management: ensuring that there are proper security models in place, and that you're managing the entry and exit of users as they join or leave a business.

You're responsible for all the integration points that could introduce security vulnerabilities, and you're also responsible for the actual testing of those business processes to ensure that the configurations that you're using don't introduce potential vulnerabilities as well.

...Typically, what we see is that organizations often have concerns. They go through the fear, uncertainty, and doubt. They'll often put data out there in the cloud in a small department or team. The comfort level grows, and they start to put more information out there.

Reed: If you take the traditional IT department perspective of whether it's appropriate and valuable to use the cloud, and then you take the cloud security's perspective -- which is, "Are we trusting our provider as much as we need to? Are they able to provide within the scope of whatever service they're providing enough security?" -- then we start to see the comparisons between what a traditional IT department puts in play and what the provider offers.

For a small company, you generally find that the service providers who offer cloud services can generally offer -- not always, but generally -- a much more secure platform for small companies, because they staff up on IT security and they staff up on being able to respond to the customer requirements. They also stay ahead, because they see the trends on a much broader scale than a single company. So there are huge benefits for a small company.

But, if you're a large company, where you've got a very large IT department and a very large security practice inside, then you start to think about whether you can enforce firewalls and get down into very specific security implementations that perhaps the provider, the cloud provider, isn't able to do or won't be able to do, because of the model that they've chosen.

That's part of the decision process as to whether it's appropriate to put things into the cloud. Can the provider meet enough or the level of security that you're expecting from them?

Spinks: We've just been reviewing a large energy client's policies and procedures. ... As you move out into an outsourcing model, where we're managing their technology for them, there are some changes required in the policies and procedures. When you get to a cloud services model, some of those policies, procedures, and controls need to change quite radically.

Areas such as audit compliance, security assurance, forensic investigations, the whole concept of service-level agreements (SLAs) in terms of specifying how long things take have to change. Companies have to understand that they're buying a very standard service with standard terms and conditions.

Pressure to adopt

Van Ash: Obviously, the current economic environment is putting a lot of pressure on budgets, and people are looking at ways in which they can continue to move their projects forward on investments that are substantially reduced from what they were previously doing.

But, the other reason that people are looking at cloud computing is just agility, and both these aspects -- cost and agility -- are being driven by the business. These two factors coming from the business are forcing IT to rethink how they look at security and how they approach security when it comes to cloud, because you're now in a position where much of your intellectual property and your physical data and information assets are no longer within your direct control.

So, what are the capabilities that you need to mature in terms of the governance, visibility, and audit controls that we were talking about, and how do you ramp those up? How do you assess partners in those situations, so that you can sit down and say that you can actually put trust in the cloud, so that you've got confidence that the assets you're putting in the cloud are safeguarded, and that you're not potentially threatening the overall organization to achieve quick wins?

The challenge is that the quick wins that the business is driving for could put the business at much longer-term risk, until we work out how to evolve our security practices across the board.

Spinks: ... The business units are pushing internally to use some cloud service that they've seen out there. A lot of companies are finding that their IT organizations are not responding fast enough, so business units are just going directly to a cloud-services provider.

They're in a situation where the advice is, either ride the wave or get dumped. The business wants to utilize these environments -- the fast development testing and launch of new services, new software-related solutions, whatever they may be -- and cloud offers them an opportunity to do that quickly, at low cost, unlike the traditional IT processes.

Reed: ... What we need to do is take some of that traditional security-analysis approach, which ultimately we describe as just a basic risk analysis. We need to identify the value of this data -- what are the implications if it gets out and what's the value of the service -- and come back with a very simple risk equation that says, "Okay, this makes sense to go outside."

... There are certain things where you may say, "This data, in and of itself, is not important, should a breach occur. Therefore, I'm quite happy for it to go out into the cloud." ... Generally, when we talk to people, we come back to the risk equation, which includes, how much is that data worth ... and what is the value of the services being provided. That helps you understand what the security risk will be.
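Reed's "very simple risk equation" might be sketched as an expected-loss comparison. The function, numbers, and threshold below are hypothetical, purely to illustrate the shape of the decision:

```python
def should_move_to_cloud(data_value, breach_probability, service_value):
    """Crude expected-loss comparison: move data out only if the value
    of the cloud service outweighs the expected cost of a breach."""
    expected_loss = data_value * breach_probability
    return service_value > expected_loss

# Low-value data: even a 10% breach chance is outweighed by the benefit.
assert should_move_to_cloud(data_value=1_000, breach_probability=0.10,
                            service_value=5_000)

# Crown-jewel data: the same benefit does not justify the exposure.
assert not should_move_to_cloud(data_value=1_000_000, breach_probability=0.10,
                                service_value=5_000)
```

In practice the inputs are rough estimates, but even rough estimates force the conversation Reed describes: what is this data worth, and what is the service worth?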

Next big areas

Spinks: The big areas that I believe will be developed over the next few years, in terms of ensuring we take advantage of these cloud services, are twofold. First, more sophisticated means in data classification. That's not just the conventional, restricted, confidential-type markings, but really understanding, as Archie said, the value of assets.

But, we need to be more dynamic about that because, if we take a simple piece of data associated with a company's annual accounts and annual performance, prior to the release of those figures, that data is some of the most sensitive in the organization. However, once the report is published, that data moves into the public domain and should then be unclassified.

We need not just management processes and data-classification processes; these need to be much more responsive and proactive, rather than simply reacting to the latest security breach. As we move this forward, there will be increased attention to more sophisticated risk-management tools, methodologies, and processes, in order to make sure that we take maximum advantage of cloud services.
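The annual-report example suggests what a more dynamic, time-aware classification rule could look like. This is a sketch of the idea only, not any particular product's classification scheme:

```python
from datetime import date

def classify(record_release_date, today):
    """Time-aware classification: pre-release figures are highly
    sensitive; once published, they fall into the public domain."""
    return "RESTRICTED" if today < record_release_date else "PUBLIC"

release = date(2009, 9, 15)  # hypothetical publication date

# Before publication the figures are restricted; afterward, public.
assert classify(release, date(2009, 9, 1)) == "RESTRICTED"
assert classify(release, date(2009, 10, 1)) == "PUBLIC"
```

The point is that classification becomes a function of context (here, time) rather than a static label stamped on the data once.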

Efforts under way

Reed: There are efforts under way. There are things, such as the Jericho Forum, which is now part of The Open Group. A group of CIOs and the like got together and said, "We need to deal with this and we need to have a way of understanding, communicating, and describing this to our constituents."

They created their definition of what cloud is and what some of the best practices are, but they didn't provide full guidelines on how, why, and when to use the cloud in anything that I would really call a standard.

There are other efforts put out by, or being worked on today by, the National Institute of Standards and Technology (NIST), primarily focused on the U.S. public sector, but generally available once published. But, again, that's something that's in progress.

The closest thing we've got, if we want to think about the security aspects of the cloud, is coming from the Cloud Security Alliance, a group that was formed by interested parties. HP supported founding this, and actually contributed to its initial guidelines.

... If we're looking for standards, they're still in the early days, they're still being worked on, and there are no, what I would call, formal standards that specifically address the cloud. So, my suggestion for companies is to take a look at the things that are under way and start to draw out what works for them, but also get involved in these sorts of things.

... We [at HP] also have a number of tools and processes based on standards initiatives, such as Information Security Service Management (ISSM) modeling tools, which incorporate inputs from standards such as the ISO 27001 and SAS 70 audit requirements -- things like the payment card industry (PCI), Sarbanes-Oxley (SOX), European Data Privacy, or any national or international data privacy requirements.

We put that into a model, which also takes inputs from the infrastructure that's being used, as well as input based on interviews with stakeholders to produce a current state and a desired or required state model. That will help our customers decide, from a security perspective at least, what do I need to move in what order, or what do I need to have in place?

That is all based on models, standards, and things that are out there, regardless of the fact that cloud security itself and the standards around it are still evolving as we speak.

Van Ash: We do provide a comprehensive set of consulting services to help organizations assess and model where they are, and build out roadmaps and plans to get them to where they want to be.

One of the offerings that we've launched recently is Cloud Assure. Cloud Assure is really designed to deal with the top three concerns the enterprise has in moving into the cloud.


Harnessing 'virtualization sprawl' requires managing an ecosystem of technologies, suppliers

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Hewlett-Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

Better managing server virtualization expansion across enterprises has become essential if the benefits of virtualization are to be preserved and enhanced at scale. I recently had a chance to examine ways that IT organizations can adopt virtualization at deeper levels, or across more systems, data and applications -- but at lower risk.

As more enterprises use virtualization for more workloads to engender productivity from higher server utilization, we often see what can be called virtualization sprawl, spreading a mixture of hypervisors, which leads to complexity and management concerns.

In order to ramp up to more -- yet advantageous -- use of virtualization, pitfalls from heterogeneity need to be managed well. Yet, no one hypervisor supplier is likely to deeply support any of the others.

So how do companies gain a top-down perspective of virtualization to encompass and manage the entire ecosystem, rather than just corralling the individual technologies? To better understand the risks of hypervisor sprawl and how to mitigate the pitfalls to preserve the economic benefits of virtualization, I recently interviewed Doug Strain, manager of Partner Virtualization Marketing at HP.

Here are some excerpts:
Strain: Virtualization has been growing very steeply in the last few years anyway, but with the economy, the economic reasons for it are really changing. Initially, companies were using it to do consolidation. They continue to do that, but now the big deal, with the economy, is consolidating to lower cost -- not only capital cost, but also operating expenses.

... There’s a lot of underutilized capacity out there and, particularly as companies are having more difficulty getting funding for capital expenses, they’ve got to figure out how to maximize the utilization of what they’ve already bought.

We’re seeing a little bit of a consolidation in the market, as we get to a handful of large players. Certainly, VMware has been early on in the market, has continued to grow, and has continued to add new capabilities. It's really the vendor to beat.

Of course, Microsoft is investing very heavily in this, and we’ve seen with Hyper-V, fairly good demand from the customers on that. And, with some of the things that Microsoft has already announced in their R2 version, they’re going to continue to catch up.

We’ve also got players like Citrix, who really leverage their dominance in what’s called the Presentation Server, now XenApp, market and use that as a great foot in the door for virtualization.

Strain: Because all the major vendors now have free hypervisor capabilities, it becomes so easy to virtualize, number one, and so easy to add additional virtual machines, that it can be difficult to manage if technology organizations don’t do it in a planned way.

Most of the virtualization vendors do have management tools, but those tools are really optimized for their particular virtualization ecosystem. In some cases, there is some ability to reach out to heterogeneous virtualization, but it’s clear that that’s not a focus for most of the virtualization players. They want to really focus on their environment.

The other piece is that hardware management is critical here. An example would be, if you’ve got a server that is having a problem, that could very well introduce downtime. You've got to have a way of migrating the virtual machines, so that they are moved off of that server.

That’s really an area where HP has really tried to invest in trying to pull all that together, being able to do the physical management with our Insight Control tools, and then tying that into the virtualization management with multiple vendors, using Insight Dynamics – VSE. ... We think that having tools that work consistently both in physical and in virtual environments, and allow you to easily transition between them is really important to customers.

There are a lot of ways that you can plan ahead on this, and be able to do this in a way that you don't have to pay a penalty later on.

Capacity assessment

It could be something as simple as doing a capacity assessment, a set of services that goes in and looks at what you’ve got today, how you can best use those resources, and how those can be transitioned. In most cases you’re going to want to have a set of tools like some of the ones I’ve talked about with Insight Control and Insight Dynamics VSE, so that you do have more control of the sprawl and, as you add new virtual machines, you do that in a more intelligent way.

We invest very heavily in certifying across the virtualization vendors, across the broadest range of server and storage platforms. What we’re finding is that we can’t say that one particular server or one particular storage is right for everybody. We’ve got to meet the broadest needs for the customers.

...Virtualization is certainly not the only answer or not the only component of data center transformation, but it is a substantial one. And, it's one that companies of almost any size can take advantage of, particularly now, where some of the requirements for extensive shared storage have decreased. It's really something that almost anybody who's got even one or two servers can take advantage of, all the way to the largest enterprises.


Open Group points standards at service-orientation architecture needs and approaches

This guest BriefingsDirect post comes courtesy of Heather Kreger, IBM’s lead architect for SOA Standards in the IBM Software Group.

By Heather Kreger

Last week The Open Group announced two new standards for architects; actually, more appropriately, for service architects, SOA architects, and cloud architects. These standards are intended to help organizations more easily deploy service-based solutions rapidly and reliably, especially in multi-vendor environments.

These standards are the first products in a family of standards being developed for architects by The Open Group’s SOA Work Group. Other standards currently in development for SOA include the SOA Ontology, the SOA Reference Architecture, and Service Oriented Infrastructure.

Architecture standards are especially valuable for creating a common, shared language and understanding between service integrators, vendors and customers of all sizes. They provide a common foundation of understanding for the industry. Considering the who’s who of integrators involved in the development of these two new standards -- Capgemini, CGI and HP/EDS, and IBM -- we can expect the standards to reflect validated and mature best practices and industry experience.

[See a post by Sandy Kemsley from Heather's presentation at The Open Group's recent architecture conference in Toronto. Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

First, the Open Group Service Integration Maturity Model (OSIMM) provides a method to measure service adoption and integration and create roadmaps for incremental transformation to SOA to meet business objectives.

OSIMM provides a context to identify the business benefits of each step along the roadmap and progression toward the appropriate level of maturity for your business goals. The model consists of seven dimensions of consideration within an organization: Business View, Governance and Organization, Methods, Applications, Architecture, Information, and Infrastructure and Management.

Each of these dimensions can, in turn, be assessed on a maturity level scale from one to seven, including: 1: Silo (data integration); 2: Integrated (application integration); 3: Componentized (functional integration); 4: Simple services (process integration); 5: Composite services (supply-chain integration); 6: Virtualized services (virtual infrastructure); and 7: Dynamically reconfigurable services (eco-system integration).
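The seven levels lend themselves to a simple lookup that an assessment could build on. The level names are taken from the article; the gap helper is an illustrative assumption, not part of the OSIMM standard:

```python
# The seven OSIMM maturity levels as listed in the article.
OSIMM_LEVELS = {
    1: "Silo (data integration)",
    2: "Integrated (application integration)",
    3: "Componentized (functional integration)",
    4: "Simple services (process integration)",
    5: "Composite services (supply-chain integration)",
    6: "Virtualized services (virtual infrastructure)",
    7: "Dynamically reconfigurable services (eco-system integration)",
}

def maturity_gap(current, target):
    """How many levels separate an organization from its goal on one
    dimension; a roadmap would repeat this per dimension."""
    assert current in OSIMM_LEVELS and target in OSIMM_LEVELS
    return target - current

# Most organizations aim for level 4 or 5, not 7.
assert maturity_gap(current=2, target=5) == 3
```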

OSIMM resonates with organizations because they can see at a glance what the entire scope of service use and SOA is, and they can find themselves somewhere on that continuum. The model also makes it easy to see where they want to be on the continuum to meet objectives and to check on progress toward those goals. It’s important to note that with this maturity model, more is not necessarily better; few companies will need to be at level 7 maturity, and most will satisfy their business objectives at levels 4 and 5.

Second standard

The second standard, the SOA Governance Framework, provides a methodology to help ensure that business objectives are in line with SOA solutions and IT investment. The framework defines a SOA Governance Reference Model, which includes concepts that architects should understand in relation to governance, such as principles, guidelines, organizations, governed service and SOA processes, governing processes for compliance and dispensation, and supporting technologies.

For each of these concepts, the authors have provided starting points based on best practices. The framework defines the SOA Governance Vitality Method, which is an iterative cycle through the phases of Plan, Define, Implement and Monitor for the governance regimen. The monitor phase uses policies, checkpoints and triggers to ensure the governing processes are in place and being followed. These triggers can also be used to evaluate and adjust the governance regimen itself.

Actually, a great deal of the SOA Governance Framework applies to the governance of architecture in general, but it is explicitly defined to provide guidance for governing service portfolios and SOA solution portfolios. Interestingly enough, the governance of service portfolios applies equally to business solutions that use cloud.

These two standards represent a major step forward in creating and simplifying the standards to build SOA. This is increasingly important as more organizations have to justify incremental investment in services. OSIMM helps you figure out where you want to go, and SOA governance ensures that you meet your objectives on the journey.

Heather Kreger is IBM’s lead architect for SOA Standards in the IBM Software Group, with 15 years of standards experience. She has led the development of standards for Web services, Management and Java in numerous industry standards groups including W3C, OASIS, DMTF, and The Open Group. Heather is the author of numerous articles and specifications, as well as the book “Java and JMX, Building Manageable Systems,” and most recently was co-editor of “Navigating the SOA Open Standards Landscape Around Architecture.”

Friday, August 28, 2009

Nimble business process management helps enterprises gain rapid productivity returns

Listen to the podcast. View a full transcript or download the transcript. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: BP Logix.

Welcome to a sponsored podcast discussion on the importance of business process management (BPM), especially for use across a variety of existing systems, in complex IT landscapes, and for building flexible business processes in dynamic environments.

The current economic climate has certainly highlighted how drastically businesses need to quickly adapt. Many organizations have had to adjust internally to new requirements and new budgets. They have also watched as their markets and supplier networks have shifted and become harder to predict.

To better understand how business processes can be developed and managed nimbly to help deal with such change, I recently moderated a panel of users, BPM providers, and analysts. Please join me in welcoming David A. Kelly, senior analyst at Upside Research; Joby O'Brien, vice president of development at BP Logix; and Jason Woodruff, project manager at TLT-Babcock.

Here are some excerpts:
Kelly: What's important is to be able to drive efficiency throughout an organization, and across all these business processes. With the economic challenges that organizations are facing, they've had to juggle suppliers, products, customers, ways to market, and ways to sell.

As they're doing that, they're looking at their existing business processes, trying to increase efficiencies, and they are trying to really make things more streamlined. ... Some organizations are even getting into cloud solutions and outside services that they need to integrate into their business processes. We've seen a real change in terms of how organizations are looking to manage these types of processes across applications, across data sources, across user populations.

... BPM solutions have been around for quite some time now, and a lot of organizations have really put them to good use. But, over the past three or four years, we've seen this progression of organizations that are using BPM from a task-oriented solution to one that they have migrated into this infrastructure solution. ... [But] now, with the changes and pressures that organizations are facing in the economy and their business cycles, we see organizations looking for much more direct, shorter-term payback and ways to optimize business processes.

O'Brien: It's difficult for an organization, especially right now, to look at something on a one-, two-, or three-year plan. A lot of the BPM infrastructure products and a lot of the larger, more traditional ways that BPM vendors approach this reflect that type of plan. What we're seeing is that companies are looking for a quicker way to see a return on their BPM investment. What that means really is getting an implementation done and into production faster.

When there are particular business needs that are critical to an organization or business, those are the ones they tend to try to address first. They are looking for ways to provide a solution that can be deployed rapidly. ... They take the processes that are most critical, and that are being driven by the business users and their needs, and address those with a one-at-a-time approach as they go through the organization.

It's very different than a more traditional approach, where you put all of the different requirements out there and spend six months going through discovery, design, and the different approaches. So, it's very different, but provides a rapid deployment of highly customized implementations.

Woodruff: TLT-Babcock is a supplier of air handling and material handling equipment, primarily in the utility and industrial markets. So, we have our hands in a lot of markets and a lot of places.

As a project manager, ... I realized a need for streamlining our process. Right now, we don't want to ride the wave, but we want to drive the wave. We want to be proactive and we want to be the best out there. In order to do that, we need to improve our processes and continuously monitor and change them as needed.

After quite a bit of investigation and looking at different products, we developed and used a matrix that, first and foremost, looked at functionality. We need to do what we need to do. That requires flexibility and ultimately usability, not only from the implementation stage, but the end user stage, and to do so in the most cost-effective manner. That's where we are today.

We looked at why document control was an issue and what we could do to improve it. Then, we started looking at our processes and internal functions and realized that we needed to do more than just streamline them. One, we needed a way to define them better. Two, we needed to make sure that they are consistent and repeatable, which is basically automation.

O'Brien: There's one thing that Jason said that we think is particularly important. He used one phrase that's key to Nimble BPM. He used the term "monitor and change," and that is really critical. That means that I have deployed and am moving forward, but have the ability, with BP Logix Workflow Director, to monitor how things are going -- and then the ability to make changes based on the business requirements. This is really key to a Nimble BPM approach.

The approach of trying to get everybody to have a consensus, a six-month discovery, to go through all the different modeling, to put it down in stone, and then implement it works well in a lot of cases. But organizations that are trying to adapt very quickly and move into a more automated phase for the business processes need the ability to start quickly.

... The idea or the approach with the Nimble BPM is to allow folks like Jason -- and those within IT -- to be able to start quickly. They can put one together based on what the business users are indicating they need. They can then give them the tools and the ability to monitor things and make those changes, as they learn more.

In that approach, you can significantly compress that initial discovery phase. In a lot of the cases, you can actually turn that discovery phase into an automation phase, where, as part of that, you're going through the monitoring and the change, but you have already started at that point.

Woodruff: We saw this as an opportunity not just to implement a new product like Workflow Director, but to really reevaluate our processes and, in many cases, redefine them, sometimes gradually, other times quite drastically.

Our project cycle, from when we get an order to when our equipment is up and operating, can be two, three, sometimes four years. During that time there are many different processes from many different departments happening in parallel and serially as well. You name it -- it's all over the place. So, we started with that six-month discovery process, where we are trying to really get our hands around what we do, why we do it that way, and what we should be doing.

As a result, we've defined some pretty complex business models and have begun developing. It’s been interesting that during that development of these longer-term, far-reaching implementations, the sort of spur-of-the-moment things have come up, been addressed, and been released, almost without realizing it.

A user will come and say they have a problem with this particular process. We can help. We'll sit down, find out what they need, create a form, model the workflow, and, within a couple of days, they're off and running. The feedback has been overwhelmingly positive.
Listen to the podcast. View a full transcript or download the transcript. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: BP Logix.

Thursday, August 27, 2009

New era enterprise architects need sweeping skills to straddle the IT-business alignment chasm

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Sponsor: The Open Group.

Welcome to a special sponsored podcast discussion coming from The Open Group’s 23rd Enterprise Architecture Practitioners Conference in Toronto. This podcast, part of a series from the event, centers on the issue of the enterprise architect (EA) -- the role, the responsibilities, the certification, and skills -- both now and into the future.

The burgeoning impact of cloud computing, the down economy, and the interest in projecting more value from IT to the larger business is putting new requirements on the enterprise IT department. [See a related discussion on the effect of cloud computing on the architect role.]

So who takes on the mantle of grand overseer as IT expands its purview into more business processes and productivity issues? Who is responsible? Who can instrument these changes, and, in a sense, be a new kind of leader in the transition and transformation of IT and the enterprise?

To help us sort through these questions, we're joined by James de Raeve, vice president of certification at The Open Group; Len Fehskens, vice president, Skills and Capabilities at The Open Group; David Foote, CEO and co-founder, as well as chief research officer, at Foote Partners; and Jason Uppal, chief architect at QRS. The discussion is moderated by me, BriefingsDirect's Dana Gardner.

Here are some excerpts:

Fehskens: One of the things that I've seen over my career in architecture is that the focus of architects has moved up the stack, so to speak. Initially the focus was on rationalizing infrastructure, looking for ways to reduce cost by removing redundancy and unneeded diversity. It's moved up through the middleware layer to the application layer to business process, and now people are saying, "Well, the place where we need to look for those kinds of benefits is now at the strategy level." That's inevitable.

The thing to understand, though, is that it's not moving forward in a linear front across the entire industry. The rate of progress is locally defined, so to speak. So, different organizations will be at different points in that evolutionary path.

Uppal: As the role of the architect starts to ascend in the organization ... it makes a lot of other professionals very nervous about what we do. In this day and age, you have to be very good at what you always did in rationalizing technology, but you also have to be almost a priest-like, sensitive person, so that you don't trample on somebody's feelings.

You have to make sure that you don't trample somebody else along the way, because, without them, you're not going to go very far. Otherwise, they're going to throw a lot of stones along the way. So that's another huge challenge that we have with the skills of the architect ... having this soul that is sensitive to the other professions.

Foote: In the total group of enterprise architects I've met, every one of them was a great communicator. They were able to really make people feel comfortable around some very abstruse, very abstract, and, for people who are not technical, very technical concepts. They just could communicate. They could set people at ease. They were nonthreatening, and by the way, most of them, I think, were really close to genius.

Fehskens: One of the architects who I worked with on a fairly regular basis told me that the most satisfying moment in her career was when one of her clients told her, "You make me feel smart." That for me really encapsulated the communications goal for an architect -- to make points about these complex issues so clear that people understand them and feel comfortable with them.

Foote: People really don't know who enterprise architects are. ... [The average HR department person] thinks "architect" is a title that all people in IT want to have ... without really grabbing hold and defining the architect. They've let the IT organization simply hand out these titles to people as a way to attract them to the organization.

... That lack of control in HR is commonplace today. I tell HR organizations that ... you should have a representative to the HR organization, selected by the CIO or the IT management there, to represent them to HR. That person should also be the one who advocates for HR, so that HR is never handed job descriptions that do not exist in the company. ... Mainly, the lack of control is around job descriptions.

De Raeve: The other thing is that architects are, by their nature, extremely adaptive, and they redefine themselves to fit the gaps in the organization where there are needs. They reshape themselves to address those needs. So, we're sort of like chameleons or shape-shifters, depending on what the organizational context is.

If you've got a whole bunch of people doing that, it's very hard to say, "You people are all basically performing the same role," because it will look different in some respect each time you see one person do it. It's even worse. So the only thing you could do is say, "Oh, a shape-shifter, some kind of a magician."

... I think what you're asking for is the universally agreed professional framework for the enterprise architect, and I'll give it to you the moment we have it. ... We're at an early stage in the maturity of this concept in the profession or in the industry.

... This was the very problem that we were given when developing our certification. We've got some documentation, which defines what those skills and experience levels are. You can look at that, if you're practicing architecture or you are in the architecture space. You could look at that stuff and say, "These are really good things that I ought to be drawing from as I work on my definitions of roles, or as I look at recruiting people or developing or promoting people." The certification is a separate piece of value.

So, we provide a lot of material that enables you to actually come to grips with what the best practices are -- a set of core skills, competencies, and experiences that are needed by successful architects. In response to that, we developed our IT Architect Certification (ITAC) Program for the skills and experience, and we also have the TOGAF program, which is more about knowledge.

The community is crying out for it. They may not know that they're asking for it, but they're asking for it. One of my things is that I have to go and sell our certification programs to people. So I visit a number of different organizations and explain what we're doing and what it means.

So, we've got the two things: tools to enable organizations to start understanding what best practice is in the space, and then the certification program that allows people to communicate to their customers, their employers, and their next employer that they actually possess these skills and competencies.

Uppal: If we step outside of the IT industry, you'll see a lot of parallels in how other professionals are developed, very similar to how we develop architects. Architects are not this nebulous thing that just grows. They are developed.

Foote: There are definitely some activities in architecture that you can't outsource.

I've never met a recruiter who specialized in architects. I don't know that those recruiters exist. They probably don't, because there isn't a lot of demand on the outside for hiring architects.

... Most companies that we talk to say, "We like our architects. They've done very well, because we trust them. The business trusts them. We trust them. They are good channels of communication. They've opened up a lot of thought in our company. We'd really like three times more of these people. How do we accelerate the growth internally?"

They want to know how they can develop architects internally, because they know that they're not going to get that same quality. Now, these are people who are architecting out of that very delicate core competency, strategic level that you don't want to share with outsiders -- for a lot of reasons.


I do think the architects that I see that are brought in from outside are often consultants, formerly of Accenture, IBM, CSC, or one of the large houses. They are brought in basically to calm down the chatter, to educate, and to train. They're there to clean up a fire, to calm things down, get people on the same page, and then go. Sometimes, that's the best way to bring in an architect.

Fehskens: In a couple of conversations that I've had with people about where we seem to be evolving the role of enterprise architect, they have said basically, "Yeah, these people are going to become in-house management consultants and they're going to be better for that. They're going to know your business intimately, because they're going to have participated in strategic evolution over time."

There is a lot of merit in that analogy and a lot of similarity. I think the only difference is that what we're trying to do with EA is bring more engineering rigor and engineering discipline to this domain and less of the touchy-feely, "do it because I think it's the right thing to do" kind of stuff -- not to disparage management consultants and the like.

Uppal: One of the big differences between management consultants and enterprise architects is that what you put on the table, you have to execute. The management consultant says, "You should do this, this, and this," and walks away. At the end of the day, if you, as an architect, put something on the table and you can't execute this thing, you have basically zero value. People are no longer buying management consultants at face value. They want you to execute.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Sponsor: The Open Group.

Wednesday, August 26, 2009

DoD's Net-Centricity approach moves SOA into battle formation

This guest BriefingsDirect post comes courtesy of ZapThink. Jason Bloomberg is managing partner at ZapThink. You can reach him here.

By Jason Bloomberg

ZapThink recently conducted our Licensed ZapThink Architect Bootcamp course for a branch of the United States Department of Defense (DoD). As it happens, an increasing proportion of our US-based business is for the DoD, which is perfectly logical, given the strategic role Service-Oriented Architecture (SOA) plays for the DoD.

SOA is so strategic, in fact, that SOA underlies how the DoD expects to achieve its mission in the 21st century -- namely, defending US interests by presenting the most powerful military presence on the globe. Furthermore, the story of how SOA became so strategic for the DoD provides insight into the power of SOA for all organizations, both in the public and private sector.

This story begins with the issue of complexity. The DoD, as you might imagine, is an organization of astounding complexity, perhaps the most complex organization in the world, save the US Federal Government itself, of which the DoD is indubitably the most complex part.

And with complexity comes vulnerability. As the sole remaining global superpower, the US's strength in battle, namely our overwhelming force, presents vulnerabilities to much smaller enemies. Traditional guerrilla tactics give small forces advantages over large ones, after all. Our 21st century adversaries understand full well the ancient principle of using an enemy's strengths against them. The DoD is rightly concerned that its sheer scale and complexity present weaknesses that today's terrorism-centric threats can take advantage of.

From the network to service orientation

Even before 9/11, there was an understanding that the core challenge that this complexity presented was one of information: who has it, how to share it, and how to rely upon it to make decisions -- in military parlance, Command and Control (C2). In response to this need, the DoD instituted a new strategic program, Network Centric Warfare, also known as Net-Centricity.

The idea for Network Centric Warfare arose during the late 1990s in response to the rise of the Internet. Its original concepts, therefore, were essentially "Web 1.0" in nature. It didn't take long, however, for DoD architects to realize that the network itself was only a piece of the puzzle, and it soon became clear that the challenges of Net-Centricity were as much organizational as technological. After all, Net-Centricity requires cooperation across the different branches of service -- a tall order for an organization as siloed as the DoD.

In fact, as the DoD and their contractors hammered out the details of Net-Centricity, it became increasingly clear that Net-Centricity required a broad, architectural approach to achieving agile information sharing in the context of a complex, siloed organization.

At that point, SOA entered the Net-Centricity picture, providing essential best practices for sharing information resources to support business process needs. In the military context, such business processes are operational processes, where the operation at hand might be fueling airplanes or deploying ground troops or spying on suspected terrorists with a satellite. When battlefield commanders say that they want the warfighting resources at their disposal to be available as needed to achieve their mission objectives, they are essentially requiring a Service-Oriented approach to Net-Centricity.

Information as a strategic military asset

Information has always been a part of warfare, since the stone age or even earlier. Essentially, the element of surprise boils down to one force having information the other does not, regardless of whether you're sneaking up on a foe with a club or leveraging satellite technology to precisely target an attack.

The same is true of Net-Centricity. Net-Centricity centers on supporting the military's C2 capabilities by ensuring the right information is in the right place at the right time. These three dimensions all create a path toward SOA ...
  • The right information: Commanders on the battlefield need all relevant information. It is essential to have access to relevant information from different forces, different locations, and different branches of service. Furthermore, commanders need a way to separate relevant information from the surrounding noise. And finally, they must ensure that the information is reliable.

  • In the right place: Today's warfare is an inherently distributed endeavor. Gone are the days when armies fought each other on single fields of battle. Today, commanders might call upon forces from hundreds of miles away, on land, at sea, in the air, or in space. Furthermore, the people who need the information might be anywhere. For example, a Navy ship may get the information it needs to target a missile from air support, satellite-based intelligence, and ground capabilities. The commander needs one view, while the troops on the battlefield need another.

  • At the right time: Information is perishable. The more dynamic the purpose of that information, the more perishable it becomes. Knowing where your enemies are right now is far more valuable than knowing where they were an hour or a day ago.
If you've been following ZapThink for any amount of time, you'll recognize these business drivers as a recipe for SOA. It's no surprise, therefore, that the Global Information Grid (GIG), a central Net-Centric capability, is inherently Service-Oriented. The GIG essentially consists of a set of Services that provide the underpinnings of the right information at the right place at the right time, as the figure below illustrates.

[Figure: The Global Information Grid (GIG), with capabilities from security to messaging to management represented as Services]

There are a few features of the GIG worth noting. First, note how the core notion of a Service pervades the GIG. Every capability, from security to messaging to management, is represented as a Service. Secondly, keep in mind the global nature of the GIG. This is not a solitary data center; the GIG represents global IT capabilities across all branches of service for the entire DoD.

Today, the stakes for Net-Centricity couldn't be higher, because information itself proffers a new set of weapons, and even new battlefields. As a result, Net-Centricity focuses not only on leveraging shared IT capabilities to gain an advantage over both large and small opponents using traditional tactics; it also covers protecting our forces from information-based attacks, as well as launching our own.

After all, if a small but smart opponent combines traditional guerrilla warfare with the information-centric guerrilla tactics we now call cyberwarfare, our vulnerabilities multiply. If a single opponent with an improvised explosive device can wound us, what about a single opponent with a means to interfere with our communications infrastructure?

The ZapThink take

There are lessons here for our readers both within the DoD as well as at other organizations, including those within the private sector, where the battles are economic. For DoD readers, it's important to recognize the importance of SOA to Net-Centricity, in particular how the architecture required to succeed with Net-Centricity is the true SOA that ZapThink talks about, where organizational transformation is a greater challenge than the technological issues that organizations face.

For other organizations, the lesson here is how to take a page out of the DoD's playbook. Net-Centricity is by no means the first example of how a DoD project led to broad commercial application; after all, the Internet itself is a case in point. In the DoD we have an organization with both a mind-boggling complexity problem and enormous resources, both financial and human, to assign to the problem. Sharing information across lines of business in a bank or manufacturer or power utility is child's play in comparison to getting the Army, Navy, Air Force, and Marines to share information effectively.

Furthermore, as ZapThink continues its work within the DoD, we can help act as a conduit for conveying the best practices of Net-Centricity to the private sector, as well as to other government organizations. You'll see evidence of Net-Centric lessons learned in both our LZA Bootcamp and our new SOA & Cloud Governance course. The more complex your organization, the more a Net-Centric approach to achieving your strategic goals is a useful context for your SOA efforts, and ZapThink can help.

Finally, some organizations may find the concept of Net-Centricity to be a useful synonym for SOA. If you're having trouble explaining the benefits of SOA to a business audience, perhaps a discussion of Net-Centricity will help shed light on the approach you're recommending.

After all, not only does Net-Centricity focus on effective information sharing in a complex environment, it also distills the urgency and importance of the military context, where the enemy is literally trying to kill us.

Competition in the marketplace may not be a literal life-or-death battle, but leveraging best-practice approaches that treat such battles as though they were truly about survival is an attitude that any seasoned business stakeholder can take to heart.

This guest post comes courtesy of ZapThink. Jason Bloomberg is managing partner at ZapThink. You can reach him here.


SPECIAL PARTNER OFFER

SOA and EA Training, Certification,
and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.

Tuesday, August 25, 2009

Cloud computing uniquely enables product and food recall processes across supply chains

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Hewlett Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

This week brought an excellent example of how cloud-based services can meet business goals better than traditional IT and process management approaches.

In conjunction with GS1 Canada, HP announced a product recall process Monday that spans many participants across global supply chains. The pressures in such multi-player process ecologies can mount past the breaking point during such change-management nightmares as rapid food or product recalls.

You may remember recent food recalls that hurt customers, sellers, suppliers and manufacturers -- potentially irreparably. There have been similar issues with products or public health outbreaks. The only way to protect users is to identify the risks, warn the communities and public, and remove the hazards. It demands a tremendous amount of coordination and adjustment, often without an initial control source or authority.

The keys to making such recalls effective are traceability, visibility, and collaboration across many organizational boundaries. Traditional "one step up, one step down" methods -- the norm today for tracing any product -- have their limitations in providing the required visibility into products across their lifecycle. Without viable information about how food or products get to market, you can't get them out.

Hence, developing an accurate, single picture of the "life story of a product" is something the industry and the consumers have struggled with continuously, according to Mick Keyes, Senior Architect in HP's CTO's Office. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

That "life story of a product" became the nexus of the initiative to create a "cloud traceability platform," which arrived Monday. The GS1 Canada Product Recall service runs on the HP cloud computing platform for manufacturing to provide users with secure, real-time access to product information so that recalled products are fully traced and promptly removed from the supply chain.

This enables more accurate targeting of recalled products. Security enhancements help make sure that only authorized recalls are issued and that only targeted retailers receive notifications. HP will be creating a number of additional specific services that leverage cloud computing to meet specific industry needs in other sectors, such as hospitality and retail.

I recently moderated a sponsored podcast discussion on the fast-evolving implications that cloud computing has on companies in industries like manufacturing. The goal is not to define cloud by what it is, but rather by what it can do, and to explore what cloud solutions can provide to manufacturing and other industries.

In addition to Keyes, I was joined in the discussion by Christian Verstraete, Chief Technology Officer for Manufacturing & Distribution Industries Worldwide at HP, and Bernd Roessler, marketing manager for Manufacturing Industries at HP.

Here are some excerpts:
Keyes: In the whole area of recall, we're looking at value-add services that we will offer to regulatory bodies, other industry groups, and governments, so they can have a visibility into what's happening in real-time.

This is something that's been missing in the industry up to today. What we're offering is a centralized offering, a hub, where any of the entities in the supply chain or nodes in the supply chain -- be they manufacturers, transportation networks, retailers, or consumers -- can use the cloud as a mechanism from which they will be able to gain information on whether a product is recalled or not.

In the last few years, we've seen a large number of recalls across the world, which hit industry fairly heavily. But also, from a consumer point of view, visibility into where the food comes from can be extended to other product areas. It improves consumer confidence in the products they purchase.

It's not just in the food area. We also see it expanding into areas such as healthcare and the whole pharmaceutical area as well. We're looking at the whole idea of how you profile people in the cloud itself. We're looking at how next-generation devices, edge-of-the-network devices as well, will also feed information from anywhere in the world into the profile that you may have in the cloud itself.

We're taking data from many disparate types of sources -- be it the food you actually eat, your health environment, or your life cycle -- and coming up with cloud-based offerings that provide a variety of different services to consumers. It's a real extension of what industry is doing.

Roessler: Cloud services for consumers are distinctly different from cloud services in the enterprise. From an industry vertical perspective, I think we need to have a particular look at what is different about providing cloud services for enterprises. ... Some dimensions of cloud are changing the business behavior of companies.

Number one is that everybody likes to live up to the promise of saving costs by introducing cloud services to enterprises and their value chains. Nevertheless, compared to consumer services like free e-mail, the situation in enterprises is dramatically different, because we have a different cost structure, and we need to talk about more than just the cost of transactions.

In the enterprise, we also need to think about privacy, storage, and archiving information, because that is the context in which cloud services for enterprises live.

The second dimension, which is different, is the management of intellectual property and confidentiality in the enterprise environment. Here it is necessary to consider how cloud services that are designed for industry usage capture data. At the moment, everybody is trying to make sure that critical enterprise information in IT is secured and stays where it should stay. That definitely imposes a critical functionality requirement on any cloud service, which might contradict the need for creating this "everybody can access anywhere" vision of a cloud service.

Last but not least, it is important that we're able to scale those services according to the requirements of the functions and services the cloud environment should provide. This imposes quite a few requirements on the technical infrastructure. You need to have compute power that you can inject into the market whenever you need it.

You need to be able to scale up along those dependencies as well. And, coming back to the promise of cost savings, if you're not combining this scalable technology infrastructure with the dimension of automation, then cloud services for enterprises will not deliver the cost savings expected. These are the environments and dimensions that any cloud provisioning, particularly in enterprises, needs to work against.

Verstraete: By using cloud services and by changing the approach that is provided to the customer, you do a very good thing from an environmental perspective at the same time. You suddenly start seeing that cloud adds value in different ways, depending on how you use it. As you said earlier, it allows you to do things that you could not do before, and that's an important point.

Customers should gain a good understanding of what the cloud is and then really start thinking about where it could add value to their enterprise. One of the things that we announced last week is a workshop that helps them do that -- the HP Cloud Discovery Workshop. It involves sitting down with our customers, first explaining cloud to them so they gain a good understanding of what a cloud really is, and then looking with them at where it can really start adding value to them.

Once they’ve done that, they can start building a roadmap for how they will experiment with the cloud and how they will learn from implementing it. They can then move and grow their capabilities in that space, as they grow those new services, as they grow those new capabilities, and as they build the trust that we talked about earlier.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Hewlett Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.