Wednesday, August 26, 2009

DoD's Net-Centricity approach moves SOA into battle formation

This guest BriefingsDirect post comes courtesy of ZapThink. Jason Bloomberg is managing partner at ZapThink. You can reach him here.

By Jason Bloomberg

ZapThink recently conducted our Licensed ZapThink Architect Bootcamp course for a branch of the United States Department of Defense (DoD). As it happens, an increasing proportion of our US-based business is for the DoD, which is perfectly logical, given the strategic role Service-Oriented Architecture (SOA) plays for the DoD.

SOA is so strategic, in fact, that SOA underlies how the DoD expects to achieve its mission in the 21st century -- namely, defending US interests by presenting the most powerful military presence on the globe. Furthermore, the story of how SOA became so strategic for the DoD provides insight into the power of SOA for all organizations, both in the public and private sector.

This story begins with the issue of complexity. The DoD, as you might imagine, is an organization of astounding complexity, perhaps the most complex organization in the world, save the US Federal Government itself, of which the DoD is indubitably the most complex part.

And with complexity comes vulnerability. As the sole remaining global superpower, the US's strength in battle, namely our overwhelming force, presents vulnerabilities to much smaller enemies. Traditional guerrilla tactics give small forces advantages over large ones, after all. Our 21st century adversaries understand full well the ancient principle of using an enemy's strengths against them. The DoD is rightly concerned that its sheer scale and complexity present weaknesses that today's terrorism-centric threats can exploit.

From the network to service orientation

Even before 9/11, there was an understanding that the core challenge that this complexity presented was one of information: who has it, how to share it, and how to rely upon it to make decisions -- in military parlance, Command and Control (C2). In response to this need, the DoD instituted a new strategic program, Network Centric Warfare, also known as Net-Centricity.

The idea for Network Centric Warfare arose during the late 1990s in response to the rise of the Internet. Its original concepts, therefore, were essentially "Web 1.0" in nature. It didn't take long, however, for DoD architects to realize that the network itself was only a piece of the puzzle, and it soon became clear that the challenges of Net-Centricity were as much organizational as technological. After all, Net-Centricity requires cooperation across the different branches of service -- a tall order for an organization as siloed as the DoD.

In fact, as the DoD and their contractors hammered out the details of Net-Centricity, it became increasingly clear that Net-Centricity required a broad, architectural approach to achieving agile information sharing in the context of a complex, siloed organization.

At that point, SOA entered the Net-Centricity picture, providing essential best practices for sharing information resources to support business process needs. In the military context, such business processes are operational processes, where the operation at hand might be fueling airplanes or deploying ground troops or spying on suspected terrorists with a satellite. When battlefield commanders say that they want the warfighting resources at their disposal to be available as needed to achieve their mission objectives, they are essentially requiring a Service-Oriented approach to Net-Centricity.

Information as a strategic military asset

Information has always been a part of warfare, since the stone age or even earlier. Essentially, the element of surprise boils down to one force having information the other does not, regardless of whether you're sneaking up on a foe with a club or leveraging satellite technology to precisely target an attack.

The same is true of Net-Centricity. Net-Centricity centers on supporting the military's C2 capabilities by ensuring the right information is in the right place at the right time. These three dimensions all create a path toward SOA ...
  • The right information: Commanders on the battlefield need all relevant information. It is essential to have access to relevant information from different forces, different locations, and different branches of service. Furthermore, commanders need a way to separate relevant information from the surrounding noise. And finally, they must ensure that the information is reliable.

  • In the right place: Today's warfare is an inherently distributed endeavor. Gone are the days where armies fight each other on single fields of battle. Today, commanders might call upon forces from hundreds of miles away, on land, at sea, in the air, or in space. Furthermore, the people who need the information might be anywhere. For example, a Navy ship may get the information it needs to target a missile from air support, satellite-based intelligence, and ground capabilities. The commander needs one view while the troops on the battlefield need another.

  • At the right time: Information is perishable. The more dynamic the purpose of that information, the more perishable it becomes. Knowing where your enemies are right now is far more valuable than knowing where they were an hour or a day ago.

If you've been following ZapThink for any amount of time, you'll recognize these business drivers as a recipe for SOA. It's no surprise, therefore, that the Global Information Grid (GIG), a central Net-Centric capability, is inherently Service-Oriented. The GIG essentially consists of a set of Services that provide the underpinnings of the right information at the right place at the right time, as the figure below illustrates.

[Figure: The Global Information Grid represented as a set of Services]

There are a few features of the GIG worth noting. First, note how the core notion of a Service pervades the GIG. Every capability, from security to messaging to management, is represented as a Service. Secondly, keep in mind the global nature of the GIG. This is not a solitary data center; the GIG represents global IT capabilities across all branches of service for the entire DoD.
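The pattern of representing every capability as a discoverable Service can be sketched in a few lines. This is purely illustrative: the capability names and registry interface below are invented for the sketch and bear no relation to the GIG's actual design.

```python
class ServiceRegistry:
    """Maps a capability name to a callable service endpoint."""
    def __init__(self):
        self._services = {}

    def register(self, capability, endpoint):
        self._services[capability] = endpoint

    def discover(self, capability):
        # Consumers bind to a named capability, not a concrete
        # implementation, so a provider can change without breaking
        # its consumers.
        return self._services[capability]


registry = ServiceRegistry()
registry.register("messaging", lambda msg: f"delivered: {msg}")
registry.register("imagery", lambda region: f"latest imagery for {region}")

# A consumer discovers and invokes a capability without knowing
# anything about who provides it.
send = registry.discover("messaging")
print(send("troop position update"))  # delivered: troop position update
```

The point of the indirection is loose coupling: the consumer never hardcodes a provider, which is what lets capabilities span branches of service.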

Today, the stakes for Net-Centricity couldn't be higher, because information itself proffers a new set of weapons, and even new battlefields. As a result, Net-Centricity focuses not only on leveraging shared IT capabilities to gain an advantage over both large and small opponents using traditional tactics, it also covers protecting our forces from information-based attacks as well as launching our own.

After all, if a small but smart opponent combines traditional guerrilla warfare with the information-centric guerrilla tactics we now call cyberwarfare, our vulnerabilities multiply. If a single opponent with an improvised explosive device can wound us, what about a single opponent with a means to interfere with our communications infrastructure?

The ZapThink take

There are lessons here for our readers both within the DoD as well as at other organizations, including those within the private sector, where the battles are economic. For DoD readers, it's important to recognize the importance of SOA to Net-Centricity, in particular how the architecture required to succeed with Net-Centricity is the true SOA that ZapThink talks about, where organizational transformation is a greater challenge than the technological issues that organizations face.

For other organizations, the lesson here is how to take a page out of the DoD's playbook. Net-Centricity is by no means the first example of how a DoD project led to broad commercial application; after all, the Internet itself is a case in point. In the DoD we have an organization with both a mind-boggling complexity problem and enormous resources, both financial and human, to assign to the problem. Sharing information across lines of business in a bank or manufacturer or power utility is child's play in comparison to getting the Army, Navy, Air Force, and Marines to share information effectively.

Furthermore, as ZapThink continues its work within the DoD, we can help act as a conduit for conveying the best practices of Net-Centricity to the private sector, as well as other government organizations. You'll see evidence of Net-Centric lessons learned in both our LZA Bootcamp as well as our new SOA & Cloud Governance course. The more complex your organization, the more a Net-Centric approach to achieving your strategic goals is a useful context for your SOA efforts, and ZapThink can help.

Finally, some organizations may find the concept of Net-Centricity to be a useful synonym for SOA. If you're having trouble explaining the benefits of SOA to a business audience, perhaps a discussion of Net-Centricity will help to shed light on the approach you're recommending.

After all, not only does Net-Centricity focus on effective information sharing in a complex environment, it also distills the urgency and importance of the military context, where the enemy is literally trying to kill us.

Competition in the marketplace may not be a literal life-or-death battle, but adopting the best practices of those who must treat their battles as matters of survival is an attitude that any seasoned business stakeholder can take to heart.

This guest post comes courtesy of ZapThink. Jason Bloomberg is managing partner at ZapThink. You can reach him here.


SPECIAL PARTNER OFFER

SOA and EA Training, Certification,
and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.

Tuesday, August 25, 2009

Cloud computing uniquely enables product and food recall processes across supply chains

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Hewlett Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

This week brought an excellent example of how cloud-based services can meet business goals better than traditional IT and process management approaches.

In conjunction with GS1 Canada, HP announced on Monday a product recall process that straddles many participants across global supply chains. In such multi-party process ecosystems, pressures can mount past the breaking point during change management nightmares such as rapid food or product recalls.

You may remember recent food recalls that hurt customers, sellers, suppliers and manufacturers -- potentially irreparably. There have been similar issues with products or public health outbreaks. The only way to protect users is to identify the risks, warn the communities and public, and remove the hazards. It demands a tremendous amount of coordination and adjustment, often without an initial control source or authority.

The keys to making such recalls effective are traceability, visibility, and collaboration across many organizational boundaries. Traditional "one step up, one step down" methods -- the norm today in tracing any product -- have their limitations in providing the required visibility into products across their lifecycle. Without viable information about how food or products get to market, you can't get them out.
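The limitation is easy to see in miniature: even when each party records only its immediate neighbors, a shared platform can recover the full lifecycle by walking those one-step links. The supply-chain data and function below are invented purely for illustration.

```python
# Each party records only "one step down" -- who it shipped to.
one_step_down = {
    "farm": ["processor"],
    "processor": ["distributor"],
    "distributor": ["retailer-A", "retailer-B"],
}

def full_trace(origin):
    """Breadth-first walk over one-step links to recover the full
    'life story of a product' from fragmented records."""
    seen, frontier = [], [origin]
    while frontier:
        node = frontier.pop(0)
        if node in seen:
            continue
        seen.append(node)
        frontier.extend(one_step_down.get(node, []))
    return seen

print(full_trace("farm"))
# ['farm', 'processor', 'distributor', 'retailer-A', 'retailer-B']
```

No single party holds the whole picture, which is why assembling it requires a shared, centralized view rather than pairwise record exchange.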

Hence, developing an accurate, single picture of the "life story of a product" is something the industry and the consumers have struggled with continuously, according to Mick Keyes, Senior Architect in HP's CTO's Office. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

That "life story of a product" became the nexus of the initiative to create a "cloud traceability platform," which arrived Monday. The GS1 Canada Product Recall service runs on the HP cloud computing platform for manufacturing to provide users with secure, real-time access to product information so that recalled products are fully traced and promptly removed from the supply chain.

This enables more accurate targeting of recalled products. Security enhancements help ensure that only authorized recalls are issued and that only targeted retailers receive notifications. HP will be creating a number of additional services that leverage cloud computing to meet specific industry needs in other sectors, such as hospitality and retail.
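In outline, such a hub combines an issuer authorization check with targeted delivery. The issuer names and data structures below are hypothetical stand-ins, not the actual GS1 Canada/HP service API.

```python
# Hypothetical set of parties permitted to issue recalls.
AUTHORIZED_ISSUERS = {"manufacturer-123", "regulator-gs1"}

def issue_recall(issuer, product_lot, targeted_retailers, subscribers):
    """Reject unauthorized issuers; notify only the retailers known
    to have received the affected lot."""
    if issuer not in AUTHORIZED_ISSUERS:
        raise PermissionError(f"{issuer} is not authorized to issue recalls")
    return [
        {"retailer": r, "lot": product_lot}
        for r in subscribers
        if r in targeted_retailers
    ]

notified = issue_recall(
    "manufacturer-123", "lot-42",
    targeted_retailers={"retailer-A"},
    subscribers=["retailer-A", "retailer-B"],
)
print(notified)  # [{'retailer': 'retailer-A', 'lot': 'lot-42'}]
```

The two checks mirror the two claims above: authorization gates who may issue, and targeting limits who hears about it.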

I recently moderated a sponsored podcast discussion on the fast-evolving implications that cloud computing has on companies in industries like manufacturing. The goal is not to define cloud by what it is, but rather by what it can do, and to explore what cloud solutions can provide to manufacturing and other industries.

In addition to Keyes, I was joined in the discussion by Christian Verstraete, Chief Technology Officer for Manufacturing & Distribution Industries Worldwide at HP, and Bernd Roessler, marketing manager for Manufacturing Industries at HP.

Here are some excerpts:
Keyes: In the whole area of recall, we're looking at value-add services that we will offer to regulatory bodies, other industry groups, and governments, so they can have visibility into what's happening in real-time.

This is something that's been missing in the industry up to today. What we're offering is a centralized offering, a hub, where any of the entities or nodes in the supply chain -- be they manufacturers, transportation networks, retailers, or consumers -- can use the cloud as a mechanism from which they will be able to gain information on whether a product is recalled or not.

In the last few years, we've seen a large number of recalls across the world, which hit industry fairly heavily. But also, from a consumer point of view, visibility into where food comes from -- and this can be extended to other product areas -- improves consumer confidence in the products they purchase.

It's not just in the food area. We also see it expanding into areas such as healthcare and the whole pharmaceutical area as well. We're looking at the whole idea of how you profile people in the cloud itself. We're looking at how next generation devices, edge of the network devices as well, will also feed information from anywhere in the world into the profile that you may have in the cloud itself.

We're taking data from many disparate types of sources -- be it the food you actually eat, be it your health environment, be it your lifecycle -- and coming up with cloud-based offerings that provide a variety of different services to consumers. It's a real extension to what industry is doing.

Roessler: Cloud services for consumers are distinctly different from cloud services in the enterprise. From an industry vertical perspective, I think we need to have a particular look at what is different in providing cloud services for enterprises. ... Some dimensions of cloud are changing the business behavior of companies.

Number one is that everybody likes to live up to the promise of saving costs by introducing cloud services to enterprises and their value chains. Nevertheless, compared to consumer services like free e-mail, the situation in enterprises is dramatically different, because we have a different cost structure, and we need to talk about more than just the cost of transactions.

In the enterprise, we also need to think about privacy, storage, and archiving information, because that is the context in which cloud services for enterprises live.

The second dimension, which is different, is the management of intellectual property and confidentiality in the enterprise environment. Here it is necessary to consider how cloud services designed for industry usage capture data. At the moment, everybody is trying to make sure that critical enterprise information in IT is secured and stays where it should stay. That definitely imposes a critical functionality requirement on any cloud service, which might contradict the "everybody can access anywhere" vision of a cloud service.

Last but not least, it is important that we're able to scale those services according to the requirement of the function and the services this cloud environment should provide. This is imposing quite a few requirements on the technical infrastructure. You need to have compute power, which you can inject into the market, whenever you need it.

You need to be able to scale up very much on the dependencies, however. And, coming back to the promise of the cost savings, if you're not combining this technology infrastructure scalability with the dimension of automation, then cloud services for enterprises will not deliver the cost savings expected. These are the kinds of environments and dimensions any cloud provisioning, particularly in enterprises, need to work against.

Verstraete: By using cloud services and by changing the approach that is provided to the customer, at the same time you do a very good thing from an environmental perspective. You suddenly start seeing that cloud is adding value in different ways, depending on how you use it. As you said earlier, it allows you to do things that you could not do before, and that's an important point.

Companies should gain a good understanding of what the cloud is and then really start thinking about where the cloud could add value to their enterprise. One of the things that we announced last week is a workshop that helps them do that -- the HP Cloud Discovery Workshop. It involves sitting down with our customers, first explaining cloud to them, having them gain a good understanding of what a cloud really is, and then looking with them at where it can really start adding value to them.

Once they’ve done that, they can then start building a roadmap of how they will start experimenting with the cloud, how they will learn from implementing the cloud. They can then move and grow their capabilities in that space, as they grow those new services, as they grow those new capabilities, as they build a trust that we talked about earlier.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Hewlett Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

Monday, August 24, 2009

IT and log search as SaaS gives operators fast, affordable and deep access to system behaviors

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Paglo.

Automatically discover your IT data and make it accessible and useful. Get started for free.

Complexity of data centers escalates. Managed service providers face daunting performance obligations. And the budget to support the operations of these critical endeavors suffers downward pressure.

In this podcast, we explore how IT search and systems log management as a service provides low-cost IT analytics that harness complexity to improve performance at radically reduced costs. We'll examine how network management, systems analytics, and log search come together, so that IT operators can gain easy access to identify and fix problems deep inside complex distributed environments.

Here to help better understand how systems log management and search work together are Dr. Chris Waters, co-founder and chief technology officer at Paglo, and Jignesh Ruparel, system engineer at Infobond, a value-added reseller (VAR). The discussion is moderated by me, BriefingsDirect's Dana Gardner.

Here are some excerpts:
Waters: [Today] there’s just more information flowing, and more information about the IT environment. Search is a great technology for quickly drilling through a lot of noise to get to the exact piece of data that you want, as more and more data flows at you as an IT professional.

One of the other challenges is the distribution of these applications across increasingly distributed companies and applications that are now running out of remote data centers and out of the cloud as well.

When you're trying to monitor applications outside of a data center, you can no longer use software systems that you have installed on your local premises. You have to have something that can reach into that data center. That’s where being able to deliver your IT solution as software-as-a-service (SaaS) or a cloud-based application itself is really important.

You've got this heterogeneity in your IT environments, where you want to bring together solutions from traditional software vendors like Microsoft and cloud providers like Amazon, with its EC2, which allows you to run things out of the cloud, along with software from open-source providers.

All of the software in these systems and this hardware is generating completely disparate types of information. Being able to pull all that together and use an engine that can suck up all that data in there and help you quickly get to answers is really the only way to be able to have a single system that gives you visibility across every aspect of your IT environment.

And "inventory" here means not just the computers connected to the network, but the structure of the network itself -- the users, the groups that they belong to, and, of course, all of the software and systems that are running on all those machines.

Search allows us to take information from every aspect of IT, from the log files that you have mentioned, but also from information about the structure of the network, the operation of the machines on the network, information about all the users, and every aspect of IT.

We put that into a search index, and then use a familiar paradigm, just as you'd search with Google. You can search in Paglo to find information about the particular error messages, or information about particular machines, or find which machines have certain software installed on them.
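The search paradigm described here can be illustrated with a toy inverted index. Paglo's production engine is, of course, far more sophisticated; the documents and query semantics below are invented for the sketch.

```python
from collections import defaultdict

# Invented sample "documents": log lines and inventory records.
documents = {
    1: "error disk full on server web01",
    2: "login failure on server db02",
    3: "disk replaced on server web01",
}

# Build the inverted index: each term maps to the docs containing it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.split():
        index[term].add(doc_id)

def search(query):
    """Return doc ids containing every query term (AND semantics),
    the familiar search-engine style of query."""
    results = set(documents)
    for term in query.split():
        results &= index.get(term, set())
    return sorted(results)

print(search("server web01"))  # [1, 3]
print(search("disk"))          # [1, 3]
```

Because every record, whatever its source, reduces to terms in one index, a single query can cut across logs, inventory, and user data alike.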

We deliver the solution as a SaaS offering. This means that you get to take advantage of our expertise in running our software on our service, and you get to leverage the power of our data centers for the storage and constant monitoring of the IT system itself.

The [open source] Paglo Crawler is a small piece of software that you download and install onto one server in your network. From that one server, the Paglo Crawler then discovers the structure of the rest of the network and all the other computers connected to that network. It logs onto those computers and gathers rich information about the software and operating environment.

That information is then securely sent to the Paglo data center, where it's indexed and stored on the search index. You can then log in to the Paglo service with your Web browser from anywhere in your office, from your iPhone, or from your home and gain visibility into what's happening in real time in the IT environment.
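The crawler pattern just described -- a single in-network agent discovers hosts, gathers inventory, and ships results out while credentials stay local -- can be sketched as follows. Every function, field, and value here is invented for illustration and does not reflect the actual Paglo Crawler internals.

```python
def discover_hosts(subnet):
    # Stand-in for real discovery (ping sweeps, SNMP walks, etc.)
    return [f"{subnet}.{i}" for i in (10, 11)]

def gather_inventory(host, credentials):
    # Credentials are used locally to log in to the host; note they
    # are never copied into the outbound record.
    return {"host": host, "os": "linux", "software": ["nginx", "sshd"]}

def crawl(subnet, credentials):
    records = [gather_inventory(h, credentials)
               for h in discover_hosts(subnet)]
    # In the real service these records would be sent securely (TLS)
    # to the hosted search index; here we simply return them.
    return records

records = crawl("10.0.0", credentials={"user": "svc", "secret": "s3cret"})
assert all("secret" not in r for r in records)  # credentials stay local
print([r["host"] for r in records])  # ['10.0.0.10', '10.0.0.11']
```

The design choice to keep credentials inside the agent is what makes the SaaS model palatable: only derived inventory data crosses the firewall.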

This allows people who are responsible for networks, servers, and workstations to focus on their expertise, which is not maintaining the IT management system, but maintaining those networks, servers, and workstations.

The Crawler needs some access to what’s going on in the network, but any credentials that you provide to the Crawler to log in never leave the network itself. That’s why we have a piece of software that sits inside the network. So, there are no special firewall holes that need to be opened, and no security compromises with that.

There is another aspect, which is very counterintuitive, and that people don't expect when they think about SaaS. Here at Paglo, we are focused on one thing: securely and reliably operating the Paglo service. The expertise that we put into that is much more focused than you would expect within an IT department, where you are focused on solving many, many different challenges.

Ruparel: For 15 years, we [at Infobond] have been primarily a break-fix organization, moving into managed services, monitoring services. We needed visibility into the networks of the customers we service. For that we needed a tool that would be compatible with the various protocols that are out there to manage the networks -- namely SNMP, WMI, Syslog. We needed to have all of them go into a tool and be able to quickly search for various things.

We found that the technology that Paglo is using is very, very advanced. They aggregate the information and make it very easy for you to search.

You can very quickly create customized dashboards and customized reports based on that data for the end customer, thus providing more of a personal and customized approach to the monitoring for the customers.

Some of the dashboards are a common denominator to various sorts of customers. An example would be a Microsoft Exchange dashboard. Customers would love to have a dashboard that they have on the screen. At the end of the day, I look at it very simply as collecting information in one place, and then being able to extract that easily for various situations and environments.

These are some things that are a common denominator to almost all customers that are moving with the technology, implementing new technologies, such as VMware, the latest Exchange versions, Linux environments for development, and Windows for their end users.

The number of pieces of software and the number of technologies that IT implements is far more than it used to be, and it’s going to get more and more complex as time progresses. With that, you need something like Paglo, where it pulls all the information in one place, and then you can create customized uses for the end customers.

If I go and set things up without Paglo, it would require me to place a server at the customer site. We would have to worry about not only maintenance of the hardware, but the maintenance of the software at the customer site as well, and we would have to do all of this effort.

We would then have to make sure that our systems that those servers communicate with are also maintained and steady 24/7. We would have multiple data centers, where we can get support. In case one data center dies, we have another that takes over. All of that infrastructure cost would fall on us as an MSP.

Now, if you were to look at it from a customer's perspective, it's the same situation. You have a software piece that you install on a server. You would probably need a person dedicated for approximately two to three months to get the information into the system and presentable to the point where it's useful. With Paglo, I can do that within four hours.

Waters: We have a lot of users who are from small and medium-sized businesses. We also see departments within some very large enterprises, as well, using Paglo, and often that's for managing not just on-premise equipment, but also managing equipment out of their own data centers.

Paglo is ideal for managing data-center environments, because, in that case, the IT people and the hardware are already remote from each other. So, the benefits of SaaS are double there. We also see a lot of MSPs and IT consultants who use Paglo to deliver their own service to their users.

Ruparel: As far as cost is concerned, right now Paglo charges $1.00 a device. That is unheard of in the industry right now. The cheapest that I have gotten from other vendors -- where you would install a big piece of hardware and the software that goes along with it -- is approximately $4-5 per device, and that doesn't deliver a central source of information that is accessible from anywhere.

As far as infrastructure cost, we save a ton of money. Manpower-wise, in the number of hours that I have to have engineers working on it, we save tons of time. Number three, after all of that, what I pay to Paglo is still a lot less than it would cost me to do it myself.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Paglo.

Automatically discover your IT data and make it accessible and useful. Get started for free.

Sunday, August 23, 2009

ITIL 3 leads the way in helping IT transform into mature business units amid the 'reset economy'

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript, or download the transcript. Learn more. Sponsor: Hewlett Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

Running IT departments as mature business units has clearly become more pressing. Recessionary budget pressures and the need to compare existing IT costs to newer options and investments mean IT and business leaders need to understand how IT operates from an IT service management (ITSM) perspective.

The "reset economy" has moved the business and operations maturity process of IT from "nice to have" to "must have," if costs are going to be cut without undermining operational integrity. IT financial management (ITFM) must be pervasive and transparent if true costs are to be compared to alternative sourcing options like data center modernization, SaaS, virtualization, and cloud computing models.

Fortunately, there is a template and tried-and-true advice on moving IT operations to such business-unit maturity. The standards and methods around ITIL Version 3 provide a pattern for better IT efficiency, operational accountability, and ITSM. Yet there are some common misunderstandings about ITIL and how it can best be used.

To help unlock the secrets behind ITIL v3, and to debunk some of the confusion around it, I recently gathered three experts on ITIL for a sponsored podcast discussion on how IT leaders can best leverage ITSM.

Please welcome David Cannon, co-author of the Service Operation Book for the latest version of ITIL, and an ITSM practice principal at HP; Stuart Rance, service management expert at HP, as well as co-author of ITIL Version 3 Glossary; and Ashley Hanna, business development manager at HP and also a co-author of ITIL Version 3 Glossary.

Here are some excerpts of our discussion:

Cannon: IT needs to save costs. In fact, the business puts a lot of pressure on IT to bring their costs down. But, in this economy, what we're seeing is that IT plays a way more important role than simply saving money.

Business has to change the way in which it works. It has to come up with different services. The business has to reduce its cost. It has to find better ways of doing business. It has to consolidate services. In every single one of those decisions, IT is going to play an instrumental role. The way in which the business moves itself toward the current economy has to be supported by the way in which IT works.

Now, if there is no linkage between the business and the way in which IT is managed, then it's going to be really, really difficult for the business to get that kind of value out of IT. So, ITSM provides a way in which IT and the business can communicate and design new ways of doing business in the current economy.

IT is going to drive these changes to the business. What we're seeing in the reset is that businesses have to change their operating models.

Part of an operating model within any business is their IT environments and the way in which IT works and is integrated into the business processes and services. So, when we talk about a reset, what we're really talking about is just a re-gearing of the operating models in the business -- and that includes IT.

Rance: A lot of people don't really get what we're talking about, when we talk about service management.

The point is that there are lots of different service providers out there offering services. Everybody has some kind of competition, whether it's internal, a sort of outsourcing, or alternate ways of providing service.

All of those service providers have access to the same sorts of resources. They can all buy the same servers, network components, and software licenses, and they can all build data centers to the same standards. So, the difference between service providers isn't in the resources they bring to bear. Those are all the same.

Service management is the set of capabilities a service provider brings to bear in order to deploy, control, and manage those resources to create value for its customers. It includes your processes for managing changes and incidents, your organizational designs with their roles and responsibilities, and lots of other things that you develop over time as an organization. It's how you create value from your resources and distinguish yourself from alternative service providers.

... What I've seen recently is that organizations that have already achieved a level of ITSM maturity are really building on that now to improve their efficiency, their effectiveness, and their cost-effectiveness.

Maybe a year or two years ago, other organizations that were less mature and a bit less effective were managing to keep up, because things weren't so tight and there was plenty of fat left. What I'm seeing now is that those organizations that implemented ITSM are getting further and further ahead of their competition.

For organizations that are not managing their IT services effectively toward the end of the slump, it's going to be really difficult. Some organizations will start to grow fast and pick up business, while those that are less effective are going to carry on shrinking.

Hanna: If ITIL has been implemented correctly, then it is not an overhead. As times get tough, it's not something you turn off. It becomes part of what you do day-to-day, and you gain those improvements and efficiencies over time. You don't need to stop doing it. In fact, it's just part of what you do.

... We've gone from managing technology processes, which was certainly an improvement, to managing end-to-end IT service and its lifecycle and focusing on the business outcome. It's not just which technology we are supporting and what silos we might be in. We need to worry about what the outcome is on the business. The starting point should be the outcome, and everything we do should be designed to achieve what's wanted.

Cannon: In terms of trends like cloud, what you're seeing is a focus on exactly what it is that I get out of IT, as opposed to a focus from the business on the internal workings of IT.

... What things like cloud tend to do is to provide business with a way of relating to IT as a set of services, without needing to worry about what's going on underneath the surface. So, business is going to look for clear solutions that meet their needs and can change with their needs in very short times.

They still have to worry about managing the technology. These issues don't go away. It really is just a different way of dealing with the sourcing and resourcing of how you provide services.

... Businesses need to be able to react quickly and ... to be very flexible within a rapidly changing, volatile economy. So, business is going to look for clear solutions that meet their needs and can change with their needs in very short times.

Hanna: An issue that comes up quite a lot is that ITIL Version 3 appears to have gotten much bigger and more complex. Some people look at it and wonder where the old service delivery and service support areas have gone, and they've been taken by surprise by the size of V3 and the number of core books.

When Version 3 came out, it launched with a much bigger perspective right from the beginning. Instead of having just two things to focus on, there are five core books. I think that has made it look much bigger and more complex than Version 2.

It is true that if you go through the education, you do need to get your head around the new service lifecycle concept and the concept called "business outcomes," as we've already mentioned. And, you need to have an appreciation of what's unique to the five core books. But, these changes are long-awaited and very useful additions to ITIL, complementary to what we've learned before.

Rance: If you look at financial management in ITIL Version 3, it says you really have to understand the cost of supplying each service that you supply and you have to understand the value that each of those services delivers to your customers.

Now, that's a very simple concept. If you think of it in a broader context, you can't imagine, say, a car manufacturer who didn't know the cost of producing a car or the value of that car in the market. But, huge numbers of IT service providers really don't understand the cost of supplying their services or the value of those services in the market.

ITIL V3 very much focuses on that sort of idea -- really understanding what we are doing in terms of value and cost-effectiveness at that level, rather than at the procedural level.

Cannon: Financial management really hasn't changed in the essence of what it is. Financial management is a set of very well defined disciplines. Within Version 3, the financial management questions become more strategic. How do we calculate value? How do we align the cost of a service with the actual outcome that the business is trying to achieve? How do we account for changing finances over time?
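The per-service cost and value accounting Rance and Cannon describe can be illustrated with a minimal sketch. The service names and figures below are entirely hypothetical, purely to show the idea of comparing what each service costs to supply against the value it delivers:

```python
# Hypothetical sketch: per-service cost vs. value accounting, in the
# spirit of ITIL v3 financial management. All names and figures invented.
services = {
    # service name: (annual cost to supply, annual value delivered)
    "email": (120_000, 400_000),
    "order-entry": (300_000, 1_200_000),
    "legacy-reporting": (250_000, 90_000),
}

def value_ratio(cost, value):
    """Value delivered per unit of cost; below 1.0 means the service
    costs more to supply than the value it returns."""
    return value / cost

for name, (cost, value) in services.items():
    ratio = value_ratio(cost, value)
    flag = "REVIEW" if ratio < 1.0 else "ok"
    print(f"{name}: cost={cost}, value={value}, ratio={ratio:.2f} [{flag}]")
```

Even a toy view like this makes the "car manufacturer" point concrete: a provider that cannot fill in the two numbers per service cannot answer the strategic questions Cannon lists.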

Rance: A lot of businesses are in the service business themselves. It might not be IT service, but many of the customers we're dealing with are in some kind of service business, whether it's a logistics business or a transport business. Even a retailer is in the service business, though they provide goods as well.

In order to run any kind of a service you need to have service management. You need to manage your incidents, problems, changes, finances, and all those other things. What I'm starting to see is that things that started within the IT organization -- incident management, problem management and change management -- some of my better customers are now starting to pick up within their business operations.

They're doing something very much like ITIL incident management, ITIL change management, or ITIL problem management within the business of delivering the service to their customers.

Hanna: If you're running yourself as a business, you need to understand the business or businesses you serve, and you need to behave in the same way.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript, or download the transcript. Learn more. Sponsor: Hewlett Packard.

Thursday, August 20, 2009

Compuware weighs in on portfolio management that rationalizes IT budgets in tough economy

Listen to the podcast. Download or read a full transcript. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Compuware.

The current economic downturn highlights how drastically businesses and their IT operations need to change -- whether through growth, reductions, or transformation (or all three).

As IT budgets react to such change, leaders need to better understand how to manage such change holistically, and not have change manage them (or worse).

One strong way to be on top of change is by employing IT portfolio management techniques, products, and processes. To learn more about helping enterprises better manage their IT costs and priorities while preparing for flexible growth when the economic tide turns, I recently interviewed Lori Ellsworth, vice president of Changepoint Solutions at Compuware, and David A. Kelly, senior analyst at Upside Research.

Here are some excerpts:
Kelly: It's really hard to improve, if you don't have a way to measure how you're doing, or a way to set goals for where you want to be. That's the idea behind IT portfolio management, as well as project portfolio management (PPM). ... [Leaders need to] take the same type of metrics and measurements that organizations have had in the financial area around their financial processes and try to apply that in the IT area and around the projects they have going on.

[IT portfolio management] measures the projects, as well as helps try to define a way to communicate between the business side of an organization that's setting the goals for what these projects or applications are going to be used for, and the IT side of the organization, which is trying to implement these. And, it makes sure that there are some metrics, measurements, and ways to correlate between the business and IT side.
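The metrics-and-correlation idea Kelly describes can be reduced to a simple weighted score per project, so the business and IT sides are looking at the same ordered list. This is only an illustrative sketch; the criteria, weights, and projects below are invented, not part of any Compuware or PPM product:

```python
# Hypothetical sketch of project portfolio scoring: rank IT projects
# against business-set criteria. Weights and data are invented.
WEIGHTS = {"business_alignment": 0.5, "expected_roi": 0.3, "risk": -0.2}

projects = [
    {"name": "CRM upgrade", "business_alignment": 9, "expected_roi": 7, "risk": 4},
    {"name": "Data center move", "business_alignment": 5, "expected_roi": 6, "risk": 8},
    {"name": "Self-service portal", "business_alignment": 8, "expected_roi": 9, "risk": 3},
]

def score(project):
    """Weighted sum of the scoring criteria (risk counts against)."""
    return sum(WEIGHTS[k] * project[k] for k in WEIGHTS)

# One ranked list that both business and IT can review and challenge.
ranked = sorted(projects, key=score, reverse=True)
for p in ranked:
    print(f"{p['name']}: {score(p):.1f}")
```

The point is not the particular formula but that the scoring basis is explicit and consistent, which is exactly the consistency Kelly calls for below when decisions can no longer be made ad hoc.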

Ellsworth: IT organizations now are moving toward acting in a more strategic role. Things are changing rapidly in the business environment, which means the organizations that they're serving need to change quickly and they are depending on, or insisting on, IT changing and being responsive with them.

It's essential that IT watch what's going on, participate in the business, and move quickly to respond to competitive opportunities or economic challenges. They need to understand everything that's under way in their organization to serve the business and what they have available to them in terms of resources -- and they need to be able to collaborate and interact with the business on a regular basis to adjust and make change and continue to serve the business.

If IT wants to engage in a conversation about moving investments, about stopping something they're working on so they can respond to a market opportunity, for example, they need to understand who are the people, what is the cost, and where can we make changes to respond to the business. ... This isn't about IT deciding on different projects they could work on and what benefit it might deliver to the business. The business is at the table, collaborating, looking at all the potential opportunities for investment, and reaching agreement as a business on what are the top priorities.

Kelly: The other thing that's needed is consistency. When you're making these kinds of decisions, for a lot of IT organizations and organizations in general, if times are good, you can make a lot of decisions in an ad hoc fashion and still be pretty successful.

But, in dynamic and more challenging economic times, you want the decisions that you or other people on the IT team, as well as the business, are making to be consistent. You want them to have some basis in reality and in an accepted process. We've talked about metrics here, and about what kind of metrics you can provide to the Chief Operating Officer.

You need consistency in these dynamic times and also you need a way to collaborate.

Ellsworth: There are a couple of problems with manual processes. They're very labor-intensive. We've talked about responsiveness. We need information to drive decision-making. So, the moment we rely on individual efforts, or on people who have to go out and sit through meetings and collect data, we're not getting data that we can necessarily trust. We're not getting data that is timely, to your point, and we're not able to make those decisions to be responsive.

You end up with a situation where very definitely your resources are busy and fully deployed, but they're not necessarily doing the right things that matter the most to the business. That data needs to be real-time, so that, at multiple levels in the organization, we can be constantly assessing the health and the alignment in terms of what IT is doing to deliver to the business, and we have the information to make a change.

Kelly: To me, it's analogous to what we saw maybe 10 years ago in software development, when a whole bunch of automated testing tools became available, and organizations started to put a lot of emphasis in that area.

As you're developing an application, you can certainly test it manually and have people sitting there testing it, but when you can automate those processes they become more consistent. They become thorough, and they become something that can be done automatically in the background.

We're seeing the same thing when it comes to managing IT applications and projects, and the whole situation that's going on in the IT area.

When you start looking at IT portfolio management, that provides the same kind of automation, controls, and structure by which you can not only increase the quality of the decisions that are being made, but you can also do it in a way that almost results in less overhead and less manual work from an organization.

... Areas such as legacy transformation or modernization are good for this, because you do have to make a lot of decisions ... where you need to gain consensus. [IT portfolio management] can certainly help deliver that return on investment (ROI) much faster.

Ellsworth: It's also an opportunity to reduce the total number of applications, and the follow-on is an approach to being more efficient or investing in the applications that are strategic to the business.

It sounds pretty basic, but when an organization starts to inventory all of the projects under way and all of the applications deployed in production serving the business, even that simple exercise of putting them in a single view and categorizing them with one or two criteria quite quickly identifies the rogue projects that are under way.

... They will quickly learn, "We thought we had 100 applications, and we've now discovered there are 300." They'll also quickly identify those applications that no one is using. There is some opportunity to start pulling back the effort or the cost they're investing in those activities and either reducing the cost out of the business or reinvesting in something that's more important to the business.
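The inventory-and-categorize exercise Ellsworth describes can be sketched in a few lines. The application records and the simple "owner" and "active users" criteria here are hypothetical, chosen only to show how one or two criteria already surface retirement and review candidates:

```python
# Hypothetical sketch of the application-inventory exercise: put every
# deployed application into one view, tag each with a couple of simple
# criteria, and surface the unused and rogue ones. Data is invented.
inventory = [
    {"name": "payroll", "owner": "finance", "active_users": 220},
    {"name": "old-intranet", "owner": "unknown", "active_users": 0},
    {"name": "shadow-crm", "owner": "unknown", "active_users": 12},
]

def categorize(app):
    if app["active_users"] == 0:
        return "retire"   # no one is using it; reclaim the cost
    if app["owner"] == "unknown":
        return "review"   # likely a rogue or shadow project
    return "keep"

# Group the single view by category for the business conversation.
by_category = {}
for app in inventory:
    by_category.setdefault(categorize(app), []).append(app["name"])

print(by_category)
```

Scaling this thinking from three records to the "we thought we had 100, we've now discovered 300" case is precisely where a portfolio management product replaces the spreadsheet.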

... I'm also seeing an increased interest in participation, from a finance perspective, outside the IT organization. Often, the Chief Information Officer (CIO) and the executive in the finance area are working together.

The line-of-business executives -- the customers, if you will, of the CIO -- are starting to be more mature, if I can use that expression, in terms of their understanding of technology and of how they should be working with technology and driving that collaboration. So, there is some increased executive involvement even from outside IT, from the CIO's peers.

... IT needs to recognize that there are competitive alternatives, and certainly, if IT isn't delivering, the business will go and look elsewhere. In some simple examples, you can see line-of-business customers going out and engaging with a software-as-a-service (SaaS) solution in a particular area, because they can do that and bypass IT.

If they're not making the right decisions and doing the things that have the highest return to the business or if they are delivering poorly, it's really about missed opportunity and lower ROI.

Kelly: If you can do some application consolidation, you may be able to consider new deployment opportunities and cloud-based solutions. It will make the decision-making process within IT more nimble and more flexible, as well as enable them to respond more quickly to the line of business owners and be able to almost empower them with the right information and a structured decision-making process.

Listen to the podcast. Download or read a full transcript. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Compuware.