Monday, February 1, 2010

Technology, process and people must combine smoothly to achieve strategic virtualization benefits

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.

For more information on virtualization and how it provides a foundation for private clouds, plan to attend the HP Cloud Virtual Conference in March. Register now for this event:
Asia Pacific, Japan - March 2
Europe, Middle East and Africa - March 3
Americas - March 4

The latest BriefingsDirect podcast discussion delves into proper planning and implementation of data-center virtualization to gain strategic-level advantage in enterprises.

Because companies generally begin their use of server virtualization at a tactical level, there is often a complex hurdle in expanding its use. Analysts predict that virtualization will support upwards of half of server workloads in just a few years. Yet we are already seeing gaps between enterprises' expectations and their ability to adopt virtualization aggressively without stumbling in some way.

These gaps can involve issues around people, process, and technology, and often all three in some combination. Process refinement, proper methodological involvement, and swift problem management offer proven risk reduction and surefire ways of avoiding pitfalls as virtualization use moves to higher scale.

The goal becomes a lifecycle orchestration and governed management approach to virtualization efforts, so that the business outcomes, as well as the desired IT efficiencies, are accomplished.

Areas that typically need to be part of any strategic virtualization drive include sufficient education, skills acquisition, and training. Outsourcing, managed mixed sourcing, and consulting around implementation and operational management are also essential. Then, there are the usual needs around hardware, platforms, and systems, as well as software, testing, and integration.

So, we're here with a panel of Hewlett-Packard (HP) executives to examine in depth the challenges of successful large-scale virtualization adoption. We'll look at how a supplier like HP can help fill the gaps that can hinder virtualization payoffs.

Please join me in welcoming our panel: Tom Clement, worldwide portfolio manager in HP Education Services; Bob Meyer, virtualization solutions lead with HP Enterprise Business; Dionne Morgan, worldwide marketing manager at HP Technology Services; Ortega Pittman, worldwide product marketing, HP Enterprise Services, and Ryan Reed, worldwide marketing manager at HP Enterprise Business. The discussion is moderated by BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Meyer: The downturn has really forced anybody who was on the fence to go headlong into virtualization. Today, we are technically ahead of where we were a year or two ago with the virtualization experience.

Everybody has significant amounts of virtualization in the production environment. They’ve been able to get a handle on what it can do to see what the real results and tangible benefits are. They can see, especially on the capital expenditure side, what it could do for the budgets and what benefits it can deliver.

Now, looking forward, people realize the benefits, and they are not looking at it just as an endpoint. They're looking down the road and saying, "Okay, this technology is foundational for cloud computing and some other things." Rather than slowing down, we'll see those workloads increase.

They went from just single percentage points a year and a half ago to 12-15 percent now. Within two years, people are saying it should be about 50 percent. The technology has matured. People have a lot of experience with it. They like what they see in results, and, rather than slow down, it's bringing efficiency to things like the new services model.

Morgan: Many people have probably heard the term "virtual machine sprawl" or "VM sprawl," and that's one of the risks. Part of the reason VM sprawl occurs is because there are no clear defined processes in place to keep the virtualized environment under control.

Virtualization makes it so easy to deploy a new virtual machine or a new server that, if you don't have the proper processes in place, you could have more and more of these virtual machines being deployed, and you lose control. You lose track of them.

That's why it's very important for our clients to think about ... how they're going to continue to manage virtualization on an on-going basis, so they keep it under control.

Pittman: Many times, small, medium, and large organizations have virtualization needs, but might not have the skills on hand.

Meeting that skill demand, and giving customers the ability to get started instantly, is something that we take a lot of pride in. A global track record of doing that very well is something that HP Enterprise Services can bring from an outsourcing perspective. That's where HP Enterprise Services adds value in meeting customers' needs around skills.

Clement: Our 30-plus years of experience in providing customer training has shown, time and time again, that technology investments by themselves don’t ensure success.

The business results that clients want in virtualization won’t be achieved until those three elements you just mentioned -- technology, process and people -- are all addressed and aligned.

That's really where training comes in. Increasing the technical skills of our customers' people is often one of the most effective ways for them to grow, increase their productivity and boost the success rates of their virtualization initiatives.

In fact, an interesting study just last year from IDC found that 60 percent of the factors leading to general success in the IT function are attributed to the skills of the people involved. Our education team can help address both the people and process parts of the equation.

For more information on HP's Virtual Services, please go to: www.hp.com/go/virtualization and www.hp.com/go/services.

Reed: We see a shift in the way that IT organizations have considered what they think would be strategic to their end business function. A lot of that is driven through the analysis that goes into planning for a virtual server environment.

When doing something like a virtual server environment, IT organizations have to take a step back and analyze whether or not this is something they've got the core competency to support. Oftentimes, they come to the conclusion that they don't have the right set of skills, resources, or locations to support those virtual servers, in terms of their data-center location, as well as where those resources are sitting.

So, during the planning of virtual server environments, IT organizations will choose to outsource the planning, the implementation, and the ongoing management of that IT infrastructure to companies like HP.

It's definitely a good opportunity for IT organizations to take a step back and look at how they want to have that IT infrastructure managed, and oftentimes outsourcing is a part of that conversation.

Meyer: One thing virtualization does very nicely is blur the connections between the various pieces of infrastructure, and the technology has developed quite a bit to allow that to ebb and flow with the business needs.

And, you're right. The other side of that is getting the people to actually work and plan together. We always talk about virtualization as not an endpoint; it's an enabling technology to get you there.

If you put what we're talking about in context, the next thing that people want to go to is maybe building a private-cloud service delivery model. Those types of things will depend on that cooperation. It's not just virtualization that's causing that; it's really the newer service delivery models. Where people are heading with their services absolutely requires management and a look at new processes as well.

Pittman: We'd like to work with our customers to understand that it's a starting point to consolidate, but there is a lot more in the broader ecosystem to consider, as they think about optimizing their IT environment.

One of HP’s philosophies is the whole concept of converged infrastructure. That's thinking about the infrastructure more holistically and addressing the applications, as you said, as well as your server environments and not doing one off, but looking more holistically to get the full benefit.

Moving forward, that's something that we certainly could help customers do from an outsourcing standpoint in enabling all of the parts, so there aren’t gaps that cause bigger problems than the one hiccup that started the whole notion of virtualization in the beginning.

Morgan: We think about this in terms of their life cycle. We like to start with a strategy discussion, where we have consultants sit down with the client to better understand what they’re trying to accomplish from a business objective perspective. We want to make sure that the customers are thinking about this first from the business perspective. What are their goals? What are they trying to accomplish? And, how can virtualization help them accomplish those goals?

Then, we also can help them with their actual return on investment (ROI) analysis and we have ROI tools that we can use to help them develop that analysis. We have experts to help them with the business justification. We try to take it from a business approach first and then design the right virtualization solution to help them accomplish those goals.

Pittman: HP Enterprise Services worked with the Navy/Marine Corps Intranet (NMCI), which is the world’s largest private network, serving and supporting sailors, marines, and civilians in more than 620 locations worldwide.

They were experiencing business challenges in productivity and innovation and in the security areas. Our approach was to consolidate 2,700 physical servers down to 300, reducing outage minutes by almost half. This decreased NMCI’s IT footprint by almost 40 percent and cut carbon emissions by almost 7,000 tons.

Virtualizing the servers in this environment enabled them to eliminate carbon emissions equivalent to taking 3,600 cars off the road for one year. So, there were tremendous improvements in that area. We minimized their downtime and controlled cost. We accelerated transfer times, transparency and optimal performance.

All of this was done through the outsourcing virtualization support of HP Enterprise Services and we're really proud that that had a huge impact. They were recognized for an award, as a result of this virtualization improvement, which was pretty outstanding. We talked a little earlier about the broader benefits that customers can expect, the services that help make all of this happen.

In our full portfolio within the IT organization of HP, that would be server management services, data center modernization, network application services, storage services, web hosting services, and network management services. All combined, they made this happen successfully. We're really proud of that, and that's an example of the very large-scale impact that's reaping a lot of benefit.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.

For more information on virtualization and how it provides a foundation for private clouds, plan to attend the HP Cloud Virtual Conference in March. Register now for this event:
Asia Pacific, Japan - March 2
Europe, Middle East and Africa - March 3
Americas - March 4

You may also be interested in:

Business event processing and SOA: Joined at the hip

This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.

By Jason Bloomberg, edited by Ronald Schmelzer

At the dawn of the computer age, forecasters predicted all manner of changes in day-to-day life, including fully automated kitchens, cars that drove themselves, paperless offices, and more. In all of these now-quaint views of the future, the prognostication focused on applying the technologies of the future to the problems of the present. Where an office had a filing cabinet full of paper, the future will put that paper in a computer, and voila: The paperless office!

The reality of the last 50 years, of course, is quite different. As technology evolved, so too did our appetite for information. Where the size of the filing cabinet in a 1950s office constrained the quantity of information a business could manage, today we have no such limitations. But rather than stopping at a simplification of existing business, we continue to push the limits of the technology at our disposal. If we have terabytes of storage, then to be sure we’ll soon have terabytes of information to fill it. If we have networks running at gigabit speeds, then you can rest assured the quantity of information we’ll attempt to push through such pipes will soon consume whatever capacity we have.

In many ways, in fact, it is the quantity and variety of information on the move, rather than simply at rest, that defines the modern business world. Today’s businesses operate in a complex ecosystem of connected, interrelated events, where each event creates information that flies around our networks. From fluctuating interest rates to customer transactions to manufacturing processes, business events drive the business while simultaneously causing the creation and movement of information.

In particular, it is the tight interrelationship between business events—occurrences in the real world that are relevant to the business, and software events, which are messages that such events generate, that creates both a problem as well as an opportunity for businesses. The sheer quantity and variety of events promises to swamp any organization unprepared for the onslaught of network traffic that today’s business generates. But on the other hand, business events are also the lifeblood of the organization, as everything the business does appears in real time in the event traffic on their network. Sometimes the patterns such events exhibit are easy to identify, but more often they are hard to detect and correlate. Organizations need to process and leverage business events to provide insight into the workings of their organization in order to run the business and empower the people within it.

What is business event processing?

The key to leveraging business events is to apply software that processes software events—such as messages on the network—in such a way as to gain insight into, and control over the business events that generate them. Such software is known as Business Event Processing (BEP). BEP software helps businesses detect, analyze, and respond to complex events to take advantage of emerging opportunities, handle unexpected exceptions, and redirect resources as necessary—essentially, dealing with business events on the business level, independent of the technology context for those events. BEP software often forms part of a Business Process Management (BPM) solution, which combines event pattern detection with dynamic process execution.

The goal of BEP is to detect and interpret business situations, resulting in effective business decision making. BEP enables organizations to extract events from multiple sources, detect business situations based on patterns of events, and then derive new events through aggregation of events as well as by adding new information. BEP software helps companies identify patterns and establish connections between events, and then initiates a new event, or a trigger, when an important trend emerges.
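The extract, detect, and derive loop described above can be sketched in a few lines. This is only an illustrative sketch, not any BEP product's API; the event shape, the "burst" pattern, the window size, and the threshold are all invented for the example:

```python
from collections import deque
from datetime import datetime, timedelta

def detect_burst(events, window=timedelta(seconds=60), threshold=5):
    """Derive a new aggregate event ('burst-detected') whenever more than
    `threshold` raw events from one source arrive within `window`."""
    recent = {}   # source -> deque of recent timestamps
    derived = []  # newly derived (aggregate) business events
    for ts, source in events:
        q = recent.setdefault(source, deque())
        q.append(ts)
        # Slide the window: discard timestamps older than `window`
        while q and ts - q[0] > window:
            q.popleft()
        if len(q) > threshold:
            derived.append((ts, "burst-detected", source))
            q.clear()  # one burst yields one derived event
    return derived
```

Production BEP engines express such patterns declaratively and correlate across many sources at once, but the principle is the same: raw software events go in, and higher-level business events come out.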

BEP is becoming increasingly important across the business environment because it enables a wide variety of organizations to proactively analyze and respond to small market changes that can have significant business impact. BEP also has a variety of other uses, for example:
  • Retailers’ BEP solutions proactively alert them about the success or failure of a product as goods move off the shelf, allowing them to make real time changes to pricing, inventory, and marketing campaigns

  • E-commerce vendors leverage BEP to help identify fraud and reduce abandoned shopping carts

  • Trading markets use BEP to uncover and compare minute changes throughout global markets to support buy/sell decisions as well as to ensure the timely execution of bids

  • The massive multi-player online game industry uses BEP to uncover unauthorized activities among tens of thousands of actions per second

  • Fleet management companies leverage BEP to help them make instantaneous decisions on how to deal with products that are lost in transit or delayed due to unforeseen circumstances.
Business event processing in an SOA context

BEP describes a wide range of ways that enterprises approach events, from simple to complex. Opening an account, making a withdrawal, buying an item, changes in sensor or meter readings, or sending an invoice are all examples of common business events. Regardless of the potential complexity of such events, organizations must both recognize new events and understand the importance of business critical events in a noisy environment. Only by recognizing important events in real time will such companies be able to leverage their IT systems and business processes to speed response and reduce the need for manual processing.

Events, however, do not exist in a vacuum—they depend upon various applications and systems across the IT infrastructure to create and consume them. And for every type of application, there is potentially a new type of event. The BEP challenge, therefore, is dealing with environments of broad heterogeneity, separating what’s important to the business from the underlying complexity of the technology.

In other words, BEP does not stand alone in the IT organization. It requires a flexible architecture that can abstract the underlying heterogeneity the IT environment presents. Today’s enterprises are implementing Service-Oriented Architecture (SOA) for this purpose. SOA is a set of best practices for organizing IT resources in a flexible way to support agile business processes by representing IT capabilities and information as Services, which abstract the complexity of the underlying technology from the business. Businesses then define events and their responses through the IT perspective of interacting with Services.

Applying SOA to business events: Heterogeneity and flexibility

The story of how to apply SOA to business events takes place simultaneously on two levels: above and below the Service abstraction. Above this abstraction is the business environment, where the business is able to leverage BEP to glean real-time information about the business, independent of the underlying technology. In contrast, below the Service abstraction, events are messages moving from one Service endpoint to another, typically (but not necessarily) in XML format.

It is below the Service abstraction, in fact, that applying SOA to business events provides much of its value to the organization. Service interfaces, by their nature, send and/or receive messages, so the broader the SOA implementation, the more the message traffic between Service endpoints represents the operations of the business. From the BEP perspective, however, such messages are events, and provide visibility into the business in an ad hoc, real time manner.
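In concrete terms, treating messages as events means an ordinary XML payload flowing between Service endpoints can be tapped and lifted into a business-level event without modifying the Services themselves. A minimal sketch; the element names, the sample payload, and the "large order" threshold are invented for illustration:

```python
import xml.etree.ElementTree as ET

# A hypothetical order message passing between two Service endpoints
MESSAGE = """\
<order>
  <customer>ACME Corp</customer>
  <total currency="USD">125000</total>
</order>
"""

def to_business_event(xml_text, large_order_threshold=100000.0):
    """Lift a raw service message into a business-level event."""
    root = ET.fromstring(xml_text)
    total = float(root.findtext("total"))
    return {
        "type": "large-order" if total > large_order_threshold else "order",
        "customer": root.findtext("customer"),
        "total": total,
    }
```

The point of the sketch is the separation of levels: the Service endpoints exchange XML as before, while the event layer interprets that traffic in business terms.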

While the SOA-enabled BEP story offers business value beneath the Service abstraction, the benefits to the organization above the abstraction are every bit as important. After all, as the pace of business accelerates, there are business benefits from optimizing how the organization handles business events. Improved customer responsiveness, more optimal usage of physical assets and better management of complex value chains all benefit from improvements in event processing. Furthermore, if managers have visibility into business events, then they can then take more effective, proactive steps to optimize production and reduce costly slowdowns.

Similarly, event processing can improve customer service and increase customer satisfaction. Because event processing can identify important events and deliver the right information to the right place at the right time, managers can mitigate or avoid a wide range of problems. Such benefits accrue not only in individual instances, but across business processes as well. Visibility into events helps line-of-business managers deal with changes in business processes, thus making the business more responsive.

Combined with SOA and BPM, therefore, BEP extends the value of each as well as the synergies between them. Following SOA best practices can leverage the value of both BPM and BEP, as SOA hides the complexity of the IT environment from the business aspects of the solutions. The bottom line is that BPM, SOA and BEP combine to meet the needs of the business more effectively than any one or two of the approaches can separately.

The ZapThink take

The exponential growth of information in the business world continues unabated, and there's no reason to expect it to slack off in the future. This growth is driving the need for event processing, as well as the enabling technologies of Web 2.0 and the underlying architecture of SOA. The combination of these three approaches provides a foundation for flexibility, composability, integration, and scalability. At the heart of this synergy are open standards, which facilitate all the various interactions among systems that go into Business Event Processing. Furthermore, existing security, governance, and BPM technologies round out the set of enabling technologies that feed this confluence of approaches.

The bottom line, however, is the business story. BEP, combined with SOA, further bridges the gap between business and IT. Only the business knows the relevance of business events, SOA abstracts the underlying technology, and Web 2.0 provides an empowering interface to increasingly powerful, real time capabilities and information.

The challenge with discussing this synergy among BEP, SOA, and Web 2.0 is that no one term does it justice. SOA is a critical part of this story, but only a part. SOA delivers a set of principles for organizing an organization's resources to provide a business-centric abstraction, because the business doesn't care what server, network, or data center the implementation underlying a Service runs on. All they care about is that the Service works as advertised.

This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.


SPECIAL PARTNER OFFER

SOA and EA Training, Certification,
and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.

Saturday, January 30, 2010

Time to give server virtualization's twin, storage virtualization, a top place at IT efficiency table

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.

The latest BriefingsDirect podcast discussion homes in on storage virtualization. You've heard a lot about server virtualization over the past few years, and many enterprises have adopted virtual servers to improve their ability to manage runtime workloads, raise utilization rates, and cut total cost.

But, as a sibling to server virtualization, storage virtualization has some strong benefits of its own, not the least of which is the ability to better support server virtualization and make it more successful.

We'll look at how storage virtualization works, where it fits in, and why it makes a lot of sense. The cost savings metrics alone caught me by surprise, making me question why we haven't been talking about storage and server virtualization efforts in the same breath over these past several years.

Here to help understand how to better take advantage of storage virtualization, we're joined by Mike Koponen, HP's StorageWorks Worldwide Solutions marketing manager. The discussion is moderated by BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Koponen: Storage requirements aren’t letting up from regulatory requirements, expansion, 24x7 business environments, and the explosion of multimedia. Storage growth is certainly not stopping due to a slowed down economy.

So enterprises need to boost efficiencies from their existing assets as well as the future assets they're going to acquire and then to look for ways to cut capital and operating expenditures. That's really where storage virtualization fits in.

We found that a lot of businesses may have as little as 20 percent utilization of their storage capacity. By going to storage virtualization, they can see as much as a 300 percent increase in existing storage asset utilization, depending upon how it's implemented.

So storage virtualization is a way to increase asset utilization. It's also a way to save on administrative cost and to improve operational efficiencies, as businesses deal with their increasing storage requirements. In fact, if businesses don't reevaluate their storage infrastructures at the same time as they're reevaluating their server infrastructures, they won't realize the full potential of server virtualization.

In the past, customers would just continue to deploy servers with direct-attached storage (DAS). All of a sudden, they ended up with silos or islands of storage that were more complex to manage and didn't have the agility that you would need to shift storage resources around from application to application.

Then, people moved into deploying network storage or shared storage, storage area networks (SANs) or network-attached storage (NAS) systems and realized a gain in efficiency from that. But, the same can happen. You can end up with islands of SAN systems or NAS systems. Then, to bump things up to the next level of asset utilization, network storage virtualization comes into play.

Now, you can pool all those heterogeneous systems under one common management environment to make it easy to manage and provision these islands of storage that you wound up with.

Studies show swift payback

A white paper recently published by IDC focuses on the business value of storage virtualization. It looked at a number of factors -- reduced IT labor, reduced hardware and software cost, reduced infrastructure cost, and user productivity improvements. Virtualized storage had a payback period of anywhere from four to six months, depending on the type of virtualized storage being deployed.

There are different needs or requirements that drive the use of storage virtualization and also different benefits. It may be flexible allocation of tiered storage, so you can move data to different tiers of storage based upon its importance and upon how fast you want to access it. You can take less business-critical information that you need to access less frequently and put it on lower cost storage.
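The tier-placement decision described here can be reduced to a couple of inputs: business importance and access frequency. A minimal sketch of such a policy; the tier names and thresholds below are invented for illustration, not HP's actual tiering rules:

```python
def choose_tier(business_critical, reads_per_day):
    """Map a data set to a storage tier from two inputs: business
    importance and access frequency. Thresholds are illustrative."""
    if business_critical and reads_per_day >= 100:
        return "tier-1: fast, replicated SAN"
    if reads_per_day >= 10:
        return "tier-2: midrange SAN"
    return "tier-3: low-cost archive storage"
```

Real tiering engines weigh more inputs (recovery objectives, retention rules, cost per gigabyte) and often migrate data automatically, but the shape of the decision is the same.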

The other might be that you just need more efficient snap-shotting, a replication of things, to provide the right degree of data protection to your business. It's a function of understanding what the top business needs are and then finding the right type of storage virtualization that matches those.

In order to take advantage of the advanced capabilities of server virtualization, such as live migration of virtual machines and high-availability infrastructures, server virtualization requires some form of shared storage.

So, in some sense, it's a base requirement that you need shared storage. But, what we've experienced is that, when you do server virtualization, it places some unique requirements on your storage infrastructure in terms of high availability and performance loads.

Server virtualization drives the creation of more data from the standpoint of more snapshots, more replicas, and things like that. So, you can quickly consume a lot of storage, if you don't have an efficient storage management scheme in place.

And, there's manageability too. Virtual server environments are extremely flexible. It's much easier to deploy new applications. You need a storage infrastructure that is equally easy to manage, so that you can provision new storage just as quickly as you can provision new servers.

As a result, you certainly get an increased degree of data protection by being able to meet backup windows and not having to compromise the amount of information you back up, because you're trying to squeeze more backups through a limited number of physical servers. When you do server virtualization, you're reducing the number of physical servers and running more virtual ones on top of that reduced number.

You might be trying to move the same number of backups through fewer physical servers. You then also end up with a higher degree of data protection, because with a virtualized server storage environment you can still achieve the volume of backups you need in a shorter window.

From an HP portfolio standpoint, we have some innovative products like the HP LeftHand SAN system that's based on a clustered storage architecture, where data is striped across the arrays and the cluster. If a single array goes down in the cluster, the volume is still online and available to your virtual server environment, so that high degree of application availability is maintained.
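The availability property described for a clustered, striped architecture can be shown with a toy model: if every block is written to two distinct arrays, losing any single array still leaves every block readable, so the volume stays online. This is a simplified sketch of two-way mirroring across a cluster, not HP's actual implementation:

```python
def place_blocks(num_blocks, nodes):
    """Stripe blocks across the cluster, writing each block to two
    distinct nodes (round-robin), so one node failure loses nothing."""
    n = len(nodes)
    assert n >= 2, "mirroring needs at least two nodes"
    return {b: (nodes[b % n], nodes[(b + 1) % n]) for b in range(num_blocks)}

def volume_online(placement, failed_node):
    """The volume stays online if every block keeps a readable copy."""
    return all(any(node != failed_node for node in copies)
               for copies in placement.values())
```

The trade-off in such designs is capacity: every block is stored twice, in exchange for the ability to ride through a single-array failure without taking the volume offline.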

For people who want to learn more about storage virtualization and what HP has to offer to improve their business returns, I suggest they go to www.hp.com/go/storagevirtualization. There, they can learn about the different types of storage virtualization technologies available. There are also some assets on that website to help them with the justification of putting storage virtualization within their companies.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.

Friday, January 29, 2010

Security skills provide top draw across still challenging U.S. IT jobs landscape

Listen to the podcast. Read a full transcript or download a copy. Find it on iTunes/iPod and Podcast.com. Learn more. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Gain additional data and analysis from Foote Partners on the IT jobs market.

The latest BriefingsDirect Analyst Insights Edition, Volume 48, centers on the IT job landscape for 2010. We interview David Foote, CEO and chief research officer, as well as co-founder, at Foote Partners LLC of Vero Beach, Fla.

David closely tracks the hiring and human resources trends across the IT landscape. He'll share his findings of where the recession has taken IT hiring and where the recovery will shape up. We'll also look at what skills are going to be in demand and which ones are not. David will help those in IT, or those seeking to enter IT, identify where the new job opportunities lie.

This periodic discussion and dissection of IT infrastructure related news and events, with a panel of industry analysts and guests, comes to you with the help of our charter sponsor, Active Endpoints, maker of the ActiveVOS business process management system, and through the support of TIBCO Software. I'm your host and moderator Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
I co-founded this company with a former senior partner at McKinsey. We developed a number of products and took them out in 1997. We not only have that big IT executive and trends focus as analysts, but also very much a business focus.

We've also populated this company with people from the HR industry, because one of the products we are best known for is the tracking of pay and demand for IT salaries and skills.

We have a proprietary database -- which I'll be drawing from today -- of about 2,000 companies in the U.S. and Canada. It covers about 95,000 IT workers. We use this base to monitor trends and to collect information about compensation and attitudes and what executives are thinking about as they manage IT departments.

For many years, IT people were basically people with deep technical skills in a lot of areas of infrastructure, systems, network, and communications. Then, the Internet happened.

All of a sudden, huge chunks of the budget in IT moved into lines of business. That opened the door for a lot of IT talent that wasn't simply defined as technical, but also customer facing and with knowledge of the business, the industry, and solutions. We've been seeing a maturation of that all along.

What's happened in the last three years is that, when we talk about workforce issues and trends, the currency in IT is much more skills versus jobs, and part of what's inched that along has been outsourcing.

If you need to get something done, you can certainly purchase that and hire people full-time or you can rent it by going anywhere in the world, Vietnam, Southeast Asia, India, or many other places. Essentially, you are just purchasing a market basket of skills. Or, these days, you can give it over to somebody, and by that I mean managed services, which is the new form of what has been traditionally called outsourcing.

It's not so much about hiring, but about how we determine what skills we need, how we find those, and how we execute. What's really happened in two or three years is that the speed at which decisions are made and then implemented has gotten to the point where you have to make decisions in a matter of days and weeks, and not months.

Resisting the temptation

There have been some interesting behaviors during this recession that I haven't seen in prior recessions. That leads me to believe that people have really resisted the temptation to reduce cost at the expense of what the organization will look like in 2011 or 2012, when we are past this recession and are back into business as usual.

People have learned something. That's been a big difference in the last three years. ... Unemployment in IT is usually half of what it is in the general job market, if you look at Bureau of Labor Statistics (BLS) numbers. I can tell you right now that jobs, in terms of unemployment in IT, have really stabilized.

In the last three months [of 2009] there was a net gain of 11,200 jobs in these five [IT] categories. If you look at the previous eight months, prior to September, there was a loss of 31,000 jobs.


So going into 2010, the services industry will absolutely be looking for talent. There's going to be probably a greater need for consultants, and companies looking for help in a lot of the execution. That's because there are still a lot of hiring restrictions out there right now. Companies simply cannot go to the market to find bodies, even if they wanted to.

Companies are still very nervous about hiring, or to put it this way, investing in full-time talent, when the overhead on a full-time worker is usually 80-100 percent of their salaries. If they can find that talent somewhere else, they are going to hire it.

There are certain areas, such as security, where there is a tendency not to want to hire outside talent, because it is too important to the company. There are certain legacy skills that are important, but a lot of the security managed services purchased in 2009 were bought by small- to medium-sized companies that simply don't have big IT staffs.

If you have 5,000, 6,000, or 7,000 people working in IT, you're probably going to do a lot of your own security, but small and medium-sized companies have not, and that's an extremely hot area right now to be working in.

We track the value of skills and premium pay for skills, and the only segment of IT that has actually gained value, since the recession started in 2007, is security, and it has been progressive. We haven't seen a downturn in its value in one quarter.

High demand for security certification

Since 2007, when this recession started, the overall market value of security certifications is up about 3 percent. Across the 200 certified skills among the 406 skills we track in this survey, skills overall have dropped about 6.5 percent in value, but security certifications are up 2.9 percent.

It is a tremendous place to be right now. We've asked people exactly what skills they're hiring, and they have given us this list: forensics, identity and access management, intrusion detection and prevention systems, disk file-level encryption solutions, including removable media, data leakage prevention, biometrics, web content filters, VoIP security, some application security, particularly in small to medium sized companies (SMBs), and governance, compliance, and audit, of course.

The public sector has been on a real tear. As analysts, we get a lot of privileged information. One of the things we have heard from a number of sources, though I can't tell you the reason why, is that the National Security Agency and Homeland Security are doing a lot of recruiting in the private sector right now -- for in-the-trenches people.

I think there was a feeling that there wasn't enough really deep technical, in-the-trenches talent in security. There were a lot of policy people, but not enough actual practitioners. Because of the Cyber Security Initiative, particularly under the current administration, there has been a lot of hiring.

Managed services look like one of the hottest areas right now, especially in networking and communication: Metro Ethernet, VPNs, IP voice, and wireless security. If you look at the wireless security market right now, it’s a $9 billion market in Europe and a $5.7 billion market in Asia-Pacific, but in North America it’s between $4 billion and $5 billion.

There's a lot of activity in wireless security. We have to go right down into every one of these segments. I could give you an idea of where the growth is spurting right now. North America is not leading a lot of this. Other parts of the world are leading this, which gives our companies opportunities to play in those markets as well.

For many years, as you know, Dana, it was everybody taking on America, but now America is taking on the rest of the world. They're looking at opportunities abroad, and that’s had a bigger impact on labor as well. If you're building products and forming alliances and partnerships with companies abroad, you're using their talent and you're using your talent in their countries. There is this global labor arbitrage, global workforce, that companies have right now, and not just the North American workforce.
Listen to the podcast. Read a full transcript or download a copy. Find it on iTunes/iPod and Podcast.com. Learn more. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Gain additional data and analysis from Foote Partners on the IT jobs market.

Apple and Oracle on the way to doing what IBM and Microsoft could not: Dominate entire markets

I was a bit distracted from the Apple iPad news due to the marathon Oracle conference Wednesday on its shiny new Sun Microsystems acquisition.

But the more I thought about it, the more these two companies are extremely well positioned to actually fulfill what other powerful companies tried to do and failed. Apple and Oracle may be unstoppable in their burgeoning power to dominate the collection of profits across vast and essential markets for decades.

Apple is well on the way to dominating the way that multimedia content is priced and distributed, perhaps unlike any company since Hearst in its 1920s heyday. Apple is not killing the old to usher in the new, as Google is. Apple is rescuing the old media models with a viable online direct payment model. Then it will take all the real dough.

The iPad is a red herring, almost certainly a loss leader, like Apple TV. The real business is brokering a critical mass of music, spoken word, movies, TV, books, magazines, and newspapers. All the digital content that's fit to access. The iPad simply helps convince the producers and consumers to take the iTunes and App Store model into the domain of the formerly printed word. It should work, too.

Oracle is off to becoming the one-stop shop for mission-critical enterprise IT ... as a service. IT can come as an Oracle-provided service, from soup to nuts, applications to silicon. The "service" is that you only need go to Oracle, and that the stuff actually works well. Just leave the driving to Oracle. It should work, too.

This is a mighty attractive bid right now to a lot of corporations. The in-house suppliers of raw compute infrastructure resources are caught in a huge, decades-in-the-making vise -- of needing to cut costs, manage energy, reduce risk, and back off of complexity. They can't do that under the status quo.

In doing the complete IT package gig, Oracle has signaled the end of the best-of-breed, heterogeneous, and perhaps open source components era of IT. In the new IT era, services are king. The way you actually serve or acquire them is far less of a concern. Enterprises focus on the business, and the IT comes, well, like electricity.

This is why "cloud" makes no sense to Oracle's CEO Larry Ellison. He'd rather we take out the word "cloud" from cloud computing and replace it with "Oracle." Now that makes sense!

All the necessary ingredients

Oracle has all the major parts and smarts it needs to do this, by the way. Oracle may need an acquisition or two more for better management and perhaps hosting. But that's about it.

Like Apple, Oracle is not killing the old IT era to usher in the new. Oracle is rescuing the old IT models with a viable complete IT acquisition model. Then it too will take all the real dough.

Incidentally, IBM tried, and came quite close, to a similar variety of enterprise IT domination. That was more than 30 years ago. IBM was an era or two too early. Microsoft tried, and came moderately close -- at least in vision -- to the same thing, moving from the desktop backward into the data center. But, alas, Microsoft was also an era too early.

Both Sun and IBM were seduced over the past 15 years by the interchangeable parts version of IT ... It's what Java is all about. Microsoft hated Java, never veered from their all-us-or-nothing mantle, which is now passing to Oracle. But Microsoft never had the heft in the core enterprise data center to pull it off. Oracle does.

Yes, Apple and Oracle have clearly learned well from their brethren. And the timing has never been better, with the recession a godsend.

So now as consumers, we have some big choices ... er, actually maybe we have a big buy-in, yes, but not too much in the way of choices. As a mainstream consumer and producer of media, I will really need to do business with Apple. Not much choice. Convenience across the content supply chain has become the killer app. And I love it all the way.

I want my MTV, my New York Times, my Mahler and my Mad Men. Apple gets it to me as I wish at an acceptable price. Case closed. The end device is not so important any more, be it big, medium or small, be it Mac or PC. Because of my full-bore consumer seduction, the producers of the content need to follow the gold Apple ring. Same for consumer applications and games, though they are all fundamentally content.

As an IT services buyer, Oracle is making a similar offer. Convenience is killer for IT managers too. Oracle, through its appliances, integrated stack, data ecosystem, tuned high-end hardware, business applications, business intelligence, and sales account heft, leaves me breathless. And taking a next breath will probably have an Oracle SLA attached. Whew!

Critical mass in the accounts that matter

Oracle is already irreplaceable in all -- and I mean all -- the major enterprise accounts. Oracle can substantially now reduce complexity across the IT infrastructure front, while seemingly cutting costs, apparently reducing risk. But a huge portion of the total savings goes into Oracle's pockets, making it stronger in more ways in more accounts for 20 years. Now they can take the lion's share of the profits in the IT as a service era. I call that dominance.

So let's hear it for the balancing acts still standing. Go IBM! Go Microsoft! Go Google! Go HP! Go SAP! How about Cisco and EMC? You all go for as long as you can, please. Or at least as long as it takes for the next IT and media eras to arrive. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

This handful of companies is about the only insurance policy against Apple and Oracle being able to price with impunity across vast markets that deeply affect us all.

Wednesday, January 27, 2010

Oracle's Sun Java strategy: Business as usual

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

In an otherwise pretty packed news day, we’d like to echo @mdl4’s sentiments about the respective importance of Apple’s and Oracle’s announcements: “Oracle finalized its purchase of Sun. Best thing to happen to Sun since Java. Also: I don’t give a sh#t about the iPad. I said it.”

There’s little new in observing that, on the platform side, Oracle’s acquisition of Sun is a means for turning the clock back to the days of turnkey systems in a post-appliance era. History truly has come full circle, as Oracle in its original database incarnation was one of the prime forces that helped decouple software from hardware.

Fast forward to the present, and customers are tired of complexity and just want things that work. Actually, that idea was responsible for the emergence of specialized appliances over the past decade for performing tasks ranging from SSL encryption/decryption to XML processing, firewalls, email, or specialized web databases.

The implication here is that the concept is elevated to enterprise level; instead of a specialized appliance, it’s your core instance of Oracle databases, middleware, or applications. And even there, it’s but a logical step forward from Oracle’s past practice of certifying specific configurations of its database on Sun (Sun was, and now has become again, Oracle’s reference development platform).

That’s in essence the argument for Oracle to latch onto a processor architecture that is overmatched by Intel’s investment in the x86 line. The argument could be raised that, in an era of growing interest in cloud, Oracle is fighting the last war. That would be the case -- except for the certainty that your data center has just as much chance of dying as your mainframe did.

Question of second source

At the end of the day, it’s inevitably a question of second source. Dana Gardner opines that Oracle will replace Microsoft as the hedge to IBM. Gordon Haff contends that alternate platform sources are balkanizing as Cisco/EMC/VMware butts their virtualized x86 head into the picture and customers look to private clouds the way they once idealized grids.

The highlight for us was what happens to Sun’s Java portfolio, and as it turns out, the results are not far from what we anticipated last spring: Oracle’s products remain the flagship offerings. From looking at respective market shares, it would be pretty crazy for Oracle to have done otherwise.

The general theme was that – yes – Sun’s portfolio will remain the “reference” technologies for the JCP standards, but that these are really only toys that developers should play with. When they get serious, they’re going to keep using WebLogic, not Glassfish. Ditto for:

• Java software development. You can play around with NetBeans, which Oracle’s middleware chief Thomas Kurian characterized as a “lightweight development environment,” but again, if you really want to develop enterprise-ready apps for the Oracle platform, you will still use JDeveloper, which of course is written for Oracle’s umbrella ADF framework that underlies its database, middleware, and applications offerings. That’s identical to Oracle’s existing posture with the old (mostly) BEA portfolio of Eclipse developer tools. Actually, the only thing that surprised us was that Oracle didn’t simply take NetBeans and set it free – as in donating it to Apache or some more obscure open source body.

• SOA, where Oracle’s SOA Suite remains front and center while Sun’s offerings go on maintenance.

We’re also not surprised at the prominent role of JavaFX in Oracle’s RIA plans; it fills a vacuum created when Oracle terminated BEA’s former arrangement to bundle Adobe Flash/Flex development tooling. In actuality, Oracle has become RIA agnostic, as ADF could support any of the frameworks for client display, but JavaFX provides a technology that Oracle can call its own.

There were some interesting distinctions with identity management and access, where Sun inherited some formidable technologies that, believe it or not, originated with Netscape. Oracle Identity management will grab some provisioning technology from the Sun stack, but otherwise Oracle’s suite will remain the core attraction. But Sun’s identity and access management won’t be put out to pasture, as it will be promoted for midsized web installations.

There are much bigger pieces to Oracle’s announcements, but we’ll finish with what becomes of MySQL. In short, there’s nothing surprising in the announcement that MySQL will be maintained in a separate open source business unit -- the EU would not have allowed otherwise. But we’ve never bought into the story that Oracle would kill MySQL. The two databases aim at different markets. Just about the only difference that Oracle’s ownership of MySQL makes -- besides reuniting it under the same corporate umbrella as the InnoDB data store -- is that MySQL won’t morph into an enterprise database. Then again, even if MySQL had remained independent, it arguably was never going to evolve into the same class of product as Oracle, because it would have lost its famed simplicity.

The more relevant question for MySQL is whether Oracle will fork development to favor Solaris on SPARC. This being open source, there would be nothing stopping the community from taking the law into its own hands.

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

Friday, January 22, 2010

The Christmas Day bomber, Moore’s Law, and enterprise IT's new challenges

This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.

By Jason Bloomberg

Amid the posturing and recriminations following this past December’s ill-fated terrorist attack by the alleged Nigerian Christmas bomber, the underlying cause of the intelligence breach has gone all but unnoticed.

How is it the global post-9/11 anti-terrorist machine could miss a lone Nigerian with explosives in his underwear? After all, chatter included reference to “the Nigerian,” his own father gave warning, he was on a terrorist watch list, and he purchased a one-way ticket to Detroit, paid cash, and checked no luggage. You’d think any one of these bits of information would set off alarms, and the fact that the intelligence community missed the lot is a sign of sheer incompetence, right?

Not so fast. Such a conclusion is actually fallacious. The missing piece of the puzzle is the fact that there are hundreds of thousands of monthly air travelers, and millions of weekly messages that constitute the chatter the intelligence community routinely follows. And that watch list? Hundreds of thousands of names, to be sure.

Furthermore, the quantity of information that agents must follow is increasing at an exponential rate. So, while it seems in retrospect that agents missed a huge red flag, in actuality there is so much noise that even the combination of warnings taken together was lost in a sea of noise. A dozen red flags, yes, but could you discern a dozen red grains of sand on a beach?

The true reason behind the intelligence breach is far more subtle than simple incompetence, and furthermore, the solution is just as difficult to discern. The most interesting part of this discussion from ZapThink’s perspective, naturally, is the implication for enterprise IT.

The global intelligence community is but one enterprise among many dealing with exponentially increasing quantities and complexity of information. All other enterprises, in the private as well as public sector, face similar challenges: As Moore’s Law and its corollaries proceed on their inexorable path, what happens when the human ability to deal with the resulting information overload falls short? How can you help your organization keep from getting lost in the noise?

The governance crisis point

Strictly speaking, Moore’s Law states that the number of transistors that current technology can cram onto a chip of a given size will increase exponentially over time. But the transistors on a chip are really only the tip of the iceberg; along with processing power we have exponential growth in hard drive capacity, network speed, and other related measures – what we’re calling corollaries to Moore’s Law. And of course, there’s also the all-important corollary to Murphy’s Law that states that the quantity of information available will naturally expand to fill all available space.

Anybody who remembers the wheat and chessboard problem knows that this explosion of information will lead to problems down the road. IT vendors, of course, have long seen this trend as a huge opportunity, and have risen to the occasion with tools to help organizations manage the burgeoning quantity of information. What vendors cannot do, however, is improve how people deal with this problem.
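The chessboard arithmetic is worth making concrete. A quick sketch, with the numbers purely illustrative of the doubling effect:

```python
# Wheat and chessboard: one grain on the first square, doubling on each
# of the chessboard's 64 squares. The total is 2**64 - 1 grains.
total = sum(2**square for square in range(64))
print(total)  # → 18446744073709551615
```

Even from a single grain, doubling quickly dwarfs any linear response -- which is exactly the trap the information explosion sets for human-scale management.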

Fundamentally, human capabilities at best grow linearly. Our brains, after all, are not subject to Moore’s Law, and even so, enterprises depend far more on the interactions among people than on the contributions of individuals taken separately. While the number of transistors may double every 18 months, our management, analysis, and other communication skills will only see gradual improvements at best.

This disconnect leads to what ZapThink calls the governance crisis point, as illustrated in the figure below.

The governance crisis point

The diagram above illustrates the fact that while the quantity and complexity of information in any enterprise grows exponentially, the human ability to deal with that information at best grows linearly. No matter where you put the two curves, eventually the one overtakes the other at the governance crisis point, leading to the “governance crisis point problem”: Eventually, human activities are unable to deal with the quantity and complexity of information.
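The crossing of the two curves can be sketched with a toy model. The starting values and growth rates below are invented for illustration; only the shape of the curves matters:

```python
# Toy model of the governance crisis point: information volume doubles
# every period (Moore's-Law-style growth) while human capacity to deal
# with it grows linearly. The crossing point is the crisis point.
def crisis_point(info_start=1.0, capacity_start=100.0, capacity_growth=5.0):
    """Return the first period in which information outstrips capacity."""
    info, capacity = info_start, capacity_start
    period = 0
    while info <= capacity:
        period += 1
        info *= 2                    # exponential: doubles each period
        capacity += capacity_growth  # linear: fixed increment each period
    return period

print(crisis_point())  # → 8
```

Shifting either curve -- a bigger starting capacity, faster training -- only delays the crossing by a few periods; it never prevents it, which is the point of the diagram.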

Unfortunately, no technology can solve this problem, because technology only affects the exponential curve. I’m sure today’s intelligence agents have state-of-the-art analysis tools, since after all, if they don’t have them, then who does? But the bomber was still able to get on the plane.

Furthermore, neither is the solution to this problem a purely human one. We’d clearly be fooling ourselves to think that if only we worked harder or smarter, we might be able to keep up. Equally foolish would be the assumption we might be able to slow down the exponential growth of information. Like it or not, this curve is an inexorable juggernaut.

SOA to the rescue?

Seeing as this article is from ZapThink, you might think that service-oriented architecture (SOA) is the answer to this problem. In fact, SOA plays a support role, but the core of the solution centers on governance, hence the name of the crisis point. Anyone who’s been through our Licensed ZapThink Architect course or our SOA & Cloud Governance course understands that the relationship between SOA and governance is a complex one, as SOA depends upon governance but also enables governance for the organization at large.

Just so with the governance crisis point problem: Neither technology nor human change will solve the problem, but a better approach to formalizing the interactions between people and technology give us a path to the solution. The starting point is to understand that governance involves creating, communicating, and enforcing policies that are important to an organization, and that those policies may be anywhere on a spectrum from human-centric to technology-centric. In the context of SOA, then, the first step is to represent certain policies as metadata, and incorporate those metadata in the organization’s governance framework.

In practice, the governance team sorts the policies within scope of the current project into those policies that are best handled by human interactions and those policies that lend themselves to automation. Representing the latter set of policies as metadata enables the SOA governance infrastructure to automate policy enforcement as well as other policy-based processes. Such policy representations alone, however, cannot solve the governance crisis point problem.
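The sorting step above can be sketched in code. All policy names and fields here are hypothetical, invented for illustration rather than drawn from any real governance product:

```python
# Hypothetical policy inventory: the governance team tags each policy as
# automatable or human-centric. Automatable policies become metadata the
# governance infrastructure can enforce; the rest stay with human processes.
policies = [
    {"name": "messages must validate against the published schema", "automatable": True},
    {"name": "service interface changes require architect review",  "automatable": False},
    {"name": "responses must be encrypted in transit",              "automatable": True},
]

as_metadata = [p["name"] for p in policies if p["automatable"]]
human_handled = [p["name"] for p in policies if not p["automatable"]]

print(len(as_metadata), len(human_handled))  # → 2 1
```

The automated partition is what scales with the technology curve; the human partition is what the metapolicy feedback loop must keep shrinking over time.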

The answer lies in how the governance team deals with policies, in other words, what are their policies regarding policies, or what ZapThink likes to call metapolicies. Working through the organization’s policies for dealing with governance, and automating those policies, gives the organization a “metapolicy feedback loop” approach to leveraging the power of technology to improve governance overall.

Catching terrorists and other IT management challenges

How this metapolicy feedback loop might help intelligence agents catch the next terrorist provides a simple illustration of how any enterprise might approach their own information explosion challenges. First, how do agents deal with information today? Basically, they have an information challenge, they implement tools to address that challenge, and they have policies for how to use those tools, as the expression below illustrates:

Information problem --> tools --> policies for using tools --> governance

Now, the challenge with the expression above is that it’s static; it doesn’t take into account the fact that the information problem explodes exponentially, while governance best practices grow linearly. As a result, eventually the quantity of information overwhelms the capabilities of the tools, leading to failures like the explosive in the underwear. Instead, here’s how the expression should work:

Information problem --> tools --> policies for using tools --> metapolicies for dealing with governance --> next-generation governance tools --> best practice approach for dealing with information problem over time

Essentially, the crisis point requires a new level of interaction between human activity and technology capability, a technology-enabled governance feedback loop that promises to enable any enterprise to deal with the information explosion, regardless of whether you’re catching terrorists or pleasing shareholders.

The ZapThink take

Okay, so just how does SOA fit into this story? Remember that as enterprise architecture, SOA consists of a set of best practices for organizing and leveraging IT resources to meet business needs, and the act of applying and enforcing such practices is what we mean by governance. Furthermore, SOA provides a best-practice approach for implementing governance, not just of the services that the SOA implementation supports, but for the organization as a whole.

In essence, SOA leads to a more formal approach to governance, where organizations are able to leverage technology to improve the creation, communication, and enforcement of policies across the board, including those policies that deal with how to automate such governance processes. In the intelligence example, SOA might help agents leverage technology to identify suspicious patterns more effectively by allowing them to craft increasingly sophisticated intelligence policies. In the general case, SOA can lead to more effective management decision making across large organizations.

There is, of course, more to this story. We’ve discussed the problem of too much information before, in our ZapFlash on Net-Centricity, for example. Technology progress leaving people behind is a common thread to all of ZapThink’s research.

If you’re struggling with your own information explosion issues, whether you’re in the intelligence community, the U.S. Department of Defense, or simply dealing with the day-to-day reality that is enterprise IT, drop us a line! Maybe we can help you prevent the next intelligence breach in your organization.

This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.


SPECIAL PARTNER OFFER

SOA and EA Training, Certification,
and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.

Monday, January 18, 2010

Technical and economic incentives mount for seeking alternatives to costly mainframe applications

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.


Gain more insights into "Application Transformation: Getting to the Bottom Line" via a series of regional HP virtual conferences:

Access the regional Asia Pacific conference, the EMEA conference, or the Americas event.

A growing number of technical and economic incentives make a strong case for modernizing and transforming enterprise mainframe applications -- and the aging infrastructure that supports them.

IT budget planners are using the strained economic environment to force a harder look at alternatives to inflexible and hard-to-manage legacy systems, especially as enterprises seek to cut their total and long-term IT operations spending.

The rationale around reducing total costs is also forcing a recognition of the intrinsic difference between core applications and so-called context -- context being applications that are there for commodity productivity reasons, not for core innovation, customization or differentiation.

With a commodity productivity application, the most effective delivery is on the lowest-cost platform or from an outside provider. The problem is that 20 or 30 years ago, people put everything on mainframes and wrote it all in custom code.

The challenge now is how to free up the applications that offer no differentiation -- and do not need to be on a mainframe -- so they can run on much lower-cost infrastructure, or come from a completely different means of delivery, such as software as a service (SaaS).

There are demonstrably much less expensive ways of delivering such plain vanilla applications and services, and significant financial rewards for separating the core from the context in legacy enterprise implementations.
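To make the core-versus-context economics concrete, here is a back-of-the-envelope sketch. All per-MIPS cost figures are hypothetical illustrations chosen for this example, not vendor pricing; only the workload size echoes the case studies discussed below.

```python
# Back-of-the-envelope model: annual savings from moving "context"
# (commodity) workloads off the mainframe while leaving "core"
# (differentiating) applications in place.
# NOTE: both cost-per-MIPS figures are hypothetical assumptions.

MAINFRAME_COST_PER_MIPS = 3000   # assumed annual cost per MIPS on the mainframe
MIGRATED_COST_PER_MIPS = 1000    # assumed annual cost per MIPS after re-hosting

def annual_savings(context_mips):
    """Savings from migrating only the commodity ('context') workload."""
    return context_mips * (MAINFRAME_COST_PER_MIPS - MIGRATED_COST_PER_MIPS)

# For a context workload of roughly 325 MIPS (the scale of the
# Software AG re-hosting case described in this discussion):
print(annual_savings(325))  # 650000
```

Even with modest assumptions, separating out a few hundred MIPS of commodity workload yields savings in the high six figures per year, which is consistent in order of magnitude with the seven-figure savings reported in the case studies.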

This discussion is the third and final in a series that examines "Application Transformation: Getting to the Bottom Line." The series coincides with a trio of Hewlett-Packard (HP) virtual conferences on the same subject.
Access the regional Asia Pacific conference, the EMEA conference, or the Americas event.
Helping to examine how alternatives to mainframe computing can work, we're joined by John Pickett, worldwide mainframe modernization program manager at HP; Les Wilson, Americas mainframe modernization director at HP; and Paul Evans, worldwide marketing lead on applications transformation at HP. The discussion is moderated by BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Evans: We have seen organizations doing a lot with their infrastructure, consolidating it, virtualizing it, all the right things. At the same time, a lot of CIOs or IT directors know that the legacy applications environment has been somewhat ignored.

Now, with the pressure on cost, people are saying, "We've got to do something, but what can come out of that and what is coming out of that?" People are looking at this and saying, "We need to accomplish two things. We need a longer term strategy. We need an operational plan that fits into that, supported by our annual budget."

Foremost is this desire to get away from this ridiculous backlog of application changes, to get more agility into the system, and to enable these core applications -- the ones that provide the differentiation and the innovation for organizations -- to communicate with a far more mobile workforce.

What people have to look at is where we're going strategically with our technology and our business alignment. At the same time, how can we have a short-term plan that starts delivering on some of the real benefits that people can get out there?

... These things have got to pay for themselves. An analyst recently looked me in the face and said, "People want to get off the mainframe. They understand now that the costs associated with it are just not supportable and are not necessary."

One of the sessions from our virtual conference features Geoffrey Moore, where he talks about this whole difference between core applications and context -- context being applications that are there for productivity reasons, not for innovation or differentiation.

Pickett: It's not really just about the overall cost, but it's also about agility, and being able to leverage the existing skills as well.

One of the case studies that I like is from the National Agricultural Cooperative Federation (NACF). It's a mouthful, but consider its scale: 5,500 branches and regional offices, making it essentially one of the largest banks in Korea.

One of the items that they were struggling with was how to overcome some of the technology and performance limitations of the platform that they had. Certainly, in the banking environment, high availability and making sure that the applications and the services are running were absolutely key.

At the same time, they also knew that the path to the future was going to be through the IT systems they had and were managing. What they ended up doing was modernizing their overall environment, essentially moving their core banking structure from their mainframe environment to a system running HP-UX. It included the customer and account information. They were able to integrate that with the sales and support piece, so they had more of a 360-degree view of the customer.

We talk about reducing costs. In this particular example, they were able to save $40 million on an annual basis. That's nice, and certainly saving that much money is significant, but, at the same time, they were able to improve their system response time two- to three-fold. So, it was a better response for the users.

But, from a business perspective, they were able to reduce their time to market. They were able to decrease the time to develop a new product or service from one month to five days.

Makes you more agile

If you are a bank and now you can produce a service much faster than your competition, that certainly makes it a lot easier and makes you a lot more agile. So, the agility is not just for the data center, it's for the business as well.

To take this story just a little bit further, they saw that in addition to the savings I just mentioned, they were able to triple the capacity of the systems in their environment. So, it's not only running faster and being able to have more capacity so you are set for the future, but you are also able to roll out business services a whole lot quicker than you were previously.

... Another example of what we were just talking about: if we shift to the Europe, Middle East, and Africa region, there is a very large insurance company in Spain. It ended up modernizing 14,000 MIPS. Even though the applications had been developed over a number of years and decades, they were able to make the transition in a relatively short time -- a three- to six-month time frame.

With that, they saw a 2x increase in their batch performance. It's recognized as one of the largest batch re-hosts out there. And it's not just an HP thing; they worked with Oracle as well to drive Oracle 11g within the environment.

Wilson: ... In the virtual conferences, there are also two particular customer case studies worth mentioning. We're seeing a tremendous amount of interest from some of the largest banks in the United States, insurance companies, and benefits management organizations, in particular.

In terms of customer situations, we've always had a very active business working with organizations in manufacturing, retail, and communications. One thing that I've perceived in the last year specifically -- it will come as no surprise to you -- is that financial institutions, and some of the largest ones in the world, are now approaching HP with questions about the commitment they have to their mainframe environments.

Second, maybe benefiting from some of the stimulus funds, a large number of government departments are approaching us as well. We've been very excited by customer interest in financial services and public sector.

The first case study is a project we recently completed at a wood and paper products company, a worldwide concern. In this particular instance we worked with their Americas division on a re-hosting project of applications that are written in the Software AG environment. I hope that many of the listeners will be familiar with the database ADABAS and the language, Natural. These applications were written some years ago, using those Software AG tools.

Demand was lowered

The user company had divested one of the major divisions within the company, and that meant that the demand for mainframe services was dramatically lowered. So, they chose to take the residual applications, the Software AG applications, representing about 300-350 MIPS, and migrate those in their current state, away from the mainframe, to an HP platform.

Many folks listening to this will understand that the Software AG environment can either be transformed and rewritten to run, say, in an Oracle or a Java environment, or we can maintain the customer's investment in the applications and simply migrate the ADABAS and Natural, almost as they are, from the mainframe to an alternative HP infrastructure. The latter is what we did.

By not needing to touch the mainframe code or the business rules, we were able to complete this project in a period of six months, from beginning to end. The user tells us that they are saving over $1 million today in avoiding the large costs associated with mainframe software, as well as maintenance and depreciation on the mainframe environment.

... The more monolithic approach to applications development and maintenance on the mainframe is a model that was probably appropriate in the days of the large conglomerates, where we saw a lot of companies trying to centralize all of that processing in large data centers. This consolidation made a lot of sense, when folks were looking for economies of scale in the mainframe world.

Today, we're seeing customers driving for a higher degree of agility. In fact, my second case study represents that concept in spades. This is a large multinational manufacturing concern. We will just refer to them as "a manufacturing company." They have a large number of businesses in their portfolio.

Our particular customer in this case study is the manufacturer of electronic appliances. One of the driving factors for their mainframe migration was ... to divest themselves from the large mainframe corporate environment, where most of the processing had been done for the last 20 years.

They wanted control of their own destiny to a certain extent, and they also wanted to prepare themselves for potential investment, divestment, and acquisition, just to make sure that they were masters of their own future.

Pickett: ... Just within the past few months, there was a survey by AFCOM, a group that represents data-center workers. It indicated that, over the next two years, 46 percent of the mainframe users said that they're considering replacing one or more of their mainframes.

Now, let that sink in -- 46 percent say they are considering replacing high-end systems over the next two years. That's a strikingly high number. So, it certainly points to a trend that we're seeing in that particular environment -- not a blip at all.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.


Gain more insights into "Application Transformation: Getting to the Bottom Line" via a series of regional HP virtual conferences:

Access the regional Asia Pacific conference, the EMEA conference, or the Americas event.