Friday, January 22, 2010

The Christmas Day bomber, Moore’s Law, and enterprise IT's new challenges

This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.

By Jason Bloomberg

Amid the posturing and recriminations following this past December’s ill-fated terrorist attack by the alleged Nigerian Christmas bomber, the underlying cause of the intelligence breach has gone all but unnoticed.

How is it the global post-9/11 anti-terrorist machine could miss a lone Nigerian with explosives in his underwear? After all, chatter included reference to “the Nigerian,” his own father gave warning, he was on a terrorist watch list, and he purchased a one-way ticket to Detroit, paid cash, and checked no luggage. You’d think any one of these bits of information would set off alarms, and the fact that the intelligence community missed the lot is a sign of sheer incompetence, right?

Not so fast. Such a conclusion is actually fallacious. The missing piece of the puzzle is the fact that there are hundreds of thousands of monthly air travelers, and millions of weekly messages that constitute the chatter the intelligence community routinely follows. And that watch list? Hundreds of thousands of names, to be sure.

Furthermore, the quantity of information that agents must follow is increasing at an exponential rate. So, while it seems in retrospect that agents missed a huge red flag, in actuality there is so much noise that even the combined warnings were drowned out. A dozen red flags, yes, but could you discern a dozen red grains of sand on a beach?

The true reason behind the intelligence breach is far more subtle than simple incompetence, and furthermore, the solution is just as difficult to discern. The most interesting part of this discussion from ZapThink’s perspective, naturally, is the implication for enterprise IT.

The global intelligence community is but one enterprise among many dealing with exponentially increasing quantities and complexity of information. All other enterprises, in the private as well as public sector, face similar challenges: As Moore’s Law and its corollaries proceed on their inexorable path, what happens when the human ability to deal with the resulting information overload falls short? How can you help your organization keep from getting lost in the noise?

The governance crisis point

Strictly speaking, Moore’s Law states that the number of transistors that current technology can cram onto a chip of a given size will increase exponentially over time. But the transistors on a chip are really only the tip of the iceberg; along with processing power we have exponential growth in hard drive capacity, network speed, and other related measures – what we’re calling corollaries to Moore’s Law. And of course, there’s also the all-important corollary to Murphy’s Law that states that the quantity of information available will naturally expand to fill all available space.

Anybody who remembers the wheat and chessboard problem knows that this explosion of information will lead to problems down the road. IT vendors, of course, have long seen this trend as a huge opportunity, and have risen to the occasion with tools to help organizations manage the burgeoning quantity of information. What vendors cannot do, however, is improve how people deal with this problem.

Fundamentally, human capabilities at best grow linearly. Our brains, after all, are not subject to Moore’s Law, and even so, enterprises depend far more on the interactions among people than on the contributions of individuals taken separately. While the number of transistors may double every 18 months, our management, analysis, and other communication skills will only see gradual improvements at best.

This disconnect leads to what ZapThink calls the governance crisis point, as illustrated in the figure below.

[Figure: The governance crisis point]

The diagram above illustrates that while the quantity and complexity of information in any enterprise grow exponentially, the human ability to deal with that information at best grows linearly. No matter where you place the two curves, the exponential one eventually overtakes the linear one. That crossing is the governance crisis point: the moment when human activities can no longer cope with the quantity and complexity of information.
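To see how little breathing room a head start buys, consider a quick back-of-the-envelope sketch in Python (the numbers are invented for illustration, not ZapThink data): give human capacity a 10x, 100x, or 1,000x head start over the information load, let the load grow Moore's-Law-style by doubling every 18 months, and compute when the curves cross.

# Toy illustration (invented numbers): an exponential curve always
# overtakes a linear one, no matter how large the head start.
def crossover_year(head_start, doubling_period_years=1.5):
    """Return the first year in which an exponentially growing information
    load exceeds a linearly growing human capacity."""
    info, capacity = 1.0, float(head_start)
    year = 0
    while info <= capacity:
        year += 1
        info *= 2 ** (1 / doubling_period_years)  # doubles every 18 months
        capacity = head_start + year              # capacity adds one unit per year
    return year

for start in (10, 100, 1000):
    print(f"head start {start:>4}x -> crisis point in year {crossover_year(start)}")

With these toy numbers, each hundredfold increase in head start delays the crisis point by only about eight years; no head start avoids the crossing.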

Unfortunately, no technology can solve this problem, because technology only affects the exponential curve. I’m sure today’s intelligence agents have state-of-the-art analysis tools, since after all, if they don’t have them, then who does? But the bomber was still able to get on the plane.

Furthermore, the solution to this problem is not a purely human one, either. We'd clearly be fooling ourselves to think that if only we worked harder or smarter, we might be able to keep up. Equally foolish would be the assumption that we might be able to slow the exponential growth of information. Like it or not, this curve is an inexorable juggernaut.

SOA to the rescue?

Seeing as this article is from ZapThink, you might think that service-oriented architecture (SOA) is the answer to this problem. In fact, SOA plays a supporting role, but the core of the solution centers on governance, hence the name of the crisis point. Anyone who's been through our Licensed ZapThink Architect course or our SOA & Cloud Governance course understands that the relationship between SOA and governance is a complex one: SOA depends upon governance but also enables governance for the organization at large.

Just so with the governance crisis point problem: Neither technology nor human change alone will solve it, but a better approach to formalizing the interactions between people and technology gives us a path to the solution. The starting point is to understand that governance involves creating, communicating, and enforcing the policies that are important to an organization, and that those policies may fall anywhere on a spectrum from human-centric to technology-centric. In the context of SOA, then, the first step is to represent certain policies as metadata, and incorporate that metadata into the organization's governance framework.

In practice, the governance team sorts the policies within the scope of the current project into those that are best handled by human interactions and those that lend themselves to automation. Representing the latter set of policies as metadata enables the SOA governance infrastructure to automate policy enforcement as well as other policy-based processes. Such policy representations alone, however, cannot solve the governance crisis point problem.
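As a concrete illustration of what "policies as metadata" might look like, here is a minimal Python sketch. The policy names, fields, and checks are all invented for illustration; real SOA governance suites use richer policy languages (WS-Policy, for example), but the shape of the idea is the same: automatable policies become data the infrastructure can enforce, while judgment calls stay on the human list.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Policy:
    """A governance policy captured as metadata (fields invented for illustration)."""
    name: str
    description: str
    enforcement: str                                 # "human" or "automated"
    check: Optional[Callable[[dict], bool]] = None   # machine check, if automatable

policies = [
    Policy("svc-must-log", "Every service call must carry an audit ID",
           enforcement="automated",
           check=lambda msg: "audit_id" in msg),
    Policy("arch-review", "New services require an architecture review",
           enforcement="human"),                     # judgment call: left to people
]

# The governance infrastructure can enforce only the automated subset.
automated = [p for p in policies if p.enforcement == "automated"]

def violations(msg: dict) -> list[str]:
    """Return the names of automated policies the message violates."""
    return [p.name for p in automated if p.check and not p.check(msg)]

print(violations({"payload": "order-123"}))  # -> ['svc-must-log']

The payoff of the split is that the automated list can keep growing without adding headcount, while the human list stays within what people can actually review.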

The answer lies in how the governance team deals with policies themselves: in other words, their policies regarding policies, or what ZapThink likes to call metapolicies. Working through the organization's policies for dealing with governance, and automating those policies, gives the organization a "metapolicy feedback loop" that leverages the power of technology to improve governance overall.

Catching terrorists and other IT management challenges

How this metapolicy feedback loop might help intelligence agents catch the next terrorist provides a simple illustration of how any enterprise might approach their own information explosion challenges. First, how do agents deal with information today? Basically, they have an information challenge, they implement tools to address that challenge, and they have policies for how to use those tools, as the expression below illustrates:

Information problem --> tools --> policies for using tools --> governance

Now, the challenge with the expression above is that it’s static; it doesn’t take into account the fact that the information problem explodes exponentially, while governance best practices grow linearly. As a result, eventually the quantity of information overwhelms the capabilities of the tools, leading to failures like the explosive in the underwear. Instead, here’s how the expression should work:

Information problem --> tools --> policies for using tools --> metapolicies for dealing with governance --> next-generation governance tools --> best practice approach for dealing with information problem over time
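Here is one way to picture the difference in code, a minimal sketch with invented thresholds and volumes: a screening policy flags travelers whose risk score clears a threshold, and a metapolicy watches whether the analysts reviewing those flags can keep up, re-tuning the policy as volume grows.

# Minimal sketch of the metapolicy feedback loop (all numbers invented).
def metapolicy(threshold, flagged, review_capacity):
    """A policy about the policy: tighten the screening threshold when
    analysts are overwhelmed, loosen it when they have slack."""
    if flagged > review_capacity:
        return threshold + 1           # overwhelmed: raise the bar
    if flagged < 0.5 * review_capacity:
        return max(1, threshold - 1)   # slack: lower it, catch weaker signals
    return threshold

threshold, capacity = 5, 100           # analysts can review 100 flags per month
for month, volume in enumerate([1_000, 2_000, 4_000, 8_000], start=1):
    # Pretend a fixed fraction of travelers clears any given threshold,
    # so doubling the volume doubles the flags unless the policy adapts.
    flagged = volume // 2 ** threshold
    new_threshold = metapolicy(threshold, flagged, capacity)
    print(f"month {month}: volume={volume}, flagged={flagged}, "
          f"threshold {threshold} -> {new_threshold}")
    threshold = new_threshold

The static expression has no such step: its threshold is set once and is eventually swamped. The metapolicy is what keeps the tools tracking the exponential curve instead of falling behind it.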

Essentially, the crisis point requires a new level of interaction between human activity and technology capability, a technology-enabled governance feedback loop that promises to enable any enterprise to deal with the information explosion, regardless of whether you’re catching terrorists or pleasing shareholders.

The ZapThink take

Okay, so just how does SOA fit into this story? Remember that as enterprise architecture, SOA consists of a set of best practices for organizing and leveraging IT resources to meet business needs, and the act of applying and enforcing such practices is what we mean by governance. Furthermore, SOA provides a best-practice approach for implementing governance, not just of the services that the SOA implementation supports, but for the organization as a whole.

In essence, SOA leads to a more formal approach to governance, where organizations are able to leverage technology to improve the creation, communication, and enforcement of policies across the board, including those policies that deal with how to automate such governance processes. In the intelligence example, SOA might help agents leverage technology to identify suspicious patterns more effectively by allowing them to craft increasingly sophisticated intelligence policies. In the general case, SOA can lead to more effective management decision making across large organizations.
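To make that concrete, here is a hypothetical sketch of such an intelligence policy, expressed declaratively so the governance loop can refine it over time. The signals and weights are invented for illustration, but they echo the Christmas Day case: each signal alone is a grain of sand on the beach, yet combined they clear the flagging threshold.

# Hypothetical sketch: individually weak signals, declaratively combined.
SIGNALS = {                        # weights invented for illustration
    "on_watch_list":    3,
    "one_way_ticket":   1,
    "paid_cash":        1,
    "no_luggage":       1,
    "named_in_chatter": 2,
}
FLAG_THRESHOLD = 5                 # flag for human review at or above this score

def risk_score(traveler: dict) -> int:
    """Sum the weights of every signal the traveler exhibits."""
    return sum(weight for signal, weight in SIGNALS.items() if traveler.get(signal))

# No single alarm sounds, but the combination scores high.
traveler = {"on_watch_list": True, "one_way_ticket": True,
            "paid_cash": True, "no_luggage": True}
score = risk_score(traveler)       # 3 + 1 + 1 + 1 = 6
print(score, "-> flag" if score >= FLAG_THRESHOLD else "-> pass")

Because the policy is data rather than hard-wired logic, a metapolicy process can re-weight or extend it without rebuilding the tools, which is exactly the kind of increasingly sophisticated policy crafting described above.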

There is, of course, more to this story. We’ve discussed the problem of too much information before, in our ZapFlash on Net-Centricity, for example. Technology progress leaving people behind is a common thread to all of ZapThink’s research.

If you’re struggling with your own information explosion issues, whether you’re in the intelligence community, the U.S. Department of Defense, or simply struggling with the day-to-day reality that is enterprise IT, drop us a line! Maybe we can help you prevent your next intelligence breach in your organization.

This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.


SPECIAL PARTNER OFFER

SOA and EA Training, Certification,
and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.

Monday, January 18, 2010

Technical and economic incentives mount for seeking alternatives to costly mainframe applications

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.


Gain more insights into "Application Transformation: Getting to the Bottom Line" via a series of regional HP virtual conferences:

Access the regional Asia Pacific conference, the EMEA conference, or the Americas event.

A growing number of technical and economic incentives make a strong case for modernizing and transforming enterprise mainframe applications -- and the aging infrastructure that supports them.

IT budget planners are using the harsh economic environment to force a harder look at alternatives to inflexible and hard-to-manage legacy systems, especially as enterprises seek to cut their total and long-term IT operations spending.

The rationale around reducing total costs is also forcing a recognition of the intrinsic difference between core applications and so-called context -- context being applications that are there for commodity productivity reasons, not for core innovation, customization or differentiation.

With a commodity productivity application, the most effective delivery is on the lowest-cost platform, or from an outside provider. The problem is that 20 or 30 years ago, people put everything on mainframes, and wrote it all in custom code.

The challenge now is how to free up the applications that offer no differentiation -- and do not need to be on a mainframe -- so they can run on much lower-cost infrastructure, or come from a completely different delivery model, such as software as a service (SaaS).

There are demonstrably much less expensive ways of delivering such plain vanilla applications and services, and significant financial rewards for separating the core from the context in legacy enterprise implementations.

This discussion is the third and final in a series that examines "Application Transformation: Getting to the Bottom Line." The series coincides with a trio of Hewlett-Packard (HP) virtual conferences on the same subject.
Access the regional Asia Pacific conference, the EMEA conference, or the Americas event.
Helping to examine how alternatives to mainframe computing can work, we're joined by John Pickett, worldwide mainframe modernization program manager at HP; Les Wilson, Americas mainframe modernization director at HP; and Paul Evans, worldwide marketing lead on applications transformation at HP. The discussion is moderated by BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Evans: We have seen organizations doing a lot with their infrastructure, consolidating it, virtualizing it, all the right things. At the same time, a lot of CIOs or IT directors know that the legacy applications environment has been somewhat ignored.

Now, with the pressure on cost, people are saying, "We've got to do something, but what can come out of that and what is coming out of that?" People are looking at this and saying, "We need to accomplish two things. We need a longer term strategy. We need an operational plan that fits into that, supported by our annual budget."

Foremost is the desire to get away from the ridiculous backlog of application changes, to get more agility into the system, and to make these core applications, the ones that provide the differentiation and the innovation for organizations, able to communicate with a far more mobile workforce.

What people have to look at is where we're going strategically with our technology and our business alignment. At the same time, how can we have a short-term plan that starts delivering on some of the real benefits that people can get out there?

... These things have got to pay for themselves. An analyst recently looked me in the face and said, "People want to get off the mainframe. They understand now that the costs associated with it are just not supportable and are not necessary."

One of the sessions from our virtual conference features Geoffrey Moore, where he talks about this whole difference between core applications and context -- context being applications that are there for productivity reasons, not for innovation or differentiation.

Pickett: It's not really just about the overall cost, but it's also about agility, and being able to leverage the existing skills as well.

One of the case studies that I like is from the National Agricultural Cooperative Federation (NACF). It's a mouthful, but take a look at the number of banks that the NACF has. It has 5,500 branches and regional offices, so essentially it's one of the largest banks in Korea.

One of the items that they were struggling with was how to overcome some of the technology and performance limitations of the platform that they had. Certainly, in the banking environment, high availability and making sure that the applications and the services are running were absolutely key.

At the same time, they also knew that the path to the future was going to be through the IT systems that they had and they were managing. What they ended up doing was modernizing their overall environment, essentially moving their core banking structure from their current mainframe environment to a system running HP-UX. It included the customer and account information. They were able to integrate that with the sales and support piece, so they had more of a 360 degree view of the customer.

We talk about reducing costs. In this particular example, they were able to save $40 million on an annual basis. That's nice, and certainly saving that much money is significant, but, at the same time, they were able to improve their system response time two- to three-fold. So, it was a better response for the users.

But, from a business perspective, they were also able to reduce their time to market: the time to develop a new product or service dropped from one month to five days.

Makes you more agile

If you are a bank and you can now produce a service much faster than your competition, that certainly makes life a lot easier and makes you a lot more agile. So, the agility is not just for the data center; it's for the business as well.

To take this story just a little bit further, they saw that in addition to the savings I just mentioned, they were able to triple the capacity of the systems in their environment. So, it's not only running faster and being able to have more capacity so you are set for the future, but you are also able to roll out business services a whole lot quicker than you were previously.

... Another example of what we were just talking about: if we shift to the Europe, Middle East, and Africa region, there is a very large insurance company in Spain that ended up modernizing 14,000 MIPS. Even though the applications had been developed over a number of years and decades, they were able to make the transition in a relatively short time, a three- to six-month time frame.

With that, they saw a 2x increase in their batch performance. It's recognized as one of the largest batch re-hosts out there. And it's not just an HP thing: they worked with Oracle as well, to be able to drive Oracle 11g within the environment.

Wilson: ... In the virtual conferences, there are also two particular customer case studies worth mentioning.

In terms of customer situations, we've always had a very active business working with organizations in manufacturing, retail, and communications. One thing that I've perceived in the last year specifically -- it will come as no surprise to you -- is that financial institutions, and some of the largest ones in the world, are now approaching HP with questions about the commitment they have to their mainframe environments.

We're seeing a tremendous amount of interest from some of the largest banks in the United States, insurance companies, and benefits management organizations, in particular.

Second, maybe benefiting from some of the stimulus funds, a large number of government departments are approaching us as well. We've been very excited by customer interest in financial services and public sector.

The first case study is a project we recently completed at a wood and paper products company, a worldwide concern. In this particular instance we worked with their Americas division on a re-hosting project for applications written in the Software AG environment. I hope that many of the listeners will be familiar with the database ADABAS and the language Natural. These applications were written some years ago, using those Software AG tools.

Demand was lowered

The user company had divested one of the major divisions within the company, and that meant that the demand for mainframe services was dramatically lowered. So, they chose to take the residual applications, the Software AG applications, representing about 300-350 MIPS, and migrate those in their current state, away from the mainframe, to an HP platform.

Many folks listening to this will understand that the Software AG environment can either be transformed and rewritten to run, say, in an Oracle or a Java environment, or we can maintain the customer's investment in the applications and simply migrate the ADABAS and Natural, almost as they are, from the mainframe to an alternative HP infrastructure. The latter is what we did.

By not needing to touch the mainframe code or the business rules, we were able to complete this project in a period of six months, from beginning to end. The user tells us that they are saving over $1 million today in avoiding the large costs associated with mainframe software, as well as maintenance and depreciation on the mainframe environment.

... The more monolithic approach to applications development and maintenance on the mainframe is a model that was probably appropriate in the days of the large conglomerates, where we saw a lot of companies trying to centralize all of that processing in large data centers. This consolidation made a lot of sense, when folks were looking for economies of scale in the mainframe world.

Today, we're seeing customers driving for a higher degree of agility. In fact, my second case study represents that concept in spades. This is a large multinational manufacturing concern. We will just refer to them as "a manufacturing company." They have a large number of businesses in their portfolio.

Our particular customer in this case study is the manufacturer of electronic appliances. One of the driving factors for their mainframe migration was ... to divest themselves from the large mainframe corporate environment, where most of the processing had been done for the last 20 years.

They wanted control of their own destiny to a certain extent, and they also wanted to prepare themselves for potential investment, divestment, and acquisition, just to make sure that they were masters of their own future.

Pickett: ... Just within the past few months, there was a survey by AFCOM, a group that represents data-center workers. It indicated that 46 percent of mainframe users said they're considering replacing one or more of their mainframes over the next two years.

Now, let that sink in -- 46 percent say they're considering replacing high-end systems over the next two years. That's an astoundingly high number. So, it certainly points to a trend that we are seeing in that particular environment -- not a blip at all.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.


Gain more insights into "Application Transformation: Getting to the Bottom Line" via a series of regional HP virtual conferences:

Access the regional Asia Pacific conference, the EMEA conference, or the Americas event.