Saturday, January 30, 2010

Time to give server virtualization's twin, storage virtualization, a top place at the IT efficiency table

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.

The latest BriefingsDirect podcast discussion homes in on storage virtualization. You've heard a lot about server virtualization over the past few years, and many enterprises have adopted virtual servers to better manage runtime workloads, raise utilization rates, and cut total costs.

But, as a sibling to server virtualization, storage virtualization has some strong benefits of its own, not the least of which is the ability to better support server virtualization and make it more successful.

We'll look at how storage virtualization works, where it fits in, and why it makes a lot of sense. The cost savings metrics alone caught me by surprise, making me question why we haven't been talking about storage and server virtualization efforts in the same breath over these past several years.

Here to help understand how to better take advantage of storage virtualization, we're joined by Mike Koponen, HP's StorageWorks Worldwide Solutions marketing manager. The discussion is moderated by BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Koponen: Storage requirements aren't letting up, driven by regulatory demands, expansion, 24x7 business environments, and the explosion of multimedia. Storage growth is certainly not stopping because of a slowed-down economy.

So enterprises need to boost efficiencies from their existing assets as well as the future assets they're going to acquire and then to look for ways to cut capital and operating expenditures. That's really where storage virtualization fits in.

We found that a lot of businesses may have as little as 20 percent utilization of their storage capacity. By going to storage virtualization, they can get a 300 percent increase in existing storage asset utilization, depending upon how it's implemented.

So storage virtualization is a way to increase asset utilization. It's also a way to save on administrative cost and to improve operational efficiencies, as businesses deal with ever-increasing storage requirements. In fact, if businesses don't reevaluate their storage infrastructures at the same time as they're reevaluating their server infrastructures, they won't realize the full potential of server virtualization.

In the past, customers would just continue to deploy servers with direct-attached storage (DAS). All of a sudden, they ended up with silos or islands of storage that were more complex to manage and didn't have the agility that you would need to shift storage resources around from application to application.

Then, people moved into deploying networked or shared storage -- storage area networks (SANs) or network-attached storage (NAS) systems -- and realized a gain in efficiency from that. But the same thing can happen again: you can end up with islands of SAN systems or NAS systems. Then, to bump things up to the next level of asset utilization, network storage virtualization comes into play.

Now, you can pool all those heterogeneous systems under one common management environment to make it easy to manage and provision these islands of storage that you wound up with.
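
To make that pooling idea concrete, here is a minimal, hypothetical sketch in Python (the applications, capacities, and headroom figures are invented for illustration; they are not HP or IDC data). It shows why siloed, direct-attached storage tends to sit largely idle, while a virtualized pool that shares one growth buffer across applications runs at much higher utilization:

# Toy model of storage pooling -- illustrative only, with assumed numbers.
# Each application needs headroom for growth. With direct-attached silos,
# every silo carries its own headroom; with a virtualized pool, that
# headroom is shared, so far less raw capacity sits idle.

apps = {"mail": 2.0, "erp": 4.0, "web": 1.0, "bi": 3.0}   # TB actually used
headroom_tb = 8.0                                          # growth buffer

siloed_capacity = sum(used + headroom_tb for used in apps.values())
pooled_capacity = sum(apps.values()) + headroom_tb         # one shared buffer

used = sum(apps.values())
print(f"siloed utilization: {used / siloed_capacity:.0%}")   # roughly 24%
print(f"pooled utilization: {used / pooled_capacity:.0%}")   # roughly 56%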

Studies show swift payback

A white paper recently done by IDC focuses on the business value of storage virtualization. It looked at a number of factors -- reduced IT labor, reduced hardware and software cost, reduced infrastructure cost, and user productivity improvements. Virtualized storage showed a payback period of anywhere from four to six months, based on the type of virtualized storage that was being deployed.

There are different needs or requirements that drive the use of storage virtualization and also different benefits. It may be flexible allocation of tiered storage, so you can move data to different tiers of storage based upon its importance and upon how fast you want to access it. You can take less business-critical information that you need to access less frequently and put it on lower cost storage.
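
As a rough illustration of that tiering policy, here is a hypothetical sketch (the tier names, thresholds, and business-critical flag are assumptions, not a specific HP feature). Data is assigned to a tier based on how critical it is and how recently it was accessed:

import time

# Hypothetical tiering policy: hot or critical data goes on fast, expensive
# storage; cold data moves to lower-cost capacity storage. Thresholds are
# assumptions chosen for illustration.
DAY = 86400

def pick_tier(last_access_epoch, business_critical):
    age_days = (time.time() - last_access_epoch) / DAY
    if business_critical or age_days < 7:
        return "tier-1 (fast, higher-cost storage)"
    if age_days < 90:
        return "tier-2 (midrange storage)"
    return "tier-3 (low-cost archive storage)"

print(pick_tier(time.time() - 2 * DAY, business_critical=False))    # tier-1
print(pick_tier(time.time() - 400 * DAY, business_critical=False))  # tier-3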

The other might be that you just need more efficient snapshotting and replication to provide the right degree of data protection for your business. It's a function of understanding what the top business needs are and then finding the right type of storage virtualization that matches them.

In order to take advantage of the advanced capabilities of server virtualization, such as live migration of virtual machines and high-availability infrastructures, advanced server virtualization requires some form of shared storage.

So, in some sense, it's a base requirement that you need shared storage. But, what we've experienced is that, when you do server virtualization, it places some unique requirements on your storage infrastructure in terms of high availability and performance loads.

Server virtualization drives the creation of more data from the standpoint of more snapshots, more replicas, and things like that. So, you can quickly consume a lot of storage, if you don't have an efficient storage management scheme in place.

And, there's manageability too. Virtual server environments are extremely flexible, and it's much easier to deploy new applications. You need a storage infrastructure that is just as easy to manage, so that you can provision new storage as quickly as you can provision new servers.

As a result, you get an increased degree of data protection, because you can meet your backup windows without compromising the amount of information you back up. When you do server virtualization, you're reducing the number of physical servers and running more virtual machines on top of that reduced number, so you may be trying to squeeze the same number of backups through fewer physical servers.

With a virtualized storage environment, you can still achieve the volume of backups you need within a shorter window, and so you end up with that higher degree of data protection.

From an HP portfolio standpoint, we have some innovative products like the HP LeftHand SAN system, which is based on a clustered storage architecture where data is striped across the arrays in the cluster. If a single array in the cluster goes down, the volume is still online and available to your virtual server environment, so a high degree of application availability is maintained.
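
A minimal sketch of that clustering concept follows (a toy illustration of mirrored block placement across nodes, sometimes called network RAID; it is not LeftHand's actual implementation). Each block is written to two nodes, so the loss of any single node still leaves every block readable and the volume online:

# Toy "network RAID" sketch: every block is placed on two different nodes in
# the cluster, so a read succeeds even if any one node is down.
NODES = ["node-a", "node-b", "node-c", "node-d"]

def placements(block_id, copies=2):
    start = block_id % len(NODES)
    return [NODES[(start + i) % len(NODES)] for i in range(copies)]

def read_block(block_id, failed=frozenset()):
    for node in placements(block_id):
        if node not in failed:
            return f"block {block_id} served from {node}"
    raise IOError("block unavailable")

print(read_block(7))                      # normal read
print(read_block(7, failed={"node-d"}))   # one node down, volume still online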

For people who want to learn more about storage virtualization and what HP has to offer to improve their business returns, I suggest they go to www.hp.com/go/storagevirtualization. There they can learn about the different types of storage virtualization technologies available. There are also some assets on that website to help them justify putting storage virtualization in place within their companies.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.

Friday, January 29, 2010

Security skills provide top draw across still challenging U.S. IT jobs landscape

Listen to the podcast. Read a full transcript or download a copy. Find it on iTunes/iPod and Podcast.com. Learn more. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.

Gain additional data and analysis from Foote Partners on the IT jobs market.

The latest BriefingsDirect Analyst Insights Edition, Volume 48, centers on the IT job landscape for 2010. We interview David Foote, CEO and chief research officer, as well as co-founder, at Foote Partners LLC of Vero Beach, Fla.

David closely tracks the hiring and human resources trends across the IT landscape. He'll share his findings of where the recession has taken IT hiring and where the recovery will shape up. We'll also look at what skills are going to be in demand and which ones are not. David will help those in IT, or those seeking to enter IT, identify where the new job opportunities lie.

This periodic discussion and dissection of IT infrastructure related news and events, with a panel of industry analysts and guests, comes to you with the help of our charter sponsor, Active Endpoints, maker of the ActiveVOS business process management system, and through the support of TIBCO Software. I'm your host and moderator Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
I co-founded this company with a former senior partner at McKinsey. We developed a number of products and brought them to market in 1997. We not only have that big IT executive and trends focus as analysts, but also very much a business focus.

We've also populated this company with people from the HR industry, because one of the products we are best known for is the tracking of pay and demand for IT salaries and skills.

We have a proprietary database -- which I'll be drawing from today -- of about 2,000 companies in the U.S. and Canada. It covers about 95,000 IT workers. We use this base to monitor trends and to collect information about compensation and attitudes and what executives are thinking about as they manage IT departments.

For many years, IT people were basically people with deep technical skills in a lot of areas of infrastructure, systems, network, and communications. Then, the Internet happened.

All of a sudden, huge chunks of the budget in IT moved into lines of business. That opened the door for a lot of IT talent that wasn't simply defined as technical, but also customer facing and with knowledge of the business, the industry, and solutions. We've been seeing a maturation of that all along.

What's happened in the last three years is that, when we talk about workforce issues and trends, the currency in IT is much more skills versus jobs, and part of what's inched that along has been outsourcing.

If you need to get something done, you can certainly purchase that and hire people full-time or you can rent it by going anywhere in the world, Vietnam, Southeast Asia, India, or many other places. Essentially, you are just purchasing a market basket of skills. Or, these days, you can give it over to somebody, and by that I mean managed services, which is the new form of what has been traditionally called outsourcing.

It's not so much about hiring, but about how we determine what skills we need, how we find them, and how we execute. What's really happened in the last two or three years is that the speed at which decisions are made and then implemented has gotten to the point where you have to make decisions in a matter of days and weeks, not months.

Resisting the temptation

There have been some interesting behaviors during this recession that I haven't seen in prior recessions. That leads me to believe that people have really resisted the temptation to reduce cost at the expense of what the organization will look like in 2011 or 2012, when we are past this recession and are back to business as usual.

People have learned something. That's been a big difference in the last three years. ... Unemployment in IT is usually half of what it is in the general job market, if you look at Bureau of Labor Statistics (BLS) numbers. I can tell you right now that jobs, in terms of unemployment in IT, have really stabilized.

In the last three months [of 2009] there was a net gain of 11,200 jobs in these five [IT] categories. If you look at the previous eight months, prior to September, there was a loss of 31,000 jobs.


So going into 2010, the services industry will absolutely be looking for talent. There's going to be probably a greater need for consultants, and companies looking for help in a lot of the execution. That's because there are still a lot of hiring restrictions out there right now. Companies simply cannot go to the market to find bodies, even if they wanted to.

Companies are still very nervous about hiring, or to put it this way, investing in full-time talent, when the overhead on a full-time worker is usually 80-100 percent of their salaries. If they can find that talent somewhere else, they are going to hire it.

There are certain areas, security for example, where there is a tendency not to want to go outside for talent, because it's too important to the company. There are certain legacy skills that are important, but in terms of things like security, a lot of the managed services purchased in 2009 were bought by small- to medium-sized companies that simply don't have big IT staffs.

If you have 5,000, 6,000, or 7,000 people working in IT, you're probably going to do a lot of your own security, but small and medium-sized companies have not, and that's an extremely hot area to be working in right now.

We track the value of skills and premium pay for skills, and the only segment of IT that has actually gained value since the recession started in 2007 is security, and the gain has been progressive. We haven't seen a downturn in its value in a single quarter.

High demand for security certification

Since 2007, when this recession started, the overall market value of security certifications is up 3 percent. If you look at all 200 certified skills that we track in our survey of 406 skills, skills overall have dropped about 6.5 percent in value, but security certifications are up 2.9 percent.

It is a tremendous place to be right now. We've asked people exactly what skills they're hiring, and they have given us this list: forensics, identity and access management, intrusion detection and prevention systems, disk file-level encryption solutions, including removable media, data leakage prevention, biometrics, web content filters, VoIP security, some application security, particularly in small to medium sized companies (SMBs), and governance, compliance, and audit, of course.

The public sector has been on a real tear. In our work, we get a lot of privileged information, and one of the things that we have heard from a number of sources -- I can't tell you the reason why -- is that the National Security Agency and Homeland Security are doing a lot of recruiting from the private sector right now, for in-the-trenches people.

I think there was a feeling that there wasn't enough really deep technical, in-the-trenches talent in security. There were a lot of policy people, but not enough hands-on people. Because of the Cyber Security Initiative, particularly under the current administration, there has been a lot of hiring.

Managed services look like one of the hottest areas right now, especially in networking and communications: Metro Ethernet, VPNs, IP voice, and wireless security. If you look at the wireless security market right now, it's a $9 billion market in Europe and a $5.7 billion market in Asia-Pacific, but in North America it's between $4 billion and $5 billion.

There's a lot of activity in wireless security. We have to go right down into every one of these segments. I could give you an idea of where the growth is spurting right now. North America is not leading a lot of this. Other parts of the world are leading this, which gives our companies opportunities to play in those markets as well.

For many years, as you know, Dana, it was everybody taking on America, but now America is taking on the rest of the world. They're looking at opportunities abroad, and that’s had a bigger impact on labor as well. If you're building products and forming alliances and partnerships with companies abroad, you're using their talent and you're using your talent in their countries. There is this global labor arbitrage, global workforce, that companies have right now, and not just the North American workforce.
Listen to the podcast. Read a full transcript or download a copy. Find it on iTunes/iPod and Podcast.com. Learn more. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.

Gain additional data and analysis from Foote Partners on the IT jobs market.

Apple and Oracle on way to do what IBM and Microsoft could not: Dominate entire markets

I was a bit distracted from the Apple iPad news due to the marathon Oracle conference Wednesday on its shiny new Sun Microsystems acquisition.

But the more I thought about it, the more these two companies are extremely well positioned to actually fulfill what other powerful companies tried to do and failed. Apple and Oracle may be unstoppable in their burgeoning power to dominate the collection of profits across vast and essential markets for decades.

Apple is well on the way to dominating the way that multimedia content is priced and distributed, perhaps unlike any company since Hearst in its 1920s heyday. Apple is not killing the old to usher in the new, as Google is. Apple is rescuing the old media models with a viable online direct payment model. Then it will take all the real dough.

The iPad is a red herring, almost certainly a loss leader, like Apple TV. The real business is brokering a critical mass of music, spoken word, movies, TV, books, magazines, and newspapers. All the digital content that's fit to access. The iPad simply helps convince the producers and consumers to take the iTunes and App Store model into the domain of the formerly printed word. It should work, too.

Oracle is on its way to becoming the one-stop shop for mission-critical enterprise IT ... as a service. IT can come as an Oracle-provided service, from soup to nuts, applications to silicon. The "service" is that you only need to go to Oracle, and that the stuff actually works well. Just leave the driving to Oracle. It should work, too.

This is a mighty attractive bid right now to a lot of corporations. The in-house suppliers of raw compute infrastructure resources are caught in a huge, decades-in-the-making vise -- of needing to cut costs, manage energy, reduce risk, and back off of complexity. They can't do that under the status quo.

In doing the complete IT package gig, Oracle has signaled the end of the era of best-of-breed, heterogeneous, and perhaps open source components in IT. In the new IT era, services are king. The way you actually serve or acquire them is far less of a concern. Enterprises focus on the business, and the IT comes, well, like electricity.

This is why "cloud" makes no sense to Oracle's CEO Larry Ellison. He'd rather we take out the word "cloud" from cloud computing and replace it with "Oracle." Now that makes sense!

All the necessary ingredients

Oracle has all the major parts and smarts it needs to do this, by the way. Oracle may need an acquisition or two more for better management and perhaps hosting. But that's about it.

Like Apple, Oracle is not killing the old IT era to usher in the new. Oracle is rescuing the old IT models with a viable complete IT acquisition model. Then it too will take all the real dough.

Incidentally, IBM tried for, and came quite close to, a similar variety of enterprise IT domination. That was more than 30 years ago. IBM was an era or two too early. Microsoft tried, and came moderately close -- at least in vision -- to the same thing, moving from the desktop backward into the data center. But, alas, Microsoft was also an era too early.

Both Sun and IBM were seduced over the past 15 years by the interchangeable-parts version of IT ... It's what Java is all about. Microsoft hated Java and never veered from its all-us-or-nothing mantle, which is now passing to Oracle. But Microsoft never had the heft in the core enterprise data center to pull it off. Oracle does.

Yes, Apple and Oracle have clearly learned well from their brethren. And the timing has never been better, the recession a godsend.

So now, as consumers, we have some big choices ... er, actually, maybe we have a big buy-in, yes, but not too much in the way of choices. Like any mainstream consumer and producer of media, I will really need to do business with Apple. Not too much choice. Convenience across the content supply chain has become the killer app. And I love it all the way.

I want my MTV, my New York Times, my Mahler, and my Mad Men. Apple gets it to me as I wish at an acceptable price. Case closed. The end device is not so important any more, be it big, medium, or small, be it Mac or PC. Because of my full-bore consumer seduction, the producers of the content need to follow the gold Apple ring. Same for consumer applications and games, though they are all fundamentally content.

To me as an IT services buyer, Oracle is making a similar offer. Convenience is killer for IT managers too. Oracle, through its appliances, integrated stack, data ecosystem, tuned high-end hardware, business applications, business intelligence, and sales account heft, leaves me breathless. And taking a next breath will probably have an Oracle SLA attached. Whew!

Critical mass in the accounts that matter

Oracle is already irreplaceable in all -- and I mean all -- the major enterprise accounts. Oracle can now substantially reduce complexity across the IT infrastructure front, while seemingly cutting costs and apparently reducing risk. But a huge portion of the total savings goes into Oracle's pockets, making it stronger in more ways in more accounts for the next 20 years. Now it can take the lion's share of the profits in the IT-as-a-service era. I call that dominance.

So let's hear it for the balancing acts still standing. Go IBM! Go Microsoft! Go Google! Go HP! Go SAP! How about Cisco and EMC? You all go for as long as you can, please. Or at least as long as it takes for the next IT and media eras to arrive. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

This handful of companies is about the only insurance policy against Apple and Oracle being able to price with impunity across vast markets that deeply affect us all.

Wednesday, January 27, 2010

Oracle's Sun Java strategy: Business as usual

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

In an otherwise pretty packed news day, we’d like to echo @mdl4’s sentiments about the respective importance of Apple’s and Oracle’s announcements: “Oracle finalized its purchase of Sun. Best thing to happen to Sun since Java. Also: I don’t give a sh#t about the iPad. I said it.”

There’s little new in observing that, on the platform side, Oracle’s acquisition of Sun is a means of turning the clock back to the days of turnkey systems in a post-appliance era. History truly has come full circle, as Oracle in its original database incarnation was one of the prime forces that helped decouple software from hardware.

Fast forward to the present, and customers are tired of complexity and just want things that work. Actually, that idea was responsible for the emergence of specialized appliances over the past decade for performing tasks ranging from SSL encryption/decryption to XML processing, firewalls, email, or specialized web databases.

The implication here is that the concept is elevated to enterprise level; instead of a specialized appliance, it’s your core instance of Oracle databases, middleware, or applications. And even there, it’s but a logical step forward from Oracle’s past practice of certifying specific configurations of its database on Sun (Sun was, and now has become again, Oracle’s reference development platform).

That’s in essence the argument for Oracle to latch onto a processor architecture that is overmatched in investment by Intel’s x86 line. The argument could be raised that, in an era of growing interest in cloud, Oracle is fighting the last war. That would be the case – except for the certainty that your data center has just as much chance of dying as your mainframe.

Question of second source

At the end of the day, it’s inevitably a question of second source. Dana Gardner opines that Oracle will replace Microsoft as the hedge to IBM. Gordon Haff contends that alternate platform sources are balkanizing as Cisco/EMC/VMware butt their virtualized x86 heads into the picture and customers look to private clouds the way they once idealized grids.

The highlight for us was what happens to Sun’s Java portfolio, and as it turns out, the results are not far from what we anticipated last spring: Oracle’s products remain the flagship offerings. From looking at respective market shares, it would be pretty crazy for Oracle to have done otherwise.

The general theme was that – yes – Sun’s portfolio will remain the “reference” technologies for the JCP standards, but that these are really only toys that developers should play with. When they get serious, they’re going to keep using WebLogic, not Glassfish. Ditto for:

• Java software development. You can play around with NetBeans, which Oracle’s middleware chief Thomas Kurian characterized as a “lightweight development environment,” but again, if you really want to develop enterprise-ready apps for the Oracle platform, you will still use JDeveloper, which of course is written for Oracle’s umbrella ADF framework that underlies its database, middleware, and applications offerings. That’s identical to Oracle’s existing posture with the old (mostly) BEA portfolio of Eclipse developer tools. Actually, the only thing that surprised us was that Oracle didn’t simply take NetBeans and set it free – as in donating it to Apache or some more obscure open source body.

• SOA, where Oracle’s SOA Suite remains front and center while Sun’s offerings go on maintenance.

We’re also not surprised at the prominent role of JavaFX in Oracle’s RIA plans; it fills a vacuum created when Oracle terminated BEA’s former arrangement to bundle Adobe Flash/Flex development tooling. In actuality, Oracle has become RIA agnostic, as ADF could support any of the frameworks for client display, but JavaFX provides a technology that Oracle can call its own.

There were some interesting distinctions with identity and access management, where Sun inherited some formidable technologies that, believe it or not, originated with Netscape. Oracle Identity Management will grab some provisioning technology from the Sun stack, but otherwise Oracle’s suite will remain the core attraction. Sun’s identity and access management won’t be put out to pasture, though, as it will be promoted for midsized web installations.

There are much bigger pieces to Oracle’s announcements, but we’ll finish with what becomes of MySQL. In short, there’s nothing surprising in the announcement that MySQL will be maintained in a separate open source business unit – the EU would not have allowed otherwise. But we’ve never bought into the story that Oracle would kill MySQL. The two databases aim at different markets. Just about the only difference that Oracle’s ownership of MySQL makes – besides reuniting it under the same corporate umbrella as the InnoDB data store – is that, well, like yeah, MySQL won’t morph into an enterprise database. Then again, even if MySQL had remained independent, it arguably was never going to evolve into the same class of database as Oracle, because the product would have lost its beloved simplicity.

The more relevant question for MySQL is whether Oracle will fork development to favor Solaris on SPARC. This being open source, there would be nothing stopping the community from taking the law into its own hands.

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

Friday, January 22, 2010

The Christmas Day bomber, Moore’s Law, and enterprise IT's new challenges

This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.

By Jason Bloomberg

Amid the posturing and recriminations following this past December’s ill-fated terrorist attack by the alleged Nigerian Christmas bomber, the underlying cause of the intelligence breach has gone all but unnoticed.

How is it the global post-9/11 anti-terrorist machine could miss a lone Nigerian with explosives in his underwear? After all, chatter included reference to “the Nigerian,” his own father gave warning, he was on a terrorist watch list, and he purchased a one-way ticket to Detroit, paid cash, and checked no luggage. You’d think any one of these bits of information would set off alarms, and the fact that the intelligence community missed the lot is a sign of sheer incompetence, right?

Not so fast. Such a conclusion is actually fallacious. The missing piece of the puzzle is the fact that there are hundreds of thousands of monthly air travelers, and millions of weekly messages that constitute the chatter the intelligence community routinely follows. And that watch list? Hundreds of thousands of names, to be sure.

Furthermore, the quantity of information that agents must follow is increasing at an exponential rate. So, while it seems in retrospect that agents missed a huge red flag, in actuality there is so much noise that even the combination of warnings, taken together, was lost in it. A dozen red flags, yes, but could you discern a dozen red grains of sand on a beach?

The true reason behind the intelligence breach is far more subtle than simple incompetence, and furthermore, the solution is just as difficult to discern. The most interesting part of this discussion from ZapThink’s perspective, naturally, is the implication for enterprise IT.

The global intelligence community is but one enterprise among many dealing with exponentially increasing quantities and complexity of information. All other enterprises, in the private as well as public sector, face similar challenges: As Moore’s Law and its corollaries proceed on their inexorable path, what happens when the human ability to deal with the resulting information overload falls short? How can you help your organization keep from getting lost in the noise?

The governance crisis point

Strictly speaking, Moore’s Law states that the number of transistors that current technology can cram onto a chip of a given size will increase exponentially over time. But the transistors on a chip are really only the tip of the iceberg; along with processing power we have exponential growth in hard drive capacity, network speed, and other related measures – what we’re calling corollaries to Moore’s Law. And of course, there’s also the all-important corollary to Murphy’s Law that states that the quantity of information available will naturally expand to fill all available space.

Anybody who remembers the wheat and chessboard problem knows that this explosion of information will lead to problems down the road. IT vendors, of course, have long seen this trend as a huge opportunity, and have risen to the occasion with tools to help organizations manage the burgeoning quantity of information. What vendors cannot do, however, is improve how people deal with this problem.

Fundamentally, human capabilities at best grow linearly. Our brains, after all, are not subject to Moore’s Law, and even so, enterprises depend far more on the interactions among people than on the contributions of individuals taken separately. While the number of transistors may double every 18 months, our management, analysis, and other communication skills will only see gradual improvements at best.

This disconnect leads to what ZapThink calls the governance crisis point, as illustrated in the figure below.

The governance crisis point

The diagram above illustrates the fact that while the quantity and complexity of information in any enterprise grows exponentially, the human ability to deal with that information grows at best linearly. No matter where you put the two curves, one eventually overtakes the other at the governance crisis point, leading to the "governance crisis point problem": human activities become unable to deal with the quantity and complexity of information.
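
A back-of-the-envelope sketch makes the crossover easy to see (the growth rates and starting values below are assumptions chosen purely for illustration): information doubling every 18 months, as with Moore’s Law and its corollaries, against a governance capacity that improves only linearly.

# Illustrative only: find when exponentially growing information overtakes
# linearly improving human governance capacity.
info = 1.0        # normalized information volume today
capacity = 20.0   # normalized amount the organization can effectively govern

year = 0
while info <= capacity and year < 100:
    year += 1
    info *= 2 ** (12 / 18)   # doubles every 18 months
    capacity += 1.0          # linear improvement in people and process

print(f"governance crisis point reached in roughly year {year}")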

Unfortunately, no technology can solve this problem, because technology only affects the exponential curve. I’m sure today’s intelligence agents have state-of-the-art analysis tools, since after all, if they don’t have them, then who does? But the bomber was still able to get on the plane.

Furthermore, neither is the solution to this problem a purely human one. We’d clearly be fooling ourselves to think that if only we worked harder or smarter, we might be able to keep up. Equally foolish would be the assumption we might be able to slow down the exponential growth of information. Like it or not, this curve is an inexorable juggernaut.

SOA to the rescue?

Seeing as this article is from ZapThink, you might think that service-oriented architecture (SOA) is the answer to this problem. In fact, SOA plays a support role, but the core of the solution centers on governance, hence the name of the crisis point. Anyone who’s been through our Licensed ZapThink Architect course or our SOA & Cloud Governance course understands that the relationship between SOA and governance is a complex one, as SOA depends upon governance but also enables governance for the organization at large.

Just so with the governance crisis point problem: Neither technology nor human change alone will solve the problem, but a better approach to formalizing the interactions between people and technology gives us a path to the solution. The starting point is to understand that governance involves creating, communicating, and enforcing policies that are important to an organization, and that those policies may be anywhere on a spectrum from human-centric to technology-centric. In the context of SOA, then, the first step is to represent certain policies as metadata and incorporate those metadata in the organization’s governance framework.

In practice, the governance team sorts the policies within the scope of the current project into those that are best handled by human interactions and those that lend themselves to automation. Representing the latter set of policies as metadata enables the SOA governance infrastructure to automate policy enforcement as well as other policy-based processes. Such policy representations alone, however, cannot solve the governance crisis point problem.
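
As a hedged illustration of what "policy as metadata" might look like in practice (the policy names, fields, and checks below are hypothetical, not drawn from any particular SOA governance product), some policies carry a machine-enforceable check while others remain human responsibilities:

# Hypothetical policy metadata: some policies are enforceable by machine,
# others must stay with people (review boards, sign-offs, training).
policies = [
    {"id": "SEC-01", "rule": "all service calls must use TLS",
     "automatable": True,  "check": lambda msg: msg.get("transport") == "tls"},
    {"id": "ARC-07", "rule": "new services need architecture board approval",
     "automatable": False, "check": None},
]

def enforce(message):
    """Run every machine-enforceable policy against an in-flight message."""
    return [p["id"] for p in policies
            if p["automatable"] and not p["check"](message)]

violations = enforce({"transport": "http", "payload": "..."})
print("violations:", violations)   # ['SEC-01']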

The answer lies in how the governance team deals with policies, in other words, what are their policies regarding policies, or what ZapThink likes to call metapolicies. Working through the organization’s policies for dealing with governance, and automating those policies, gives the organization a "metapolicy feedback loop" approach to leveraging the power of technology to improve governance overall.

Catching terrorists and other IT management challenges

How this metapolicy feedback loop might help intelligence agents catch the next terrorist provides a simple illustration of how any enterprise might approach their own information explosion challenges. First, how do agents deal with information today? Basically, they have an information challenge, they implement tools to address that challenge, and they have policies for how to use those tools, as the expression below illustrates:

Information problem --> tools --> policies for using tools --> governance

Now, the challenge with the expression above is that it’s static; it doesn’t take into account the fact that the information problem explodes exponentially, while governance best practices grow linearly. As a result, eventually the quantity of information overwhelms the capabilities of the tools, leading to failures like the explosive in the underwear. Instead, here’s how the expression should work:

Information problem --> tools --> policies for using tools --> metapolicies for dealing with governance --> next-generation governance tools --> best practice approach for dealing with information problem over time

Essentially, the crisis point requires a new level of interaction between human activity and technology capability, a technology-enabled governance feedback loop that promises to enable any enterprise to deal with the information explosion, regardless of whether you’re catching terrorists or pleasing shareholders.
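
To ground that feedback loop in something concrete, here is a purely hypothetical sketch (the scores, thresholds, and capacity figures are invented): a policy flags items for human review, and a metapolicy retunes the policy's threshold so the flagged volume stays within what analysts can actually process, even as overall volume grows.

import random

# Hypothetical metapolicy feedback loop: the policy flags items for human
# review; the metapolicy keeps the human workload bounded by retuning the
# policy's threshold as volume grows.
REVIEW_CAPACITY_PER_WEEK = 500   # what the analysts can actually handle

def flag_for_review(items, threshold):
    return [i for i in items if i["risk_score"] >= threshold]

def metapolicy_adjust(items, threshold):
    """Raise the bar whenever flagged volume exceeds human capacity."""
    while len(flag_for_review(items, threshold)) > REVIEW_CAPACITY_PER_WEEK:
        threshold += 1
    return threshold

random.seed(0)
items = [{"risk_score": random.randint(0, 100)} for _ in range(20000)]
threshold = metapolicy_adjust(items, threshold=50)
print("retuned threshold:", threshold,
      "flagged:", len(flag_for_review(items, threshold)))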

The ZapThink take

Okay, so just how does SOA fit into this story? Remember that as enterprise architecture, SOA consists of a set of best practices for organizing and leveraging IT resources to meet business needs, and the act of applying and enforcing such practices is what we mean by governance. Furthermore, SOA provides a best-practice approach for implementing governance, not just of the services that the SOA implementation supports, but for the organization as a whole.

In essence, SOA leads to a more formal approach to governance, where organizations are able to leverage technology to improve the creation, communication, and enforcement of policies across the board, including those policies that deal with how to automate such governance processes. In the intelligence example, SOA might help agents leverage technology to identify suspicious patterns more effectively by allowing them to craft increasingly sophisticated intelligence policies. In the general case, SOA can lead to more effective management decision making across large organizations.

There is, of course, more to this story. We’ve discussed the problem of too much information before, in our ZapFlash on Net-Centricity, for example. Technology progress leaving people behind is a common thread to all of ZapThink’s research.

If you’re struggling with your own information explosion issues, whether you’re in the intelligence community, the U.S. Department of Defense, or simply dealing with the day-to-day reality that is enterprise IT, drop us a line! Maybe we can help you prevent the next intelligence breach in your organization.

This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.


SPECIAL PARTNER OFFER

SOA and EA Training, Certification,
and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.