Friday, January 22, 2010

The Christmas Day bomber, Moore’s Law, and enterprise IT’s new challenges

This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.

By Jason Bloomberg

Amid the posturing and recriminations following this past December’s ill-fated terrorist attack by the alleged Nigerian Christmas bomber, the underlying cause of the intelligence breach has gone all but unnoticed.

How is it the global post-9/11 anti-terrorist machine could miss a lone Nigerian with explosives in his underwear? After all, chatter included reference to “the Nigerian,” his own father gave warning, he was on a terrorist watch list, and he purchased a one-way ticket to Detroit, paid cash, and checked no luggage. You’d think any one of these bits of information would set off alarms, and the fact that the intelligence community missed the lot is a sign of sheer incompetence, right?

Not so fast. Such a conclusion is actually fallacious. The missing piece of the puzzle is the fact that there are hundreds of thousands of monthly air travelers, and millions of weekly messages that constitute the chatter the intelligence community routinely follows. And that watch list? Hundreds of thousands of names, to be sure.

Furthermore, the quantity of information that agents must follow is increasing at an exponential rate. So, while it seems in retrospect that agents missed a huge red flag, in actuality there is so much noise that even the warnings taken together were lost in it. A dozen red flags, yes, but could you discern a dozen red grains of sand on a beach?

The true reason behind the intelligence breach is far more subtle than simple incompetence, and furthermore, the solution is just as difficult to discern. The most interesting part of this discussion from ZapThink’s perspective, naturally, is the implication for enterprise IT.

The global intelligence community is but one enterprise among many dealing with exponentially increasing quantities and complexity of information. All other enterprises, in the private as well as public sector, face similar challenges: As Moore’s Law and its corollaries proceed on their inexorable path, what happens when the human ability to deal with the resulting information overload falls short? How can you help your organization keep from getting lost in the noise?

The governance crisis point

Strictly speaking, Moore’s Law states that the number of transistors that current technology can cram onto a chip of a given size will increase exponentially over time. But the transistors on a chip are really only the tip of the iceberg; along with processing power we have exponential growth in hard drive capacity, network speed, and other related measures – what we’re calling corollaries to Moore’s Law. And of course, there’s also the all-important corollary to Murphy’s Law that states that the quantity of information available will naturally expand to fill all available space.

Anybody who remembers the wheat and chessboard problem knows that this explosion of information will lead to problems down the road. IT vendors, of course, have long seen this trend as a huge opportunity, and have risen to the occasion with tools to help organizations manage the burgeoning quantity of information. What vendors cannot do, however, is improve how people deal with this problem.

Fundamentally, human capabilities at best grow linearly. Our brains, after all, are not subject to Moore’s Law, and what’s more, enterprises depend far more on the interactions among people than on the contributions of individuals taken separately. While the number of transistors may double every 18 months, our management, analysis, and other communication skills will only see gradual improvements at best.

This disconnect leads to what ZapThink calls the governance crisis point, as illustrated in the figure below.

The governance crisis point

The diagram above illustrates the fact that while the quantity and complexity of information in any enterprise grows exponentially, the human ability to deal with that information at best grows linearly. No matter where you put the two curves, the exponential one eventually overtakes the linear one at the governance crisis point, leading to the “governance crisis point problem”: sooner or later, human activities are unable to deal with the quantity and complexity of information.
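To make the crossover concrete, here is a minimal Python sketch of the two curves. The starting values, the doubling cadence, and the linear improvement rate are all invented numbers for illustration, not ZapThink figures.

# Illustrative sketch of the governance crisis point: information volume
# doubling each period (a Moore's Law-style curve) versus governance
# capacity that improves by a constant amount each period. All starting
# values and rates below are made-up assumptions.

def crisis_point(info_start=1.0, capacity_start=100.0,
                 capacity_gain=10.0, max_periods=50):
    """Return the first period at which information outgrows capacity."""
    info, capacity = info_start, capacity_start
    for period in range(1, max_periods + 1):
        info *= 2                    # exponential growth
        capacity += capacity_gain    # linear growth
        if info > capacity:
            return period
    return None

if __name__ == "__main__":
    print(crisis_point())  # -> 8 with these illustrative numbers

Even a 100x head start for the linear curve only delays the crossover by a few doublings, which is the point of the figure.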

Unfortunately, no technology can solve this problem, because technology only affects the exponential curve. I’m sure today’s intelligence agents have state-of-the-art analysis tools, since after all, if they don’t have them, then who does? But the bomber was still able to get on the plane.

Furthermore, neither is the solution to this problem a purely human one. We’d clearly be fooling ourselves to think that if only we worked harder or smarter, we might be able to keep up. Equally foolish would be the assumption we might be able to slow down the exponential growth of information. Like it or not, this curve is an inexorable juggernaut.

SOA to the rescue?

Seeing as this article is from ZapThink, you might think that service-oriented architecture (SOA) is the answer to this problem. In fact, SOA plays a support role, but the core of the solution centers on governance, hence the name of the crisis point. Anyone who’s been through our Licensed ZapThink Architect course or our SOA & Cloud Governance course understands that the relationship between SOA and governance is a complex one, as SOA depends upon governance but also enables governance for the organization at large.

Just so with the governance crisis point problem: Neither technology nor human change will solve the problem, but a better approach to formalizing the interactions between people and technology gives us a path to the solution. The starting point is to understand that governance involves creating, communicating, and enforcing policies that are important to an organization, and that those policies may be anywhere on a spectrum from human-centric to technology-centric. In the context of SOA, then, the first step is to represent certain policies as metadata, and incorporate that metadata in the organization’s governance framework.

In practice, the governance team sorts the policies within scope of the current project into those that are best handled by human interactions and those that lend themselves to automation. Representing the latter set of policies as metadata enables the SOA governance infrastructure to automate policy enforcement as well as other policy-based processes. Such policy representations alone, however, cannot solve the governance crisis point problem.
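As a rough illustration of what “policies as metadata” can look like, here is a small Python sketch. The policy fields, the service descriptor, and the enforcement check are all hypothetical and are not drawn from any particular SOA governance product.

# Hypothetical sketch: a machine-readable policy (metadata) plus a tiny
# enforcement hook, showing how a governance framework could automate the
# policies that lend themselves to automation. Field names are invented.

policy = {
    "id": "svc-sec-001",
    "description": "Externally exposed services must use TLS and authentication",
    "applies_to": {"service_visibility": "external"},
    "requires": {"transport": "https", "authentication": True},
}

def enforce(policy, service_descriptor):
    """Return a list of violations of `policy` by the given service metadata."""
    violations = []
    if all(service_descriptor.get(k) == v for k, v in policy["applies_to"].items()):
        for key, required in policy["requires"].items():
            if service_descriptor.get(key) != required:
                violations.append(f"{key} must be {required!r}")
    return violations

service = {"name": "QuoteService", "service_visibility": "external",
           "transport": "http", "authentication": True}
print(enforce(policy, service))  # -> ["transport must be 'https'"]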

The answer lies in how the governance team deals with policies, in other words, in their policies regarding policies, or what ZapThink likes to call metapolicies. Working through the organization’s policies for dealing with governance, and automating those policies, gives the organization a “metapolicy feedback loop” approach to leveraging the power of technology to improve governance overall.

Catching terrorists and other IT management challenges

How this metapolicy feedback loop might help intelligence agents catch the next terrorist provides a simple illustration of how any enterprise might approach their own information explosion challenges. First, how do agents deal with information today? Basically, they have an information challenge, they implement tools to address that challenge, and they have policies for how to use those tools, as the expression below illustrates:

Information problem --> tools --> policies for using tools --> governance

Now, the challenge with the expression above is that it’s static; it doesn’t take into account the fact that the information problem explodes exponentially, while governance best practices grow linearly. As a result, eventually the quantity of information overwhelms the capabilities of the tools, leading to failures like the explosive in the underwear. Instead, here’s how the expression should work:

Information problem --> tools --> policies for using tools --> metapolicies for dealing with governance --> next-generation governance tools --> best practice approach for dealing with information problem over time

Essentially, the crisis point requires a new level of interaction between human activity and technology capability, a technology-enabled governance feedback loop that promises to enable any enterprise to deal with the information explosion, regardless of whether you’re catching terrorists or pleasing shareholders.

The ZapThink take

Okay, so just how does SOA fit into this story? Remember that as enterprise architecture, SOA consists of a set of best practices for organizing and leveraging IT resources to meet business needs, and the act of applying and enforcing such practices is what we mean by governance. Furthermore, SOA provides a best-practice approach for implementing governance, not just of the services that the SOA implementation supports, but for the organization as a whole.

In essence, SOA leads to a more formal approach to governance, where organizations are able to leverage technology to improve the creation, communication, and enforcement of policies across the board, including those policies that deal with how to automate such governance processes. In the intelligence example, SOA might help agents leverage technology to identify suspicious patterns more effectively by allowing them to craft increasingly sophisticated intelligence policies. In the general case, SOA can lead to more effective management decision making across large organizations.

There is, of course, more to this story. We’ve discussed the problem of too much information before, in our ZapFlash on Net-Centricity, for example. Technology progress leaving people behind is a common thread to all of ZapThink’s research.

If you’re struggling with your own information explosion issues, whether you’re in the intelligence community, the U.S. Department of Defense, or simply wrestling with the day-to-day reality that is enterprise IT, drop us a line! Maybe we can help you prevent the next intelligence breach in your organization.

This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.


SPECIAL PARTNER OFFER

SOA and EA Training, Certification,
and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.

Monday, January 18, 2010

Technical and economic incentives mount for seeking alternatives to costly mainframe applications

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.


Gain more insights into "Application Transformation: Getting to the Bottom Line" via a series of regional HP virtual conferences:

Access the regional Asia Pacific conference, the EMEA conference, or the Americas event.

Technical and economic incentives are mounting that make a strong case for modernizing and transforming enterprise mainframe applications -- and the aging infrastructure that supports them.

IT budget planners are using the stringent economic environment to force a harder look at alternatives to inflexible and hard-to-manage legacy systems, especially as enterprises seek to cut their total and long-term IT operations spending.

The rationale around reducing total costs is also forcing a recognition of the intrinsic difference between core applications and so-called context -- context being applications that are there for commodity productivity reasons, not for core innovation, customization or differentiation.

With a commodity productivity application, the most effective delivery is on the lowest-cost platform or from a provider. The problem is that 20 or 30 years ago, people put everything on mainframes. They wrote it all in code.

The challenge now is how to free up the applications that are not offering any differentiation -- and do not need to be on a mainframe -- and which could be running on much lower-cost infrastructure, or come from a completely different means of delivery, such as software as a service (SaaS).

There are demonstrably much less expensive ways of delivering such plain vanilla applications and services, and significant financial rewards for separating the core from the context in legacy enterprise implementations.

This discussion is the third and final in a series that examines "Application Transformation: Getting to the Bottom Line." The series coincides with a trio of Hewlett-Packard (HP) virtual conferences on the same subject.
Access the regional Asia Pacific conference, the EMEA conference, or the Americas event.
Helping to examine how alternatives to mainframe computing can work, we're joined by John Pickett, worldwide mainframe modernization program manager at HP; Les Wilson, Americas mainframe modernization director at HP; and Paul Evans, worldwide marketing lead on applications transformation at HP. The discussion is moderated by BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Evans: We have seen organizations doing a lot with their infrastructure, consolidating it, virtualizing it, all the right things. At the same time, a lot of CIOs or IT directors know that the legacy applications environment has been somewhat ignored.

Now, with the pressure on cost, people are saying, "We've got to do something, but what can come out of that and what is coming out of that?" People are looking at this and saying, "We need to accomplish two things. We need a longer term strategy. We need an operational plan that fits into that, supported by our annual budget."

Foremost is this desire to get away from this ridiculous backlog of application changes, to get more agility into the system, and to get these core applications, which are the ones that provide the differentiation and the innovation for organizations, able to communicate with a far more mobile workforce.

What people have to look at is where we're going strategically with our technology and our business alignment. At the same time, how can we have a short-term plan that starts delivering on some of the real benefits that people can get out there?

... These things have got to pay for themselves. An analyst recently looked me in the face and said, "People want to get off the mainframe. They understand now that the costs associated with it are just not supportable and are not necessary."

One of the sessions from our virtual conference features Geoffrey Moore, where he talks about this whole difference between core applications and context -- context being applications that are there for productivity reasons, not for innovation or differentiation.

Pickett: It's not really just about the overall cost, but it's also about agility, and being able to leverage the existing skills as well.

One of the case studies that I like is from the National Agricultural Cooperative Federation (NACF). It's a mouthful, but take a look at the number of banks that the NACF has. It has 5,500 branches and regional offices, so essentially it's one of the largest banks in Korea.

One of the items that they were struggling with was how to overcome some of the technology and performance limitations of the platform that they had. Certainly, in the banking environment, high availability and making sure that the applications and the services are running were absolutely key.

At the same time, they also knew that the path to the future was going to be through the IT systems that they had and they were managing. What they ended up doing was modernizing their overall environment, essentially moving their core banking structure from their current mainframe environment to a system running HP-UX. It included the customer and account information. They were able to integrate that with the sales and support piece, so they had more of a 360 degree view of the customer.

We talk about reducing costs. In this particular example, they were able to save $40 million on an annual basis. That's nice, and certainly saving that much money is significant, but, at the same time, they were able to improve their system response time two- to three-fold. So, it was a better response for the users.

But, from a business perspective, they were able to reduce their time to market. For developing a new product or service, they were able to decrease that time from one month to five days.

Makes you more agile

If you are a bank and now you can produce a service much faster than your competition, that certainly makes it a lot easier and makes you a lot more agile. So, the agility is not just for the data center, it's for the business as well.

To take this story just a little bit further, they saw that in addition to the savings I just mentioned, they were able to triple the capacity of the systems in their environment. So, it's not only running faster and being able to have more capacity so you are set for the future, but you are also able to roll out business services a whole lot quicker than you were previously.

... Another example of what we were just talking about is that, if we shift to the Europe, Middle East, and Africa region, there is a very large insurance company in Spain. It ended up modernizing 14,000 MIPS. Even though the applications had been developed over a number of years and decades, they were able to make the transition in a relatively short length of time. In a three- to six-month time frame they were able to move that forward.

With that, they saw a 2x increase in their batch performance. It's recognized as one of the largest batch re-hosts out there. It's not just an HP thing. They worked with Oracle on that as well to be able to drive Oracle 11g within the environment.

Wilson: ... In the virtual conferences, there are also two particular customer case studies worth mentioning.

In terms of customer situations, we've always had a very active business working with organizations in manufacturing, retail, and communications. One thing that I've perceived in the last year specifically -- it will come as no surprise to you -- is that financial institutions, and some of the largest ones in the world, are now approaching HP with questions about the commitment they have to their mainframe environments.

We're seeing a tremendous amount of interest from some of the largest banks in the United States, insurance companies, and benefits management organizations, in particular.

Second, maybe benefiting from some of the stimulus funds, a large number of government departments are approaching us as well. We've been very excited by customer interest in financial services and public sector.

The first case study is a project we recently completed at a wood and paper products company, a worldwide concern. In this particular instance we worked with their Americas division on a re-hosting project of applications that are written in the Software AG environment. I hope that many of the listeners will be familiar with the database ADABAS and the language, Natural. These applications were written some years ago, using those Software AG tools.

Demand was lowered

The user company had divested one of the major divisions within the company, and that meant that the demand for mainframe services was dramatically lowered. So, they chose to take the residual applications, the Software AG applications, representing about 300-350 MIPS, and migrate those in their current state, away from the mainframe, to an HP platform.

Many folks listening to this will understand that the Software AG environment can either be transformed and rewritten to run, say, in an Oracle or a Java environment, or we can maintain the customer's investment in the applications and simply migrate the ADABAS and Natural, almost as they are, from the mainframe to an alternative HP infrastructure. The latter is what we did.

By not needing to touch the mainframe code or the business rules, we were able to complete this project in a period of six months, from beginning to end. The user tells us that they are saving over $1 million today in avoiding the large costs associated with mainframe software, as well as maintenance and depreciation on the mainframe environment.

... The more monolithic approach to applications development and maintenance on the mainframe is a model that was probably appropriate in the days of the large conglomerates, where we saw a lot of companies trying to centralize all of that processing in large data centers. This consolidation made a lot of sense, when folks were looking for economies of scale in the mainframe world.

Today, we're seeing customers driving for a higher degree of agility. In fact, my second case study represents that concept in spades. This is a large multinational manufacturing concern. We will just refer to them as "a manufacturing company." They have a large number of businesses in their portfolio.

Our particular customer in this case study is the manufacturer of electronic appliances. One of the driving factors for their mainframe migration was ... to divest themselves from the large mainframe corporate environment, where most of the processing had been done for the last 20 years.

They wanted control of their own destiny to a certain extent, and they also wanted to prepare themselves for potential investment, divestment, and acquisition, just to make sure that they were masters of their own future.

Pickett: ... Just within the past few months, there was a survey by AFCOM, a group that represents data-center workers. It indicated that, over the next two years, 46 percent of the mainframe users said that they're considering replacing one or more of their mainframes.

Now, let that sink in -- 46 percent say they are considering replacing high-end systems over the next two years. That's a strikingly high number. So, it certainly points to a trend that we are seeing in that particular environment -- not a blip at all.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.


Gain more insights into "Application Transformation: Getting to the Bottom Line" via a series of regional HP virtual conferences:

Access the regional Asia Pacific conference, the EMEA conference, or the Americas event.

Tuesday, January 12, 2010

Is Google the best candidate to create a good, customer-focused cloud banking service portfolio?

Slowly -- and sometimes not so slowly -- the bricks have been giving way to the clicks for the past 15 years. Plenty of formerly unassailable business models have suffered as a result. The tears flowing for these companies, however, have been few outside their own high, stony walls.

Users, customers, innovators, seekers -- the majority bottom sections of the social and economic pyramids -- these are the big winners in the many wonderful effects of the Web and Internet. And I for one have the freedom, productivity, choice and empowerment to prove it.

Except in one glaring area: banking. We are by no means done on the disruption front.

I have had it with the old financial processes, lack of capability, murky institutions, rip-offs, peonage fees/rates -- and especially attitudes. As far as I'm concerned -- as a consumer, family, and business -- I'm ready to fire them all and move to the inevitable cloud- and open source-based alternatives.

I have had it with credit cards, banks, mutual fund companies, PayPal, debit cards, MasterCard and Visa. As far as I'm concerned they are all fired. They do a lousy job, have suspect security, charge too much, stiff you with hidden fees and raise their rates whenever they want. Why pay 15 percent interest on a credit card when money can be borrowed for less than 2 percent? For their service? For their security? Because they can do a basic two-phase commit?

Merchants hate it, users hate it. Why are we waiting on this? Let the banking disruption rumpus begin!

You want financial industry reform? Screw the Congress, SEC and Fed. Barney Frank and Chris Dodd don't seem to have the stomach and/or power to make much difference. Same with Obama. What we need is real competition -- Internet style. The financial industry needs to follow the mainstream media (and others like car makers and hopefully cell phone networks) on a strict diet of lower costs, less egregious profits, less pitiful service -- and to be swiftly outmatched on their piss-poor online capabilities.

Like a lot of big, old industries, banking is nowadays essentially a function of software, standard protocols, high-performance (yet standard) IT systems ... and soon impeccable cloud computing credentials. But the key is good software, making things work for the users and community, not just the providers.

A few good transactions

If I can order movies, rent a car, and run a small business online, I should be able to do a few basic financial transactions online. I'd like to do more micro-payments and automated financial and business processes. Credit cards are not the best way to do this. Yet I seem to be stuck with a loan shark when I simply need to be able to order and fulfill a modest online transaction.

So let's have those that are good at what really counts -- software and cloud computing experts -- offering the banking services that we as consumers and businesses really want.

I'm tempted to write a similar screed about health care and mobile telephony, but that will have to wait. We need to nail banking, finance, and insurance first. It impacts all the rest.

The last two years are and should be the last straw. Wake up. In these failed finance industries -- the corporate leaders of which we as U.S. taxpayers apparently own in no small degree -- "Too big to fail" needs to be replaced with too good to resist. The companies that should be subsidized are the ones that create productivity, lower costs, improve service and propel -- rather than hamstring -- the economy.

Why as part of the stimulus are the governments not creating the legislation to allow a new breed of bank to emerge? Why are the laws not being amended to allow for more -- not less! -- competition in the financial realm? What choice do we really have? MasterCard and Visa are not a choice.

In other words, we need a viable new cloud banking option era. Marc Andreessen told Charlie Rose when he set up his latest venture fund last year that new online banking was ripe for investment. He's right. Let's get on with it. I'll be your first customer.

Let the big guy do it


Meanwhile, how about Google? Like a dog on a meat truck, they have their teeth into everything else around them. Why not online banking too? You can't blame them for being too big to succeed, can you?

If any of us can explore, learn, compare, shop, order, track, and share our experiences via Google -- the actual monetary transactions scattered inside these processes should be a natural component too. Right?

Is Google the best candidate to create a good, customer-focused cloud banking service portfolio? I think they would provide just the catalyst for change we so desperately need. We can then expect Microsoft to enter the field three years later, perhaps for an added element of choice and change.

MicroCard and Googlesta! Hey, it's a start, and almost certainly an improvement.

Monday, January 11, 2010

The march of Progress Software: Savvion provides latest entry in BPM consolidation parade

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

Is it more than coincidence that IT acquisitions tend to come in waves? Just weeks after IBM's announcement that it would snap up Lombardi, Progress Software today responds with an agreement to put Savvion out of its misery. In such a small space that is undergoing active consolidation, it is hard not to know who’s in play.

Nonetheless, Progress’s acquisition confirms that the days of the Business Process Management (BPM) pure play are numbered, if you expect executable BPM.

The traditional appeal of BPM was that it was a business stakeholder-friendly approach to developing solutions that didn’t rely on IT programmatic logic. The mythology around BPM pure-plays was that these were business user-, not IT-, driven software buys. [Disclosure: Progress Software is a sponsor of BriefingsDirect podcasts.]

In actuality, they simply used a different language or notation: process models with organizational and workflow-oriented semantics as opposed to programmatic execution language. That stood up only as long as you used BPM to model your processes, not automate them.

Consequently, it is not simply the usual issues of vendor size and viability that are driving IT infrastructure and stack vendors to buy up BPM pure plays. It is that, but more importantly, if you want your BPM tool to become more than documentware or shelfware, you need a solution with a real runtime.

And that means you need IT front and center, and the stack people right behind it. Even with emergence of BPMN 2.0, which adds support for executables, the cold hard facts are that anytime, anything executes in software, IT must be front and center. So much for bypassing IT.

Progress’s $49 million deal, which closes right away, is a great exit strategy for Savvion. The company, although profitable, has grown very slowly over its 15 years. Even assuming the offer was at a 1.5x multiple, Savvion’s low eight-figure business is not exactly something that a large global enterprise could gain confidence in.

Savvion was in a challenging segment: A tiny player contending for enterprise, not departmental, BPM engagements. If you are a large enterprise, would you stake your enterprise BPM strategy on a slow-growing player whose revenues are barely north of $10 million? It wasn’t a question of whether, but when, Savvion would be acquired.

[Editor's note: Savvion is bringing a new geographical footprint to Progress. Savvion is well positioned in India, where Progress is eager to tread. And Progress is prominent in Europe, where the Savvion-broadened portfolio will sell better than Savvion could alone. ... Dana Gardner.]

Questions remain

Of course that leads us to the question as to why Progress couldn’t get its hands on Savvion in time to profit from Savvion’s year-end deals. It certainly would have been more accretive to Progress’ bottom line had they completed this deal three months ago (long enough not to disrupt the end of year sales pipeline).

Nonetheless, Savvion adds a key missing piece for Progress’s Apama events processing strategy (you can read Progress/Apama CTO John Bates’s rationale here). There is a symbiotic relationship between event processing and business process execution; you can have events trigger business processes or vice versa.

There is some alignment with the vertical industry templates that both have been developing, especially for financial services and telcos, which are the core bastions (along with logistics) for EP. And with the Sonic service bus, Progress has a pipeline for ferrying events.

[Editor's note: The combination of Savvion BPM and Progress CEP helps bring the vision of operational responsiveness, Progress's value theme of late, out from the developer and IT engineer purview (where it will be even stronger) and gets it far closer to the movers and shakers of actual business outcomes.

The Savvion buy also nudges Progress closer to an integrated business intelligence (BI) capability. It will be curious to see if Progress builds, buys or partners its way to adding more BI capabilities into its fast-expanding value mix. ... Dana Gardner.]

In the long run, there could also be a legacy renewal play by using the Savvion technology to expose functionality for Progress OpenEdge or DataDirect customers, but wisely, that is now a back-burner item for Progress, which is not the size of IBM or Oracle, and therefore needs to focus its resources.

Although Progress does not call itself a stack player, it is evolving de facto stacks in capital markets, telcos, and logistics.

Event processing, a.k.a. Complex Event Processing (CEP, a forbidding label) or Business Events Processing (a friendlier label that actually doesn't mean much), is still an early-adopter market. In essence, this market fulfills a niche where events are not human detectable and require some form of logic to identify and then act upon.

The market itself is not new; capital markets have developed homegrown event processing algorithms for years. What’s new (as in, what’s new in the last decade) is that this market has started to become productized. More recently, SQL-based approaches have emerged to spread high-end event processing to a larger audience.

Acquiring Savvion ups the stakes with TIBCO Software, which also has a similar pairing of technologies in its portfolio. [Disclosure: TIBCO is a sponsor of BriefingsDirect podcasts.]

Given ongoing consolidation, that leaves Active Endpoints, Pegasystems, Appian, plus several open source niche pure plays still standing. [Disclosure: Active Endpoints is a sponsor of BriefingsDirect podcasts.]

Like Savvion, Pega is also an enterprise company, but it is a public company with roughly 10x the revenues, which has still managed to grow in the 25 percent range in spite of the recession. While in one way it might make a good fit with SAP (both have their own, entrenched, proprietary languages), Pega is stubbornly independent and SAP acquisition-averse.

Pega might be a fit with one of the emerging platform stack players like EMC or Cisco. On second thought, the latter would be a much more logical target for web-based Appian or fast-growing Active Endpoints, still venture-funded, but also a promising growth player that at some point will get swept up.

[Editor's note: Tony's right. Once these IT acquisition waves begin, they tend to wash over the whole industry as most buyers and sellers match up. As with EAI middleware, BI, and data cleansing technologies before, BPM is the current belle of the ball. ... Dana Gardner.]

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

Tuesday, January 5, 2010

Architectural shift joins app logic with massive data sets to take advanced BI analytics to real-time performance heights

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Aster Data Systems.

New architectures for data and logic processing are ushering in a game-changing era of advanced analytics.

These new approaches support massive data sets to produce powerful insights and analysis -- yet with unprecedented price-performance. As we enter 2010, enterprises are including more forms of diverse data into their business intelligence (BI) activities. They're also diversifying the types of analysis that they expect from these investments.

At the same time, more kinds and sizes of companies and government agencies are seeking to deliver ever more data-driven analysis for their employees, partners, users, and citizens. It boils down to giving more communities of participants what they need to excel at whatever they're doing. By putting analytics into the hands of more decision makers, huge productivity wins across entire economies become far more likely.

But such improvements won’t happen if the data can't effectively reach the application's logic, if the systems can't handle the massive processing scale involved, or if the total costs and complexity are too high.

In this sponsored podcast discussion we examine how convergence of data and logic, of parallelism and MapReduce -- and of a hunger for precise analysis with a flood of raw new data -- are all setting the stage for powerful advanced analytics outcomes.

To help learn how to attain advanced analytics and to uncover the benefits from these new architectural activities for ubiquitous BI, we're joined by Jim Kobielus, senior analyst at Forrester Research, and Sharmila Mulligan, executive vice president of marketing at Aster Data Systems. The discussion is moderated by BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Kobielus: Advanced analytics is focused on how to answer questions about the future. It's what's likely to happen -- forecast, trend, what-if analysis -- as well as what I like to call the deep present, really current streams for complex event processing.

What's streaming in now? And how can you analyze the great gushing streams of information that are emanating from all your applications, your workflows, and from social networks?

Advanced analytics is all about answering future-oriented, proactive, or predictive questions, as well as current streaming, real-time questions about what's going on now. Advanced analytics leverages the same core features that you find in basic analytics -- all the reports, visualizations, and dashboarding -- but then takes it several steps further.

... What Forrester is seeing is that, although the average data warehouse today is in the 1-10 terabyte range for most companies, we foresee the average warehouse size going, in the middle of the coming decade, into the hundreds of terabytes.

In 10 years or so, we think it's possible, and increasingly likely, that petabyte-scale data warehouses or content warehouses will become common. It's all about unstructured information, deep history, and historical information. A lot of trends are pushing enterprises in the direction of big data.

... We need to rethink the platforms with which we're doing analytical processing. Data mining is traditionally thought of as being the core of advanced analytics. Generally, you pull data from various sources into an analytical data mart.

That analytical data mart is usually on a database that's specific to a given predictive modeling project, let's say a customer analytics project. It may be a very fast server with a lot of compute power for a single server, but quite often what we call the analytical data mart is not the highest performance database you have in your company. Usually, that high performance database is your data warehouse.

As you build larger and more complex predictive models you quickly run into resource constraints on your existing data-mining platform. So you have to look for where you can find the CPU power, the data storage, and the I/O bandwidth to scale up your predictive modeling efforts.

... But, [there is] another challenge, which is advanced analytics producing predictive models. Those predictive models increasingly are deployed in-line to transactional applications to provide some basic logic and rules that will drive such important functions as "next best offer" being made to customers based on a broad variety of historical and current information.

How do you inject predictive logic into your transactional applications in a fairly seamless way? You have to think through that, because, right now, quite often analytical data models, predictive models, in many ways are not built for optimal embedding within your transactional applications. You have to think through how to converge all these analytical models with the transactional logic that drives your business.
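As a rough sketch of what that convergence can look like, the fragment below embeds a trivial stand-in for a predictive model inline in a transactional path. The offer names, scoring weights, and customer fields are invented placeholders, not an actual model or any vendor's API.

# Hypothetical sketch: predictive logic injected into a transactional flow.
# The "model" is a toy stand-in for a scoring function a data-mining tool
# would produce; offers, weights, and thresholds are invented.

def score_next_best_offer(customer):
    """Toy stand-in for a predictive model: score each candidate offer."""
    offers = {
        "premium_upgrade": 0.7 * customer["tenure_years"]
                           + 0.3 * customer["monthly_spend"] / 100,
        "retention_discount": 1.0 if customer["recent_complaints"] > 2 else 0.1,
    }
    return max(offers, key=offers.get)

def handle_checkout(customer, cart):
    """Transactional path with the predictive call embedded inline."""
    order_total = sum(item["price"] for item in cart)
    offer = score_next_best_offer(customer)   # predictive logic in the hot path
    return {"order_total": order_total, "next_best_offer": offer}

customer = {"tenure_years": 4, "monthly_spend": 120, "recent_complaints": 0}
cart = [{"sku": "A1", "price": 19.99}, {"sku": "B7", "price": 5.00}]
print(handle_checkout(customer, cart))  # -> next_best_offer: 'premium_upgrade'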

New data platform

Mulligan: What we see with customers is that the advanced analytics needs and the new generation of analytics that they are trying to do is driving the need for a new data platform.

What you've got is a situation where enterprises want to be able to do more scalable reporting on massive data sets with very, very fast response times. On the reporting side, in terms of the end result to the customer, it is similar to the type of report they are trying to achieve, but the difference is that the quantity of data they're trying to get at, and the amount of data that fills these reports, is far greater than what they had before.

That's what's driving the need for a new platform underneath some of the preexisting BI tools. The tools themselves are good at reporting, but what they need is a data platform beneath them that allows them to do more scalable reporting than was possible before.

... Previously, the choice of a data management platform was based primarily on price-performance, being able to effectively store lots of data, and get very good performance out of those systems. What we're seeing right now is that, although price performance continues to be a critical factor, it's not necessarily the only factor or the primary thing driving their need for a new platform.

What's driving the need now, and one of the most important criteria in the selection process, is the ability of this new platform to be able to support very advanced analytics.

Customers are very precise in terms of the type of analytics that they want to do. So, it's not that a vendor needs to tell them what they are missing. They are very clear on the type of data analysis they want to do, the granularity of data analysis, the volume of data that they want to be able to analyze, and the speed that they expect when they analyze that data.

They are very clear on what their requirements are, and those requirements are coming from the top. Those new requirements, as it relates to data analysis and advanced analytics, are driving the selection process for a new data management platform.

There is a big shift in the market, where customers have realized that their preexisting platforms are not necessarily suitable for the new generation of analytics that they're trying to do.

We see the push toward analysis that's really more near real-time than what they were able to do before. This is not a trivial thing to do when it comes to very large data sets, because what you are asking for is the ability to get very, very quick response times and incredibly high performance on terabytes and terabytes of data to be able to get these kinds of results in real-time.

Social network analysis

Kobielus: Let's look at what's going to be a true game changer, not just for business, but for the global society. It's a thing called social network analysis.

It's predictive models, fundamentally, but it's predictive models that are applied to analyzing the behaviors of networks of people on the web, on the Internet, Facebook, and Twitter, in your company, and in various social network groupings, to determine classification and clustering of people around common affinities, buying patterns, interests, and so forth.

As social networks weave their way into not just our consumer lives, but our work lives, our life lives, social network analysis -- leveraging all the core advanced analytics of data mining and text analytics -- will take the place of the focus group.

You're going to listen to all their tweets and their Facebook updates and you're going to look at their interactions online through your portal and your call center. Then, you're going to take all that huge stream of event information -- we're talking about complex event processing (CEP) -- you're going to bring it into your data warehousing grid or cloud.

You're also going to bring historical information on those customers and their needs. You're going to apply various social network behavioral analytics models to it to cluster people into the categories that make us all kind of squirm when we hear them, things like yuppie and Generation X and so forth.

They can get a sense of how a product or service is being perceived in real-time, so that the provider of that product or service can then turn around and tweak that marketing campaign ...



Social network analysis becomes more powerful as you bring more history into it -- last year, two years, five years, 10 years worth of interactions -- to get a sense for how people will likely respond to new offers, bundles, packages, campaigns, and programs that are thrown at them through social networks.

If you can push not just the analytic models, but to some degree bring transactional applications, such as workflow, into this environment to be triggered by all of the data being developed or being sifted by these models, that is very powerful.

Mulligan: One of the biggest issues that the preexisting data pipeline faces is that the data lives in a repository that's removed from where the analytics take place. Today, with the existing solutions, you need to move terabytes and terabytes of data through the data pipeline to the analytics application, before you can do your analysis.

There's a fundamental issue here. You can't move boulders and boulders of data to an application. It's too slow, it's too cumbersome, and you're not factoring in all your fresh data in your analysis, because of the latency involved.

One of the biggest shifts is that we need to bring the analytics logic close to the data itself. Having it live in a completely different tier, separate from where the data lives, is problematic. This is not a price-performance issue in itself. It is a massive architectural shift that requires bringing analytics logic to the data itself, so that data is collocated with the analytics itself.

MapReduce plays a critical role in this. It is a very powerful technology for advanced analytics and it brings capabilities like parallelization to an application, which then allows for very high-performance scalability.

What we see in the market these days are terms like "in-database analytics," "applications inside data," and all this is really talking about the same thing. It's the notion of bringing analytics logic to the data itself.
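For readers unfamiliar with the programming model being discussed, here is a minimal, single-process Python illustration of MapReduce. Real MPP engines run the map and reduce phases in parallel on the nodes that hold the data, so this sketch shows only the shape of the logic, not the distributed execution, and the click-stream records are invented.

# Minimal illustration of the MapReduce pattern: map emits (key, value)
# pairs, reduce aggregates per key. Distributed engines parallelize both
# phases across the nodes where the data lives; this runs in one process.

from collections import defaultdict

def map_clicks(record):
    # Emit one (customer_id, 1) pair per click-stream record.
    yield record["customer_id"], 1

def reduce_counts(key, values):
    # Aggregate all values emitted for a given key.
    return key, sum(values)

def run_mapreduce(records, mapper, reducer):
    groups = defaultdict(list)
    for record in records:                               # "map" phase
        for key, value in mapper(record):
            groups[key].append(value)
    return [reducer(k, vs) for k, vs in groups.items()]  # "reduce" phase

clicks = [{"customer_id": "c1"}, {"customer_id": "c2"}, {"customer_id": "c1"}]
print(run_mapreduce(clicks, map_clicks, reduce_counts))  # -> [('c1', 2), ('c2', 1)]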

... In the marriage of SQL with MapReduce, the real intent is to bring the power of MapReduce to the enterprise, so that SQL programmers can now use that technology. MapReduce alone does require some sophistication in terms of programming skills to be able to utilize it. You may typically find that skill set in Web 2.0 companies, but often you don’t find developers who can work with that in the enterprise.

What you do find in enterprise organizations is that there are people who are very proficient at SQL. By bringing SQL together with MapReduce what enterprise organizations have is the familiarity of SQL and the ease of using SQL, but with the power of MapReduce analytics underneath that. So, it’s really letting SQL programmers leverage skills they already have, but to be able to use MapReduce for analytics.
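To show the "familiar SQL on top" half of that marriage, the same per-customer aggregation can be written declaratively. The snippet below uses Python's built-in sqlite3 purely as a self-contained stand-in for the SQL interface; it is not Aster Data's actual SQL/MapReduce syntax.

# The same aggregation as the MapReduce sketch above, expressed in plain SQL.
# sqlite3 is used only to make the example runnable in one file.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clicks (customer_id TEXT)")
conn.executemany("INSERT INTO clicks VALUES (?)", [("c1",), ("c2",), ("c1",)])

rows = conn.execute(
    "SELECT customer_id, COUNT(*) FROM clicks "
    "GROUP BY customer_id ORDER BY customer_id"
).fetchall()
print(rows)  # -> [('c1', 2), ('c2', 1)]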

... One of the biggest requirements in order to be able to do very advanced analytics on terabyte- and petabyte-level data sets, is to bring the application logic to the data itself. Earlier, I described why you need to do this. You want to eliminate as much data movement as possible, and you want to be able to do this analysis in as near real-time as possible.

What we did in Aster Data 4.0 is just that. We're allowing companies to push their analytics applications inside of Aster’s MPP database, where now you can run your application logic next to the data itself, so they are both collocated in the same system. By doing so, you've eliminated all the data movement. What that gives you is very, very quick and efficient access to data, which is what's required in some of these advanced analytics application examples we talked about.

Pushing the code

What kind of applications can you push down into the system? It can be any app written in Java, C, C++, Perl, Python, or .NET. It could be an existing custom application that an organization has written and that they need to be able to scale to work on much larger data sets. That code can be pushed down into the database.

It could be a new application that a customer is looking to write to do a level of analysis that they could not do before, like real-time fraud analytics, or very deep customer behavior analysis. If you're trying to deliver these new generations of advanced analytics apps, you would write that application in the programming language of your choice.
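As a rough sketch of the push-down idea, the following Python fragment applies a user-defined analytic function to each data partition in place, so only small results travel back. It illustrates the concept only; it is not Aster Data's actual in-database API, and the transaction data and threshold are invented.

# Hypothetical sketch of "pushing the code to the data": a user-supplied
# function runs against each partition where it lives, instead of shipping
# rows out to a separate analytics tier.

partitions = [  # stand-in for data already distributed across worker nodes
    [{"txn_id": 1, "amount": 25.0}, {"txn_id": 2, "amount": 9400.0}],
    [{"txn_id": 3, "amount": 17.5}, {"txn_id": 4, "amount": 12500.0}],
]

def fraud_flags(rows, threshold=5000.0):
    """User-defined analytic applied to one local partition."""
    return [r["txn_id"] for r in rows if r["amount"] > threshold]

def run_pushed_down(partitions, func):
    # Each partition is processed in place; only the flagged IDs travel back.
    results = []
    for part in partitions:
        results.extend(func(part))
    return results

print(run_pushed_down(partitions, fraud_flags))  # -> [2, 4]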

Kobielus: In this coming decade, we're going to see predictive logic deployed into all application environments, be they databases, clouds, distributed file systems, CEP environments, business process management (BPM) systems, and the like. Open frameworks will be used and developed under more of a service-oriented architecture (SOA) umbrella, to enable predictive logic that’s built in any tool to be deployed eventually into any production, transaction, or analytic environment.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Aster Data Systems.


Sunday, January 3, 2010

Getting on with 2010 and celebrating ZapThink’s 10-year anniversary

This guest post comes courtesy of Ronald Schmelzer, senior analyst at ZapThink.

By Ronald Schmelzer

It’s hard to believe that ZapThink will be a full decade old in 2010. For those of you that don’t know, ZapThink was founded in October 2000 with a simple mission: record and communicate what was happening at the time with XML standards.

From that humble beginning, ZapThink has emerged as a (still small) advisory and education powerhouse focused on Service-Oriented Architecture (SOA), Cloud Computing, and loosely coupled forms of Enterprise Architecture (EA).

Oh, how things have changed, and how they have not. As is our custom, we’ll use this first ZapFlash of the year to look retrospectively at the past year and the upcoming future. But we’ll also wax a bit nostalgic and poetic as we look at the past 10 years and surmise where this industry might be heading in the next decade.

2009: A year of angst

The Times Square Alliance has it right in celebrating Good Riddance Day just prior to New Year’s Eve. There’s a lot that we can be thankful to put behind us. Anne Thomas Manes started out the year with an angst-filled posting declaring that SOA is Dead. Getting past the misleading headline, many in the industry came to the quick realization that SOA is far from dead, but rather going into a less hyped phase.

And for that reason we’re glad. We say good riddance to vendor hype, consulting firm over-selling, and the general proliferation of misunderstanding that plagued the industry from 2000 until this point (SOA is Web Services? SOA is integration middleware? Buy an ESB get a SOA?). We can now declare that the vendor marketing infatuation with SOA is dead and they have a new target in mind: Cloud Computing.

Last year we predicted that SOA would be pushed out of the daily marketing buzz and replaced by Cloud Computing as the latest infatuation of the marketingerati. Specifically we said, “We expect the din of the cloud-related chatter to turn into a real roar by this time next year. Everything SOA-related will probably be turned into something cloud-related by all the big vendors, and companies will desperately try to turn their SOA initiatives into cloud initiatives.”

Oh, boy, were we right ... in spades. Perhaps this wasn’t the most remarkable of predictions, though. Every analyst firm, press writer, and book author was positively foaming at the mouth with Cloud-this and Cloud-that. Of course, if history is a lesson, 90% of what’s being spouted is EAI-cum-SOA-cum-Cloud marketing babble and intellectual nonsense.

But history also teaches us that people have short-term memories and won’t remember. They’ll continue to buy the same software and consulting services, warmed over as new tech with only a few enhancements, mostly in the user interface and system integration.

We also predicted a boom year for SOA education and training, which ended up panning out, for the most part. ZapThink now generates the vast majority of its revenues from SOA training and certification, which has become a multi-million dollar business for us, by itself.

ZapThink is not alone in realizing this boom of EA and SOA training spending. We’ve seen the rapid emergence of a wide range of EA frameworks, SOA methodologies, and disciplines benefiting from a rapid increase in EA and SOA training expenditures. We also predicted that ZapThink would double in size, which hasn’t exactly happened. Instead, we’ve decided to grow through use of partners and contractors – a much wiser move in an economy that has proven to be sluggish throughout 2009.

Yet, not all of our predictions panned out. We promised that there would be one notable failure and one notable success that would be universally and specifically attributed to SOA in 2009, and I can’t say that this has happened. If it did, we’d all know about it.

Rather, we saw the continued recession of SOA into the background as other, more highly hyped and visible initiatives got the thumbs-up of success or the mark of failure. In fact, perhaps this is how it should have been all along. Why should we all know with such grand visibility if it was SOA that succeeded or failed? Indeed, failure or success can rarely be solely attributable to any form of architecture. So, I think it’s possible to say that the prediction itself was misguided. Maybe we should instead have asked for raises for all those involved in SOA projects in 2009.

2010 and beyond: Where are things heading?

It’s easy to have 20/20 hindsight, however. It’s much more difficult to make predictions for the year ahead that aren’t just the obvious no-brainers that anyone who has been observing the market can make. Sure, we can assert that the vendors will continue to consolidate, IT spending will rebound with improving economic conditions, and that cloud computing will continue its inevitable movement through the hype cycle, but that wouldn’t be providing you with any information. Rather, we believe that we can stick our necks out a bit to make some predictions for 2010.

In 2010, we predict that:
  • Open Source SOA infrastructure will dominate – Lack of interest by venture capitalists and consolidation by the Big Five IT infrastructure providers will result in such lack of choice for SOA infrastructure solutions that end users will flock to open source alternatives. As a result, 2010 will be the year that open source SOA infrastructure finally gains enough adoption that it will be on the short list for most large SOA implementations. We’ll see (finally) a robust open source SOA registry/repository offering, SOA management solutions, SOA governance offerings, and SOA infrastructure solutions that rival commercial ones in terms of performance, reliability, and support.


  • The Rich Internet Application (RIA) market wars are over – Put a fork in it, it’s done. Good try, Microsoft Silverlight. Nice effort, RIA startups and commercial vendors. Customers have spoken. Adobe Flash and open source Ajax solutions based on Javascript have won. Yes, there will be niches and industries where Silverlight and other commercial solutions might be appropriate and gain traction, but we see way too much (awesome quality) open source jQuery (and Prototype) solutions out there and too much adoption of Flash by the end user base for this trend to go away. And Java on the client? Feggetaboutit – that time has come and gone. As a result, this will be the end of ZapThink’s coverage of the space. Just as we declared the Native XML Database market done in 2002, so too we declare this market contest over.


  • Cloud privacy & security issues put to rest – Already we’re seeing people anguishing about Cloud’s unreliability, insecurity, and lack of privacy. Really? You think people didn’t realize this when they made their Cloud investments in the first place? There’s simply too much economic benefit in running services and applications in a dynamically scalable way on someone else’s infrastructure. The Cloud providers won’t be giving up any time soon. Nor will IT implementers. This means that there will be a credible solution to these problems, and it will become well understood and implemented by year’s end. If you’re looking for a company to start in 2010 that will have a huge, ready customer base and potential for multi-million dollar valuations with an exit in 18-24 months, then this is the place to look. Start a cloud privacy/reliability/security company that addresses current pain points and you’ll win. We’ll just take 5 percent for the suggestion, thanks.

But all these 2010 predictions are still too easy. Since we’ve been around for the past decade, perhaps we should make some predictions about the decade ahead? Where will IT, SOA, and EA be in 2020? Somewhat ironically, we believe that not much will really change in the enterprise software landscape. If you were to fall asleep today in Rip Van Winkle fashion and wake up on January 1, 2020, you’d find that:
  • Mainframes will still exist — Look folks, if they haven’t been subsumed by all the movements of the past 30 years, they won’t be gone in another 10. Mainframes and legacy systems are here to stay. Invest in mainframe-related stocks.


  • We’ll still be talking about Enterprise Architecture – One of the biggest lessons of the past 10 years is that the business still doesn’t understand or value enterprise architecture. CIOs are still, for the most part, business managers who treat IT as a cost center or as a resource they manage on a project-by-project and acquisition-by-acquisition basis. Long-term planning? Put enterprise architects in control of IT strategy? Forget it. In much the same way that the most knowledgeable machinists and assembly line experts would never get into management positions at the automakers, so too will we fail to see EA grab its rightful reins in the enterprise. We’ll still be talking about how necessary, under-implemented, and misunderstood EA is in 2020. You’ll see the same speakers, trainers, and consultants, but with a bit more grey on top (if they don’t already have it now).


  • More things in IT environments we don’t control – IT is in for long-term downward spending pressure. The technologies and methodologies that are emerging now (cloud, mobile, agile, iterative, service-oriented) are only pushing more aspects of IT outside the internal environment and into environments that businesses don’t control. Soon, your most private information will be spread across hundreds of servers and databases around the world that you can’t control and have no visibility into. You can’t fight this battle. Private clouds? Baloney. That’s like trying to stop tectonic shift. The future of IT is outside the enterprise. Deal with it.


  • IT vendors will still be selling 10 years from now what they’ve built (or have) today – There is nothing to indicate that the patterns of vendor marketing and IT purchasing have changed in the past 10 years, or that they will change at all in the next 10. Vendors will still peddle the same warmed-over wares as new tech for the next 10 years. And even worse, end users will buy them. IT procurement remains a short-sighted, tactically project-focused affair aimed at solving yesterday’s problems. It would require a huge shift in purchasing and marketing behavior to change this, and I regret that I don’t see that happening by 2020.

The ZapThink take


Some of the above predictions may seem gloomy. Perhaps the current recessionary environment is putting a haze on the positive visions of our crystal ball. More likely, however, the enterprise IT industry is simply in a long-term consolidation phase.

IT is a relatively new innovation for the business, having been part of the lexicon and budgets of enterprises for 60 years at most. Just as the auto industry went through a rapid period of expansion and innovation from the beginning of the past century through the 1960s, only to be followed by consolidation and a slowdown in innovation, so too will we see the same happen with enterprise IT.

In fact, it’s already begun. Five vendors control over 70 percent of all enterprise IT software and hardware expenditures. Enterprise end users will necessarily follow their lead as they do less of their own IT development and innovation in-house.


Now, this doesn’t apply to IT as a whole – we see remarkable advancement and development in IT outside the enterprise. As we’ve discussed many times before, there is a digital divide between the IT environment inside the enterprise and the environment we experience when we’re at home or using consumer-oriented websites, devices, and applications.

We expect that digital divide to continue to widen and, perhaps within the next 10 years, reach a point where enterprise IT investment stagnates. Instead, the business will come to depend on outside providers for its technology needs. Wherever that goes, ZapThink has been there for the past 10 years, and we expect to be here for another 10. In what shape and form we will be, that is for you, our customers and readers, to determine.

This guest post comes courtesy of Ronald Schmelzer, senior analyst at ZapThink.


SPECIAL PARTNER OFFER

SOA and EA Training, Certification,
and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.

Monday, December 21, 2009

HP's Cloud Assure for Cost Control allows elastic capacity planning to better manage cloud-based services

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Read a full transcript, or download a copy. Sponsor: Hewlett-Packard.

Today's podcast discussion focuses on the economic benefits of cloud computing -- of how to use cloud-computing models and methods to control IT cost by better supporting application workloads.

As we've been looking at cloud computing over the past several years, a long transition is under way, one of moving from traditional IT and architectural methods to this notion of cloud -- be it private cloud, at a third-party location, or through some combination of the above.

Traditional capacity planning is not enough in these newer cloud-computing environments. Elasticity planning is what’s needed. It’s a natural evolution of capacity planning, but it’s in the cloud.

Therefore, traditional capacity planning needs to be reexamined. So now we'll look at how to best right-size cloud-based applications, while matching service delivery resources and demands intelligently, repeatedly, and dynamically. The movement to a pay-per-use model also goes a long way toward promoting such matched resources and demand, and it reduces wasteful application practices.

We'll also examine how quality control for these cloud applications in development reduces the total cost of supporting applications, while allowing for tuning and an appropriate way of managing applications in an operational cloud scenario.

Here to help unpack how Cloud Assure services can take the mystique out of cloud computing economics and to lay the foundation for cost control through proper cloud capacity management methods, we're joined by Neil Ashizawa, manager of HP's Software-as-a-Service (SaaS) Products and Cloud Solutions. The discussion is moderated by me, BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Ashizawa: Old-fashioned capacity planning focused on the peak usage of the application, and it had to, because when you were deploying applications in-house, you had to take that peak usage case into consideration. At the end of the day, you had to be provisioned correctly with respect to compute power. Oftentimes, with long procurement cycles, you had to plan for that.

In the cloud, because you have this idea of elasticity, where you can scale up your compute resources when you need them, and scale them back down, obviously that adds another dimension to old-school capacity planning.

The new way to look at it within the cloud is elasticity planning. You have to factor in not only your peak usage case, but your moderate usage case and your low-level usage as well. At the end of the day, if you are going to get the biggest benefit from the cloud, you need to understand how you're going to be provisioned during the various demands of your application.

If you were to take, for instance, the old-school capacity-planning ideology to the cloud, you would provision for your peak use case. You would scale up your elasticity in the cloud and just keep it there.

But if you do it that way, then you're negating one of the big benefits of the cloud. That's this idea of elasticity, and paying for only what you need at that moment.

One of the main reasons people consider sourcing to the cloud is this elastic capability to spin up compute resources when usage is high and scale them back down when usage is low. You don’t want to negate that benefit of the cloud by keeping your resource footprint at its highest level.
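
To make the arithmetic behind elasticity planning concrete, here is a minimal sketch in Java. The usage tiers, instance counts, and hourly rate are made-up figures rather than any provider's actual pricing; the sketch simply contrasts provisioning for the peak use case around the clock with scaling to each demand tier.

// A minimal, hypothetical sketch of the elasticity-planning arithmetic discussed above.
// The instance counts, hours, and hourly rate are assumptions for illustration only.
public class ElasticityPlanningSketch {

    public static void main(String[] args) {
        double hourlyRatePerInstance = 0.50; // assumed cost per instance-hour

        // Assumed monthly demand profile: hours spent at each usage tier
        // and the instance count each tier needs to carry its load.
        int[] hoursPerTier     = {60, 300, 360};  // peak, moderate, low (720 hours/month)
        int[] instancesPerTier = {20, 8, 2};

        // Old-school approach: provision for peak and keep it there all month.
        int totalHours = 0;
        for (int h : hoursPerTier) {
            totalHours += h;
        }
        double fixedCost = instancesPerTier[0] * totalHours * hourlyRatePerInstance;

        // Elasticity planning: pay only for what each tier actually needs.
        double elasticCost = 0.0;
        for (int i = 0; i < hoursPerTier.length; i++) {
            elasticCost += instancesPerTier[i] * hoursPerTier[i] * hourlyRatePerInstance;
        }

        System.out.printf("Provisioned-for-peak cost: $%.2f%n", fixedCost);
        System.out.printf("Elastic cost:              $%.2f%n", elasticCost);
        System.out.printf("Benefit negated if left at peak: $%.2f%n", fixedCost - elasticCost);
    }
}

The gap between the two figures is exactly the benefit that gets negated when the resource footprint is left at its highest level.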
[Editor's Note: On Dec. 16, HP announced three new offerings designed to enable cloud providers and enterprises to securely lower barriers to adoption and accelerate the time-to-benefit of cloud-delivered services.

This same week, Dana Gardner also interviewed HP's Robin Purohit, Vice President and General Manager for HP Software and Solutions, on how CIOs can contain IT costs while spurring innovation payoffs such as cloud architectures.

Also, HP announced, back in the spring of 2009, a Cloud Assure package that focused on security, availability, and performance.]
Making the road smoother

Ashizawa: What we're now bringing to the market works in all three cases [of cloud capacity planning]. Whether you're a private internal cloud, doing a hybrid model between private and public, or sourcing completely to a public cloud, it will work in all three situations.

The new enhancement that we're announcing now is assurance for cost control in the cloud. Oftentimes enterprises do make that step to the cloud, and a big reason is that they want to reap the benefits of the cost promise of the cloud, which is to lower cost. The thing here, though, is that you might fall into a situation where you negate that benefit.

If you deploy an application in the cloud and you find that it’s underperforming, the natural reaction is to spin up more compute resources. It’s a very good reaction, because one of the benefits of the cloud is this ability to spin up or spin down resources very fast. So no more procurement cycles, just do it and in minutes you have more compute resources.

The situation you may find yourself in, though, is that you have spun up more resources to try to improve performance, but performance doesn't improve. I'll give you a couple of examples.

If your application is experiencing performance problems because of inefficient Java methods, for example, or slow SQL statements, then more compute resources aren't going to make your application run faster. But, because the cloud allows you to do so very easily, your natural instinct may be to spin up more compute resources to make your application run faster.

When you do that, you find yourself in a situation where your application is no longer right-sized in the cloud, because you have over-provisioned your compute resources. You're paying for more compute resources and you're not getting any return on your investment. When you start paying for more resources without a return on your investment, you start to disrupt the whole cost benefit of the cloud.

Applications need to be tuned so that they are right-sized. Once they are tuned and right-sized, then, when you spin up resources, you know you're getting return on your investment, and it’s the right thing to do.
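
As a hypothetical illustration of the kind of code-level inefficiency Ashizawa is describing, and one that extra compute will not fix, consider a Java method that assembles a large report by repeated string concatenation. The example below is ours, not taken from HP Diagnostics; it simply shows that the fix lies in the code, not in the instance count.

import java.util.List;

// Hypothetical example of a code-level bottleneck that more compute resources won't fix.
// Repeated String concatenation copies the whole buffer on every append, so the slow
// method is quadratic in the number of lines; the tuned version using StringBuilder is linear.
public class ReportBuilder {

    // Inefficient: O(n^2) character copies for n lines.
    static String buildReportSlow(List<String> lines) {
        String report = "";
        for (String line : lines) {
            report += line + "\n"; // allocates and copies a new String each iteration
        }
        return report;
    }

    // Tuned: a single growing buffer, O(n) overall.
    static String buildReportFast(List<String> lines) {
        StringBuilder report = new StringBuilder();
        for (String line : lines) {
            report.append(line).append('\n');
        }
        return report.toString();
    }

    public static void main(String[] args) {
        List<String> lines = java.util.Collections.nCopies(20_000, "order line item");

        long start = System.nanoTime();
        buildReportSlow(lines);
        System.out.printf("Slow version:  %d ms%n", (System.nanoTime() - start) / 1_000_000);

        start = System.nanoTime();
        buildReportFast(lines);
        System.out.printf("Tuned version: %d ms%n", (System.nanoTime() - start) / 1_000_000);
    }
}

Only once the method is tuned does spinning up additional capacity buy real throughput instead of wasted spend.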

Whether you have existing applications that you are migrating to the cloud, or new applications that you are deploying in the cloud, Cloud Assure for cost control will work in both instances.

The Cloud Assure for cost control solution comprises both HP software and HP services provided by HP SaaS. The software itself consists of three products that make up the overall solution.

The first one is our industry-leading Performance Center software, which allows you to drive load in an elastic manner. You can scale the load up to very high demand and back down to very low demand, and this is where you get your elasticity planning framework.

Moderate and peak usage

Ashizawa: The second piece, from a software perspective, is HP SiteScope, which allows you to monitor the resource consumption of your application in the cloud. Therefore, you understand when compute resources are spiking or when you have more capacity to drive even more load.

The third software portion is HP Diagnostics, which allows you to measure the performance of your code. You can measure how your methods are performing, how your SQL statements are performing, and if you have memory leakage.

When you have this visibility of end-user measurements at various load levels with Performance Center, resource consumption with SiteScope, and code-level performance with HP Diagnostics, and you integrate them all into one console, you allow yourself to do true elasticity planning. You can tune your application and right-size it. Once you've right-sized it, you know that when you scale up your resources you're getting a return on your investment.
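
The decision logic that falls out of correlating those three views can be sketched generically. The class below is not HP Performance Center, SiteScope, or Diagnostics code; the data shape and thresholds are assumptions, included only to show how load level, resource consumption, and response time together indicate whether to scale up or to tune first.

// A generic, hypothetical sketch (not an HP API) of how correlated measurements at one
// load level might drive a scale-versus-tune decision during elasticity planning.
public class ElasticityObservation {

    final String loadLevel;       // e.g. "low", "moderate", "peak"
    final double avgResponseMs;   // end-user response time at this load level
    final double cpuUtilization;  // 0.0 - 1.0, resource consumption at this load level

    ElasticityObservation(String loadLevel, double avgResponseMs, double cpuUtilization) {
        this.loadLevel = loadLevel;
        this.avgResponseMs = avgResponseMs;
        this.cpuUtilization = cpuUtilization;
    }

    // Assumed thresholds, for illustration only.
    static final double SLOW_RESPONSE_MS = 2000.0;
    static final double SATURATED_CPU = 0.80;

    String recommendation() {
        boolean slow = avgResponseMs > SLOW_RESPONSE_MS;
        boolean saturated = cpuUtilization > SATURATED_CPU;
        if (slow && saturated) {
            return "Scale up: the application appears right-sized but starved for compute.";
        }
        if (slow) {
            return "Tune first: slow code or SQL; more instances would be wasted spend.";
        }
        if (saturated) {
            return "Watch closely: little headroom left before the next demand tier.";
        }
        return "Right-sized at this load level.";
    }

    public static void main(String[] args) {
        ElasticityObservation[] observations = {
            new ElasticityObservation("moderate", 2500, 0.35),
            new ElasticityObservation("peak", 3200, 0.92),
        };
        for (ElasticityObservation o : observations) {
            System.out.println(o.loadLevel + ": " + o.recommendation());
        }
    }
}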

You want to get a grasp of the variable-cost nature of the cloud, and you want to make this variable cost very predictable. Once it's predictable, there will be no surprises. You can budget for it, and you can also ensure that you're getting the right performance at the right price. ... If you're thinking about sourcing to the cloud and adopting it, from a strategic standpoint it would do you good to do your elasticity planning before you go into production or go live.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Read a full transcript, or download a copy. Sponsor: Hewlett-Packard.

You may also be interested in: