Monday, January 11, 2010

The march of Progress Software: Savvion provides latest entry in BPM consolidation parade

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

Is it more than coincidence that IT acquisitions tend to come in waves? Just weeks after IBM's announcement that it would snap up Lombardi, Progress Software today responds with an agreement to put Savvion out of its misery. In such a small space undergoing active consolidation, it is hard not to know who’s in play.

Nonetheless, Progress’s acquisition confirms that Business Process Management (BPM)’s pure-play days are numbered, at least if you expect executable BPM.

The traditional appeal of BPM was that it was a business stakeholder-friendly approach to developing solutions that didn’t rely on IT programmatic logic. The mythology around BPM pure-plays was that these were business user-, not IT-, driven software buys. [Disclosure: Progress Software is a sponsor of BriefingsDirect podcasts.]

In actuality, they simply used a different language or notation: process models with organizational and workflow-oriented semantics as opposed to programmatic execution language. That stood up only as long as you used BPM to model your processes, not automate them.

Consequently, it is not simply the usual issues of vendor size and viability that are driving IT infrastructure and stack vendors to buy up BPM pure plays. It is that, but more importantly, if you want your BPM tool to become more than documentware or shelfware, you need a solution with a real runtime.

And that means you need IT front and center, and the stack people right behind it. Even with the emergence of BPMN 2.0, which adds support for executables, the cold hard fact is that anytime anything executes in software, IT must be front and center. So much for bypassing IT.

Progress’s $49 million deal, which closes right away, is a great exit strategy for Savvion. The company, although profitable, has grown very slowly over its 15 years. Even assuming the offer was at a 1.5x multiple, Savvion’s low eight-figure revenue base is not exactly something that a large global enterprise could gain confidence in.

Savvion was in a challenging segment: a tiny player contending for enterprise, not departmental, BPM engagements. If you are a large enterprise, would you stake your enterprise BPM strategy on a slow-growing player whose revenues are barely north of $10 million? It wasn’t a question of whether, but when, Savvion would be acquired.

[Editor's note: Savvion is bringing a new geographical footprint to Progress. Savvion is well positioned in India, where Progress is eager to tread. And Progress is prominent in Europe, where the Savvion-broadened portfolio will sell better than Savvion could alone. ... Dana Gardner.]

Questions remain

Of course, that leads us to the question of why Progress couldn’t get its hands on Savvion in time to profit from Savvion’s year-end deals. It certainly would have been more accretive to Progress’s bottom line had the deal closed three months ago (long enough not to disrupt the end-of-year sales pipeline).

Nonetheless, Savvion adds a key missing piece for Progress’s Apama events processing strategy (you can read Progress/Apama CTO John Bates’s rationale here). There is a symbiotic relationship between event processing and business process execution; you can have events trigger business processes or vice versa.

There is some alignment with the vertical industry templates that both have been developing, especially for financial services and telcos, which are the core bastions (along with logistics) for EP. And with the Sonic service bus, Progress has a pipeline for ferrying events.

[Editor's note: The combination of Savvion BPM and Progress CEP helps bring the vision of operational responsiveness, Progress's value theme of late, out from the developer and IT engineer purview (where it will be even stronger) and far closer to the actual business-outcomes movers and shakers.

The Savvion buy also nudges Progress closer to an integrated business intelligence (BI) capability. It will be curious to see whether Progress builds, buys, or partners its way to more BI capabilities in its fast-expanding value mix. ... Dana Gardner.]

In the long run, there could also be a legacy renewal play in using the Savvion technology to expose functionality for Progress OpenEdge or DataDirect customers, but wisely, that is now a back-burner item for Progress, which is not the size of IBM or Oracle and therefore needs to focus its resources.

Although Progress does not call itself a stack player, it is evolving de facto stacks in capital markets, telcos, and logistics.

Event processing, a.k.a. Complex Event Processing (CEP, a forbidding label) or Business Events Processing (a friendlier label that actually doesn't mean much), is still an early-adopter market. In essence, this market fills a niche where events are not human-detectable and require some form of logic to identify and then act upon.
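That niche, logic catching patterns no human could spot in a raw event stream, can be sketched in a few lines. This is an illustrative toy, not any vendor's engine; the five-event window and 3 percent threshold are invented for the demo.

```python
from collections import deque

def spike_alerts(prices, window=5, threshold=0.03):
    """Flag any price more than `threshold` above the trailing-window mean.

    A human watching the ticker would never catch this in real time;
    an event-processing engine evaluates the rule on every arriving event.
    """
    recent = deque(maxlen=window)  # sliding window of recent prices
    alerts = []
    for i, price in enumerate(prices):
        if len(recent) == window:
            mean = sum(recent) / window
            if price > mean * (1 + threshold):
                alerts.append((i, price))
        recent.append(price)
    return alerts

# A sudden 4% jump over a flat baseline is flagged; a gradual drift is not.
print(spike_alerts([100, 100, 100, 100, 100, 104, 100]))  # → [(5, 104)]
```

A production engine would evaluate many such rules concurrently over live feeds, but the shape of the logic is the same: stateful windows plus a condition that fires an action.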

The market itself is not new; capital markets have developed homegrown event processing algorithms for years. What’s new (as in, what’s new in the last decade) is that this market has started to become productized. More recently, SQL-based approaches have emerged to spread high-end event processing to a larger audience.

Acquiring Savvion ups the stakes with TIBCO Software, which also has a similar pairing of technologies in its portfolio. [Disclosure: TIBCO is a sponsor of BriefingsDirect podcasts.]

Given ongoing consolidation, that leaves Active Endpoints, Pegasystems, Appian, plus several open source niche pure plays still standing. [Disclosure: Active Endpoints is a sponsor of BriefingsDirect podcasts.]

Like Savvion, Pega is also an enterprise company, but it is a public company with roughly 10x Savvion's revenues that has still managed to grow in the 25 percent range in spite of the recession. While in one way it might make a good fit with SAP (both have their own entrenched, proprietary languages), Pega is stubbornly independent, and SAP is acquisition-averse.

Pega might be a fit with one of the emerging platform stack players like EMC or Cisco. On second thought, the latter would be a much more logical target for web-based Appian or fast-growing Active Endpoints, still venture-funded, but also a promising growth player that at some point will get swept up.

[Editor's note: Tony's right. Once these IT acquisition waves begin, they tend to wash over the whole industry as most buyers and sellers match up. As with EAI middleware, BI, and data cleansing technologies before, BPM is the current belle of the ball. ... Dana Gardner.]

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

Tuesday, January 5, 2010

Architectural shift joins app logic with massive data sets to take advanced BI analytics to real-time performance heights

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Aster Data Systems.

New architectures for data and logic processing are ushering in a game-changing era of advanced analytics.

These new approaches support massive data sets to produce powerful insights and analysis -- yet with unprecedented price-performance. As we enter 2010, enterprises are including more forms of diverse data into their business intelligence (BI) activities. They're also diversifying the types of analysis that they expect from these investments.

At the same time, more kinds and sizes of companies and government agencies are seeking to deliver ever more data-driven analysis for their employees, partners, users, and citizens. It boils down to giving more communities of participants what they need to excel at whatever they're doing. By putting analytics into the hands of more decision makers, huge productivity wins across entire economies become far more likely.

But such improvements won’t happen if the data can't effectively reach the application's logic, if the systems can't handle the massive processing scale involved, or if the total costs and complexity are too high.

In this sponsored podcast discussion we examine how convergence of data and logic, of parallelism and MapReduce -- and of a hunger for precise analysis with a flood of raw new data -- are all setting the stage for powerful advanced analytics outcomes.

To help learn how to attain advanced analytics and to uncover the benefits from these new architectural activities for ubiquitous BI, we're joined by Jim Kobielus, senior analyst at Forrester Research, and Sharmila Mulligan, executive vice president of marketing at Aster Data Systems. The discussion is moderated by BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Kobielus: Advanced analytics is focused on how to answer questions about the future. It's what's likely to happen -- forecast, trend, what-if analysis -- as well as what I like to call the deep present, really current streams for complex event processing.

What's streaming in now? And how can you analyze the great gushing streams of information that are emanating from all your applications, your workflows, and from social networks?

Advanced analytics is all about answering future-oriented, proactive, or predictive questions, as well as current streaming, real-time questions about what's going on now. Advanced analytics leverages the same core features that you find in basic analytics -- all the reports, visualizations, and dashboarding -- but then takes it several steps further.

... What Forrester is seeing is that, although the average data warehouse today is in the 1-10 terabyte range for most companies, we foresee the average warehouse size going, in the middle of the coming decade, into the hundreds of terabytes.

In 10 years or so, we think it's possible, and increasingly likely, that petabyte-scale data warehouses or content warehouses will become common. It's all about unstructured information, deep history, and historical information. A lot of trends are pushing enterprises in the direction of big data.

... We need to rethink the platforms with which we're doing analytical processing. Data mining is traditionally thought of as being the core of advanced analytics. Generally, you pull data from various sources into an analytical data mart.

That analytical data mart is usually on a database that's specific to a given predictive modeling project, let's say a customer analytics project. It may be a very fast server with a lot of compute power for a single server, but quite often what we call the analytical data mart is not the highest performance database you have in your company. Usually, that high performance database is your data warehouse.

As you build larger and more complex predictive models you quickly run into resource constraints on your existing data-mining platform. So you have to look for where you can find the CPU power, the data storage, and the I/O bandwidth to scale up your predictive modeling efforts.

... But, [there is] another challenge, which is advanced analytics producing predictive models. Those predictive models increasingly are deployed in-line to transactional applications to provide some basic logic and rules that will drive such important functions as "next best offer" being made to customers based on a broad variety of historical and current information.

How do you inject predictive logic into your transactional applications in a fairly seamless way? You have to think through that, because, right now, quite often analytical data models, predictive models, in many ways are not built for optimal embedding within your transactional applications. You have to think through how to converge all these analytical models with the transactional logic that drives your business.
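To make the embedding question concrete, here is a minimal sketch of predictive logic deployed in-line to a transactional "next best offer" decision. The model form, the coefficients, and the feature names are all hypothetical; in a real deployment they would be exported from an offline modeling tool rather than hand-written.

```python
import math

# Hypothetical coefficients, assumed to come from an offline predictive model.
WEIGHTS = {"recency_days": -0.04, "total_spend": 0.002, "support_tickets": -0.3}
BIAS = -1.0

def score(customer):
    """Logistic score: estimated probability the customer accepts an upsell."""
    z = BIAS + sum(WEIGHTS[k] * customer[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def next_best_offer(customer):
    """Predictive logic embedded directly in the transactional path."""
    p = score(customer)
    if p > 0.5:
        return "premium-upsell"
    elif p > 0.2:
        return "discount-coupon"
    return "no-offer"

# A recent, high-spend customer scores high; a lapsed one scores low.
print(next_best_offer({"recency_days": 10, "total_spend": 2000,
                       "support_tickets": 0}))   # → premium-upsell
```

The hard part Kobielus describes is not the scoring function itself but keeping it synchronized with the offline models, which is exactly why models built purely for analysts' tools embed poorly.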

New data platform

Mulligan: What we see with customers is that the advanced analytics needs and the new generation of analytics that they are trying to do is driving the need for a new data platform.

What you've got is a situation where enterprises want to be able to do more scalable reporting on massive data sets with very, very fast response times. On the reporting side, the end result to the customer is similar to the type of report they produced before, but the difference is that the quantity of data they're trying to get at, and the amount of data these reports draw on, is far greater than what they had before.

That's what's driving the need for a new platform underneath some of the preexisting BI tools, which are, in themselves, good at reporting; what the BI tools need is a data platform beneath them that allows more scalable reporting than was possible before.

... Previously, the choice of a data management platform was based primarily on price-performance, being able to effectively store lots of data, and get very good performance out of those systems. What we're seeing right now is that, although price performance continues to be a critical factor, it's not necessarily the only factor or the primary thing driving their need for a new platform.

What's driving the need now, and one of the most important criteria in the selection process, is the ability of this new platform to be able to support very advanced analytics.

Customers are very precise in terms of the type of analytics that they want to do. So, it's not that a vendor needs to tell them what they are missing. They are very clear on the type of data analysis they want to do, the granularity of data analysis, the volume of data that they want to be able to analyze, and the speed that they expect when they analyze that data.

They are very clear on what their requirements are, and those requirements are coming from the top. Those new requirements, as it relates to data analysis and advanced analytics, are driving the selection process for a new data management platform.

There is a big shift in the market, where customers have realized that their preexisting platforms are not necessarily suitable for the new generation of analytics that they're trying to do.

We see the push toward analysis that's really more near real-time than what they were able to do before. This is not a trivial thing to do when it comes to very large data sets, because what you're asking for is the ability to get very, very quick response times and incredibly high performance on terabytes and terabytes of data to be able to get these kinds of results in real-time.

Social network analysis

Kobielus: Let's look at what's going to be a true game changer, not just for business, but for the global society. It's a thing called social network analysis.

It's predictive models, fundamentally, but it's predictive models that are applied to analyzing the behaviors of networks of people on the web, on the Internet, Facebook, and Twitter, in your company, and in various social network groupings, to determine classification and clustering of people around common affinities, buying patterns, interests, and so forth.

As social networks weave their way into not just our consumer lives, but our work lives, our life lives, social network analysis -- leveraging all the core advanced analytics of data mining and text analytics -- will take the place of the focus group.

You're going to listen to all their tweets and their Facebook updates and you're going to look at their interactions online through your portal and your call center. Then, you're going to take all that huge stream of event information -- we're talking about complex event processing (CEP) -- you're going to bring it into your data warehousing grid or cloud.

You're also going to bring historical information on those customers and their needs. You're going to apply various social network behavioral analytics models to it to cluster people into the categories that make us all kind of squirm when we hear them, things like yuppie and Generation X and so forth.
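The clustering step behind that segmentation is standard data mining. As a toy sketch, a hand-rolled k-means over two invented behavioral features (say, purchase frequency and average spend, both normalized) shows the mechanics of grouping customers by affinity; a real system would use a library implementation and far more dimensions.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means: cluster feature vectors into k behavioral segments."""
    rnd = random.Random(seed)
    centers = rnd.sample(points, k)  # initialize from actual data points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Update step: move each center to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centers, clusters

# Two obvious behavioral segments: low-engagement vs. high-engagement customers.
points = [(1, 1), (1.2, 0.9), (0.9, 1.1), (8, 8), (8.2, 7.9), (7.9, 8.1)]
centers, clusters = kmeans(points, 2)
```

Swap the coordinates for affinity or spend features mined from social streams and the same loop yields the customer categories the marketers then act on.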

They can get a sense of how a product or service is being perceived in real-time, so that the provider of that product or service can then turn around and tweak that marketing campaign ...



Social network analysis becomes more powerful as you bring more history into it -- last year, two years, five years, 10 years worth of interactions -- to get a sense for how people will likely respond to new offers, bundles, packages, campaigns, and programs that are thrown at them through social networks.

If you can push not just the analytic models, but to some degree bring transactional applications, such as workflow, into this environment to be triggered by all of the data being developed or being sifted by these models, that is very powerful.

Mulligan: One of the biggest issues that the preexisting data pipeline faces is that the data lives in a repository that's removed from where the analytics take place. Today, with the existing solutions, you need to move terabytes and terabytes of data through the data pipeline to the analytics application, before you can do your analysis.

There's a fundamental issue here. You can't move boulders and boulders of data to an application. It's too slow, it's too cumbersome, and you're not factoring in all your fresh data in your analysis, because of the latency involved.

One of the biggest shifts is that we need to bring the analytics logic close to the data itself. Having it live in a completely different tier, separate from where the data lives, is problematic. This is not a price-performance issue in itself. It is a massive architectural shift that requires bringing analytics logic to the data itself, so that data is collocated with the analytics itself.

MapReduce plays a critical role in this. It is a very powerful technology for advanced analytics and it brings capabilities like parallelization to an application, which then allows for very high-performance scalability.

What we see in the market these days are terms like "in-database analytics," "applications inside data," and all this is really talking about the same thing. It's the notion of bringing analytics logic to the data itself.

... In the marriage of SQL with MapReduce, the real intent is to bring the power of MapReduce to the enterprise, so that SQL programmers can now use that technology. MapReduce alone does require some sophistication in terms of programming skills to be able to utilize it. You may typically find that skill set in Web 2.0 companies, but often you don’t find developers who can work with that in the enterprise.

What you do find in enterprise organizations is that there are people who are very proficient at SQL. By bringing SQL together with MapReduce what enterprise organizations have is the familiarity of SQL and the ease of using SQL, but with the power of MapReduce analytics underneath that. So, it’s really letting SQL programmers leverage skills they already have, but to be able to use MapReduce for analytics.
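Aster's SQL/MapReduce surface syntax is its own, but the division of labor it hides is the plain MapReduce pattern. As a language-neutral sketch with made-up clickstream data, a GROUP BY-style count of page views per user decomposes into a map phase that runs independently on every data partition, a shuffle that groups intermediate pairs by key, and a reduce phase that aggregates:

```python
from collections import defaultdict

# Clickstream rows: (user_id, url). In a real deployment each partition
# would live on a separate worker node and the map phase would run in parallel.
partitions = [
    [("alice", "/home"), ("bob", "/home"), ("alice", "/buy")],
    [("bob", "/home"), ("alice", "/home")],
]

def map_phase(rows):
    """Emit (key, 1) pairs -- runs independently on every partition."""
    return [(user, 1) for user, _ in rows]

def shuffle(mapped):
    """Group intermediate pairs by key."""
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Aggregate each key's values: page views per user."""
    return {key: sum(values) for key, values in groups.items()}

mapped = [pair for part in partitions for pair in map_phase(part)]
counts = reduce_phase(shuffle(mapped))
print(counts)  # → {'alice': 3, 'bob': 2}
```

The point of the SQL marriage is that the enterprise programmer writes something like `SELECT user, COUNT(*) ... GROUP BY user` and never sees these phases; the platform compiles the query into this parallel pattern.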

... One of the biggest requirements in order to be able to do very advanced analytics on terabyte- and petabyte-level data sets, is to bring the application logic to the data itself. Earlier, I described why you need to do this. You want to eliminate as much data movement as possible, and you want to be able to do this analysis in as near real-time as possible.

What we did in Aster Data 4.0 is just that. We're allowing companies to push their analytics applications inside of Aster’s MPP database, where now you can run your application logic next to the data itself, so they are both collocated in the same system. By doing so, you've eliminated all the data movement. What that gives you is very, very quick and efficient access to data, which is what's required in some of these advanced analytics application examples we talked about.

Pushing the code

What kind of applications can you push down into the system? It can be any app written in Java, C, C++, Perl, Python, or .NET. It could be an existing custom application that an organization has written and that needs to scale to work on much larger data sets. That code can be pushed down into the database.

It could be a new application that a customer is looking to write to do a level of analysis that they could not do before, like real-time fraud analytics, or very deep customer behavior analysis. If you're trying to deliver these new generations of advanced analytics apps, you would write that application in the programming language of your choice.

Kobielus: In this coming decade, we're going to see predictive logic deployed into all application environments, be they databases, clouds, distributed file systems, CEP environments, business process management (BPM) systems, and the like. Open frameworks will be used and developed under more of a service-oriented architecture (SOA) umbrella, to enable predictive logic that’s built in any tool to be deployed eventually into any production, transaction, or analytic environment.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Aster Data Systems.

You may also be interested in:

Sunday, January 3, 2010

Getting on with 2010 and celebrating ZapThink’s 10-year anniversary

This guest post comes courtesy of Ronald Schmelzer, senior analyst at ZapThink.

By Ronald Schmelzer

It’s hard to believe that ZapThink will be a full decade old in 2010. For those of you who don’t know, ZapThink was founded in October 2000 with a simple mission: record and communicate what was happening at the time with XML standards.

From that humble beginning, ZapThink has emerged as a (still small) advisory and education powerhouse focused on Service-Oriented Architecture (SOA), Cloud Computing, and loosely coupled forms of Enterprise Architecture (EA).

Oh, how things have changed, and how they have not. As is our custom, we’ll use this first ZapFlash of the year to look retrospectively at the past year and the upcoming future. But we’ll also wax a bit nostalgic and poetic as we look at the past 10 years and surmise where this industry might be heading in the next decade.

2009: A year of angst

The Times Square Alliance has it right in celebrating Good Riddance Day just prior to New Year’s Eve. There’s a lot that we can be thankful to put behind us. Anne Thomas Manes started out the year with an angst-filled posting declaring that SOA is Dead. Getting past the misleading headline, many in the industry came to the quick realization that SOA is far from dead, but rather going into a less hyped phase.

And for that reason we’re glad. We say good riddance to vendor hype, consulting firm over-selling, and the general proliferation of misunderstanding that plagued the industry from 2000 until this point (SOA is Web Services? SOA is integration middleware? Buy an ESB, get a SOA?). We can now declare that the vendor marketing infatuation with SOA is dead, and they have a new target in mind: Cloud Computing.

Last year we predicted that SOA would be pushed out of the daily marketing buzz and replaced with Cloud Computing as the latest infatuation of the marketingerati. Specifically, we said, “We expect the din of the cloud-related chatter to turn into a real roar by this time next year. Everything SOA-related will probably be turned into something cloud-related by all the big vendors, and companies will desperately try to turn their SOA initiatives into cloud initiatives.”

Oh, boy, were we right ... in spades. Perhaps this wasn’t the most remarkable of predictions, though. Every analyst firm, press writer, and book author was positively foaming at the mouth with Cloud-this and Cloud-that. Of course, if history is a lesson, 90% of what’s being spouted is EAI-cum-SOA-cum-Cloud marketing babble and intellectual nonsense.

But history also teaches us that people have short memories and won’t remember. They’ll continue to buy the same software and consulting services warmed over as new tech, with only a few enhancements, mostly to the user interface and system integration.

We also predicted a boom year for SOA education and training, which ended up panning out, for the most part. ZapThink now generates the vast majority of its revenues from SOA training and certification, which has become a multi-million dollar business for us, by itself.

ZapThink is not alone in realizing this boom of EA and SOA training spending. We’ve seen the rapid emergence of a wide range of EA frameworks, SOA methodologies, and disciplines benefiting from a rapid increase in EA and SOA training expenditures. We also predicted that ZapThink would double in size, which hasn’t exactly happened. Instead, we’ve decided to grow through use of partners and contractors – a much wiser move in an economy that has proven to be sluggish throughout 2009.

Yet, not all of our predictions panned out. We promised that there would be one notable failure and one notable success that would be universally and specifically attributed to SOA in 2009, and I can’t say that this has happened. If it did, we’d all know about it.

Rather, we saw the continued recession of SOA into the background as other, more highly hyped and visible initiatives got the thumbs-up of success or the mark of failure. In fact, perhaps this is how it should have been all along. Why should we all know with such grand visibility if it was SOA that succeeded or failed? Indeed, failure or success can rarely be solely attributable to any form of architecture. So, I think it’s possible to say that the prediction itself was misguided. Maybe we should instead have asked for raises for all those involved in SOA projects in 2009.

2010 and beyond: Where are things heading?

It’s easy to have 20/20 hindsight, however. It’s much more difficult to make predictions for the year ahead that aren’t just the obvious no-brainers that anyone who has been observing the market can make. Sure, we can assert that the vendors will continue to consolidate, IT spending will rebound with improving economic conditions, and that cloud computing will continue its inevitable movement through the hype cycle, but that wouldn’t be providing you with any information. Rather, we believe that we can stick our necks out a bit to make some predictions for 2010.

In 2010, we predict that:
  • Open Source SOA infrastructure will dominate – Lack of interest by venture capitalists and consolidation by the Big Five IT infrastructure providers will result in such a lack of choice among SOA infrastructure solutions that end users will flock to open source alternatives. As a result, 2010 will be the year that open source SOA infrastructure finally gains enough adoption that it will be on the short list for most large SOA implementations. We’ll see (finally) a robust open source SOA registry/repository offering, SOA management solutions, SOA governance offerings, and SOA infrastructure solutions that rival commercial ones in terms of performance, reliability, and support.


  • The Rich Internet Application (RIA) market wars are over – Put a fork in it, it’s done. Good try, Microsoft Silverlight. Nice effort, RIA startups and commercial vendors. Customers have spoken. Adobe Flash and open source Ajax solutions based on JavaScript have won. Yes, there will be niches and industries where Silverlight and other commercial solutions might be appropriate and gain traction, but we see way too many (awesome quality) open source jQuery (and Prototype) solutions out there and too much adoption of Flash by the end-user base for this trend to go away. And Java on the client? Feggetaboutit – that time has come and gone. As a result, this will be the end of ZapThink’s coverage of the space. Just as we declared the Native XML Database market done in 2002, so too we declare this market contest over.


  • Cloud privacy & security issues put to rest – Already we’re seeing people anguishing about Cloud’s unreliability, insecurity, and lack of privacy. Really? You think people didn’t realize this when they made their Cloud investments in the first place? There’s simply too much economic benefit in running services and applications in a dynamically scalable way on someone else’s infrastructure. The Cloud providers won’t be giving up any time soon. Nor will IT implementers. This means that there will be a credible solution to these problems, and it will become well understood and implemented by year’s end. If you’re looking for a company to start in 2010 that will have a huge, ready customer base and potential for multi-million dollar valuations with an exit in 18-24 months, then this is the place to look. Start a cloud privacy/reliability/security company that addresses current pain points and you’ll win. We’ll just take 5 percent for the suggestion, thanks.

But all these 2010 predictions are still too easy. Since we’ve been around for the past decade, perhaps we should make some predictions about the decade ahead? Where will IT and SOA and EA be in 2020? Most ironically, we believe that not much will really change in the enterprise software landscape. If you were to fall asleep in a Rip van Winklesque fashion today and wake up on January 1, 2020, you’d find that:
  • Mainframes will still exist — Look folks, if they haven’t been subsumed by all the movements of the past 30 years, they won’t be gone in another 10. Mainframes and legacy systems are here to stay. Invest in mainframe-related stocks.


  • We’ll still be talking about Enterprise Architecture – One of the biggest lessons of the past 10 years is that the business still doesn’t understand or value enterprise architecture. CIOs are still, for the most part, business managers who treat IT as a cost center or as a resource they manage on a project-by-project and acquisition-by-acquisition basis. Long-term planning? Put enterprise architects in control of IT strategy? Forget it. In much the same way that the most knowledgeable machinists and assembly-line experts would never get into management positions at the automakers, so too will we fail to see EA grab its rightful reins in the enterprise. We’ll still be talking about how necessary, under-implemented, and misunderstood EA is in 2020. You’ll see the same speakers, trainers, and consultants, but with a bit more grey on top (if they don’t already have it now).


    Soon, your most private information will be spread onto hundreds of servers and databases around the world that you can’t control and have no visibility over.



  • More things in IT environments we don’t control – IT is in for long-term downward spending pressure. The technologies and methodologies emerging now (Cloud, mobile, Agile, iterative, service-oriented) are only pushing more aspects of IT outside the internal environment and into environments that businesses don’t control. Soon, your most private information will be spread onto hundreds of servers and databases around the world that you can’t control and have no visibility over. You can’t fight this battle. Private clouds? Baloney. That’s like trying to stop tectonic shift. The future of IT is outside the enterprise. Deal with it.


  • IT vendors will still be selling 10 years from now what they’ve built (or have) today – There is nothing to indicate that the patterns of vendor marketing and IT purchasing have changed in the past 10 years or will change at all in the next 10 years. Vendors will still peddle their same warmed-over wares as new tech for the next 10 years. And even worse, end users will buy them. IT procurement is still a short-sighted, tactically project-focused, solving yesterday’s problems affair. It would require a huge shift in purchasing and marketing behavior to change this, and I regret that I don’t see that happening by 2020.

The ZapThink take


Some of the above predictions may seem gloomy. Perhaps the current recessionary environment is putting a haze on the positive visions of our crystal ball. More likely, however, is the fact that the enterprise IT industry is in a long-term consolidating phase.

IT is a relatively new innovation for the business, having been part of the lexicon and budgets of enterprises for 60 years at most. Just as the auto industry went through a rapid period of expansion and innovation from the beginning of the past century through the 1960s, only to be followed by consolidation and a slowing of innovation, so too will we see the same happen with enterprise IT.

In fact, it’s already begun. Five vendors control over 70 percent of all enterprise IT software and hardware expenditures. Enterprise end users will necessarily need to follow those vendors’ lead as they do less of their own IT development and innovation in-house.


Now, this doesn’t apply to IT as a whole – we see remarkable advancement and development in IT outside the enterprise. As we’ve discussed many times before, there is a digital divide between the IT environment inside the enterprise and the environment we experience when we’re at home or using consumer-oriented websites, devices, and applications.

We expect that digital divide to continue to widen and, perhaps within the next 10 years, reach a point where enterprise IT investment will stagnate. Instead, the business will come to depend on outside providers for its technology needs. Wherever that goes, ZapThink has been there for the past 10 years, and we expect to be here another 10. In what shape and what form we will be, that is for you, our customers and readers, to determine.

This guest post comes courtesy of Ronald Schmelzer, senior analyst at ZapThink.


SPECIAL PARTNER OFFER

SOA and EA Training, Certification,
and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.

Monday, December 21, 2009

HP's Cloud Assure for Cost Control allows elastic capacity planning to better manage cloud-based services

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Read a full transcript, or download a copy. Sponsor: Hewlett-Packard.

Today's podcast discussion focuses on the economic benefits of cloud computing -- of how to use cloud-computing models and methods to control IT cost by better supporting application workloads.

As we've been looking at cloud computing over the past several years, a long transition is under way: moving from traditional IT and architectural methods to this notion of cloud -- be it private cloud, at a third-party location, or through some combination of the above.

Traditional capacity planning is not enough in these newer cloud-computing environments. Elasticity planning is what’s needed. It’s a natural evolution of capacity planning, but it’s in the cloud.

Therefore, traditional capacity planning needs to be reexamined. So now we'll look at how best to right-size cloud-based applications, while matching service-delivery resources to demand intelligently, repeatedly, and dynamically. The movement to a pay-per-use model also goes a long way toward matching resources to demand, and reduces wasteful application practices.

We'll also examine how quality control for these cloud applications in development reduces the total cost of supporting applications, while allowing for a tuning and an appropriate way of managing applications in the operational cloud scenario.

Here to help unpack how Cloud Assure services can take the mystique out of cloud computing economics and to lay the foundation for cost control through proper cloud capacity management methods, we're joined by Neil Ashizawa, manager of HP's Software-as-a-Service (SaaS) Products and Cloud Solutions. The discussion is moderated by me, BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Ashizawa: Old-fashioned capacity planning focuses on the peak usage of the application, and it had to, because when you were deploying applications in-house, you had to take into consideration that peak usage case. At the end of the day, you had to be provisioned correctly with respect to compute power. Oftentimes with long procurement cycles, you'd have to plan for that.

In the cloud, because you have this idea of elasticity, where you can scale up your compute resources when you need them, and scale them back down, obviously that adds another dimension to old-school capacity planning.

The new way to look at it within the cloud is elasticity planning. You have to factor in not only your peak usage case, but your moderate usage case and your low-level usage as well. At the end of the day, if you're going to get the biggest benefit of the cloud, you need to understand how you're going to be provisioned during the various demands of your application.

If you were to take, for instance, the old-school capacity-planning ideology to the cloud, you would provision for your peak use-case. You would scale up your elasticity in the cloud and just keep it there.

But if you do it that way, then you're negating one of the big benefits of the cloud. That's this idea of elasticity, and paying for only what you need at that moment.

One of the main reasons people consider sourcing to the cloud is this elastic capability to spin up compute resources when usage is high and scale them back down when usage is low. You don’t want to negate that benefit of the cloud by keeping your resource footprint at its highest level.
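The economics Ashizawa describes can be sketched with a toy cost model. The hourly price, per-instance capacity, and demand profile below are all illustrative assumptions, not real cloud rates; the point is the gap between provisioning statically for the peak and scaling elastically with demand:

```python
# Compare static peak provisioning vs. elastic provisioning for one day.
# Prices and the demand profile are hypothetical, for illustration only.

PRICE_PER_INSTANCE_HOUR = 0.50   # assumed pay-per-use rate
REQS_PER_INSTANCE_HOUR = 1_000   # assumed capacity of one instance

# Hourly demand: quiet overnight, moderate daytime, a sharp evening peak.
demand = [200] * 8 + [3_000] * 10 + [9_000] * 3 + [500] * 3

def instances_needed(reqs):
    """Smallest instance count that covers the hourly demand (ceiling division)."""
    return -(-reqs // REQS_PER_INSTANCE_HOUR)

# Old-school approach: size for the peak hour and hold that footprint all day.
peak = max(instances_needed(r) for r in demand)
static_cost = peak * PRICE_PER_INSTANCE_HOUR * len(demand)

# Elastic approach: provision each hour for that hour's actual demand.
elastic_cost = sum(instances_needed(r) for r in demand) * PRICE_PER_INSTANCE_HOUR

print(f"static (peak-sized) cost: ${static_cost:.2f}/day")
print(f"elastic cost:             ${elastic_cost:.2f}/day")
```

Under this profile, sizing for the peak around the clock costs roughly three times what elastic provisioning does, which is exactly the benefit Ashizawa warns against negating.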
[Editor's Note: On Dec. 16, HP announced three new offerings designed to enable cloud providers and enterprises to securely lower barriers to adoption and accelerate the time-to-benefit of cloud-delivered services.

This same week, Dana Gardner also interviewed HP's Robin Purohit, Vice President and General Manager for HP Software and Solutions, on how CIOs can contain IT costs while spurring innovation payoffs such as cloud architectures.

Also, HP announced, back in the spring of 2009, a Cloud Assure package that focused on security, availability, and performance.]
Making the road smoother

Ashizawa: What we're now bringing to the market works in all three cases [of cloud capacity planning]. Whether you're a private internal cloud, doing a hybrid model between private and public, or sourcing completely to a public cloud, it will work in all three situations.

The new enhancement that we're announcing now is assurance for cost control in the cloud. Oftentimes enterprises do make that step to the cloud, and a big reason is that they want to reap the benefits of the cost promise of the cloud, which is to lower cost. The thing here, though, is that you might fall into a situation where you negate that benefit.

If you deploy an application in the cloud and you find that it’s underperforming, the natural reaction is to spin up more compute resources. It’s a very good reaction, because one of the benefits of the cloud is this ability to spin up or spin down resources very fast. So no more procurement cycles, just do it and in minutes you have more compute resources.

The situation, though, that you may find yourself in is that you may have spun up more resources to try to improve performance, but it might not improve performance. I'll give you a couple of examples.

If your application is experiencing performance problems because of inefficient Java methods, for example, or slow SQL statements, then more compute resources aren't going to make your application run faster. But, because the cloud allows you to do so very easily, your natural instinct may be to spin up more compute resources to make your application run faster.

When you do that, you find yourself in a situation where your application is no longer right-sized in the cloud, because you have over-provisioned your compute resources. You're paying for more compute resources, and you're not getting any return on your investment. When you start paying for more resources without return on your investment, you start to disrupt the whole cost benefit of the cloud.

Applications need to be tuned so that they are right-sized. Once they are tuned and right-sized, then, when you spin up resources, you know you're getting return on your investment, and it’s the right thing to do.
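Ashizawa's point about tuning before scaling can be illustrated with a toy response-time model. The numbers here are hypothetical: per-request time is split into a fixed code-bound component (say, an inefficient SQL statement) and a contention component that shrinks as instances are added. Scaling out only attacks the second term:

```python
SLOW_QUERY_S = 2.0   # hypothetical per-request cost of an inefficient query
CONTENTION_S = 1.2   # hypothetical queueing/contention delay at one instance

def response_time(instances, slow_query_s=SLOW_QUERY_S):
    """Toy model: fixed code time plus contention that divides across instances."""
    return slow_query_s + CONTENTION_S / instances

for n in (1, 2, 4, 8):
    print(f"{n} instance(s): {response_time(n):.2f}s")

# Tuning the query at a single instance beats any amount of scaling out:
print(f"tuned, 1 instance: {response_time(1, slow_query_s=0.2):.2f}s")
```

Going from one instance to eight only moves the toy response time from 3.20s to 2.15s, while tuning the query at a single instance gets it to 1.40s, at an eighth of the instance cost.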

Whether you have existing applications that you are migrating to the cloud, or new applications that you are deploying in the cloud, Cloud Assure for cost control will work in both instances.

The Cloud Assure for cost control solution comprises both HP software and HP services provided by HP SaaS. The software portion consists of three products that make up the overall solution.

The first one is our industry-leading Performance Center software, which allows you to drive load in an elastic manner. You can scale up the load to very high demands and scale back load to very low demand, and this is where you get your elasticity planning framework.

Moderate and peak usage

Ashizawa: The second solution from a software’s perspective is HP SiteScope, which allows you to monitor the resource consumption of your application in the cloud. Therefore, you understand when compute resources are spiking or when you have more capacity to drive even more load.

The third software portion is HP Diagnostics, which allows you to measure the performance of your code. You can measure how your methods are performing, how your SQL statements are performing, and if you have memory leakage.

When you have this visibility of end user measurement at various load levels with Performance Center, resource consumption with SiteScope, and code level performance with HP Diagnostics, and you integrate them all into one console, you allow yourself to do true elasticity planning. You can tune your application and right-size it. Once you've right-sized it, you know that when you scale up your resources you're getting return on your investment.
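As a sketch of what that single integrated console makes possible, the toy analysis below joins per-load-level measurements of the kind the three tools produce. All names, numbers, and thresholds are illustrative assumptions: load-test response times stand in for Performance Center data, CPU utilization for SiteScope, and slow-method timings for Diagnostics. An SLA breach with spare CPU points at tuning the code, while a breach with CPU pegged points at scaling out:

```python
SLA_S = 2.0           # hypothetical response-time objective, seconds
CPU_SATURATED = 0.85  # hypothetical utilization threshold

measurements = [
    # (virtual users, response time s, CPU utilization, slowest method s)
    (100,  0.8, 0.30, 0.2),
    (500,  1.4, 0.55, 0.2),
    (1000, 2.6, 0.60, 1.9),   # SLA breach with idle CPU: code-bound
    (2000, 3.9, 0.95, 0.2),   # SLA breach with CPU pegged: capacity-bound
]

def diagnose(resp_s, cpu):
    """Classify each load level from the combined metrics."""
    if resp_s <= SLA_S:
        return "ok"
    # Breaching the SLA with spare CPU implicates the code, not capacity.
    return "scale out" if cpu >= CPU_SATURATED else "tune code"

for users, resp_s, cpu, method_s in measurements:
    print(f"{users:>5} users: {diagnose(resp_s, cpu)}")
```

The value is in the join: any one data source alone would suggest spinning up more instances at every breach, which is the over-provisioning trap described above.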

You want to get a grasp of the variable-cost nature of the cloud, and you want to make this variable cost very predictable. Once it’s predictable, there will be no surprises. You can budget for it, and you can also ensure that you are getting the right performance at the right price. ... If you're thinking about sourcing to the cloud and adopting it, from a very strategic standpoint, it would do you good to do your elasticity planning before you go into production or go live.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Read a full transcript, or download a copy. Sponsor: Hewlett-Packard.


Friday, December 18, 2009

Careful advance planning averts costly snafus in data center migration projects

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript of the podcast, or download a copy. Learn more. Sponsor: Hewlett-Packard.

The crucial migration phase when moving or modernizing data centers can make or break the success of these complex undertakings. Much planning and expensive effort goes into building new data centers, or in conducting major improvements to existing ones. But too often there's short shrift in the actual "throwing of the switch" -- in the moving and migrating of existing applications and data.

But, as new data center transformations pick up -- due to the financial pressures to boost overall IT efficiency -- so too should the early-and-often planning and thoughtful execution of the migration itself get proper attention. This podcast examines the best practices, risk mitigation tools, and requirements for conducting data center migrations properly.

To help pave the way to making data center migrations come off effectively, we're joined by three thought leaders from Hewlett-Packard (HP): Peter Gilis, data center transformation architect for HP Technology Services; John Bennett, worldwide director, Data Center Transformation Solutions at HP; and Arnie McKinnis, worldwide product marketing manager for Data Center Modernization at HP Enterprise Services.

The discussion is moderated by BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Bennett: We see a great deal of activity in the marketplace right now of people designing and building new data centers. They have this wonderful new showcase site, and they have to move into it.

The reasons for this growth, the reasons for moving to other data centers, are fueled by a lot of different activities.

In many cases it's related to growth. The organization and the business have been growing. The current facilities were inadequate -- because of space or energy capacity reasons or because they were built 30 years ago -- and so the organization decides that it has to either build a new data center or perhaps make use of a hosted data center. As a result, they are going to have to move.

Whether they're moving to a data center they own, moving to a data center owned and managed by someone else, or outsourcing their data center to a vendor like HP, in all cases you have to physically move the assets of the data center from one location to another.

The impact of doing that well is awfully high. If you don't do it well, you're going to impact the services provided by IT to the business. You're very likely, if you don't do it well, to impact your service level agreements (SLAs). And, should you have something really terrible happen, you may very well put your own job at risk.

So, the objective here is not only to take advantage of the new facilities or the new hosted site, but also to do so in a way that ensures the right continuity of business services. That ensures that service levels continue to be met, so that the business, the government, or the organization continues to operate without disruption, while this takes place. You might think of it, as our colleagues in Enterprise Services have put it, as changing the engine in the aircraft while it's flying.

Gilis: If you don't do the planning, if you don't know where you're starting from and where you're going to, then it's like being on the ocean. Going in any direction will lead you anywhere, but it's probably not giving you the path to where you want to go. If you don't know where to go to, then don't start the journey.

Most of the migrations today are not migrations of the servers, the assets, but actually migrations of the data. You start building a next-generation data center -- most of the time with completely new assets that better fit what your company wants to achieve.

Migration is actually the last phase of a data center transformation. The first thing that you do is a discovery, making sure that you know all about the current environment, not only the servers, the storage, and the network, but the applications and how they interact. Based on that, you decide how the new data center should look.

... If you build your new engine, your new data center, and you have all the new equipment inside, the only thing that you need to do is migrate the data. There are a lot of techniques to migrate data online, or at least synchronize current data in the current data centers with the new data center.

Usually, what you find out is that you did not do a good enough job of assessing the current situation, whether that was the assessment of a hardware platform, server platform, or the assessment of a facility.



There's not that much difference between local storage, SAN storage, or network-attached storage (NAS) in what you design. The only thing that you design or architect today is that basically every server or every single machine, virtual or physical, gets connected to shared storage, and that shared storage should be replicated to a disaster recovery site.

That's basically the way you transfer the data from the current data centers to the new data centers, where you make sure that you build in disaster recovery capabilities from the moment you do the architecture of the new data center. ... The moment you switch off the computer in the first data center, you can immediately switch it on in the new data center.
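In outline, the synchronize-then-cut-over approach Gilis describes amounts to repeated passes of a content-aware copy: a first pass moves the bulk of the data while the old data center stays live, and a short final pass just before the switch moves only what has changed since. The sketch below is a simplified, hypothetical illustration (a real migration would use storage-level replication or dedicated tooling, not a script):

```python
import hashlib
import shutil
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def sync(source: Path, target: Path) -> int:
    """Copy files that are missing or differ at the target; return the count.

    Repeated passes converge, so the final pre-cutover pass only moves
    data that changed since the previous synchronization.
    """
    copied = 0
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dst = target / src.relative_to(source)
        if not dst.exists() or file_digest(dst) != file_digest(src):
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
            copied += 1
    return copied
```

Once a final pass over a frozen source copies only its last small delta, the machine in the old data center can be switched off and its counterpart in the new one switched on, which is the cutover Gilis describes.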

McKinnis: From an outsourcing perspective, companies don't always do 100 percent outsourcing of that data-center environment or that shared computing environment. It may be part of it. Part of it they keep in-house. Part of it they host with another service provider.

What becomes important is how to manage all the multiple moving parts and the multiple service providers that are going to be involved in that future mode of operation. It's accessing what we currently have, but it's also designing what that future mode needs to look like.

There are all sorts of decisions that go around that from a client perspective to get to that decision. In many cases, if you look at it from a technology standpoint, the point of decision is something around getting to an end of life on a platform or an application. Or, there is a new licensing cycle, either from a support standpoint or an operating system standpoint.

There is usually something that happens from a technology standpoint that says, "Hey look, we've got to make a big decision anyway. Do we want to invest going this way, that we have gone previously, or do we want to try a new direction?"

Once they make that decision, we look at outside providers. It can take anywhere from 12 to 18 months to go through the full cycle of working through all the proposals and all the due diligence to build that trust between the service provider and the client. Then, you get to the point, where you can actually make the decision of, "Yes, this is what we are going to do. This is the contract we are going to put in place." At that point, we start all the plans to get it done.

... There are times when deals just fall apart, sometimes in the middle, and they never even get to the contracting phase.



There are lots of moving parts, and these things are usually very large. That's why, even though outsourcing contracts have changed, they are still large, are still multi-year, and there are still lots of moving parts.

Bennett: The elements of trust come in, whether you're building a new data center or outsourcing, because people want to know that, after the event takes place, things will be better. "Better" can be defined as: a lot cheaper, better quality of service, and better meeting the needs of the organization.

This has to be addressed in the same way any other substantial effort is addressed -- in the personal relationships of the CIO and his or her senior staff with the other executives in the organization, and with a business case. You need measurement before and afterward in order to demonstrate success. Of course, good, if not flawless, execution of the data center strategy and transformation are in play here.

The ownership issue may be affected in other ways. In many organizations it's not unusual for individual business units to have ownership of individual assets in the data center. If modernization is at play in the data center strategy, there may be some hand-holding necessary to work with the business units in making that happen. This happens whether you are doing modernization and virtualization in the context of existing data centers or as part of a migration; either way, it's no different.

Be aware of where people view their ownership rights and make sure you are working hand-in-hand with them instead of stepping over them. It's not rocket science, but it can be very painful sometimes.

Gilis: You have small migrations and huge migrations. The best thing is to cut things into small projects that you can handle easily. As we say, "Cut the elephant into pieces, because otherwise you can't swallow it."

Should be a real partnership

And when you work with your client, it should be a real partnership. If you don't work together, you will never do a good migration, whether it's outsourcing or non-outsourcing. At the end, the new data center must receive all of the assets or all of the data -- and it must work.

If you do a lot of migrations, and that's actually what most of the service companies like HP are doing, we know how to do migrations and how to treat some of the applications migrated as part of a "migration factory."

We actually built something like a migration factory, where teams are doing the same over and over all the time. So, if we have to move Oracle, we know exactly how to do this. If we have to move SAP, we know exactly how to do this.

That's like building a car in a factory. It's the same thing day in and day out, every day. That's why customers are coming to service providers. Whether you go outsourcing or non-outsourcing, you should use a service provider that builds new data centers, transforms data centers, and does migrations of data centers nearly every day.

Most of the time, the people that know best how it used to work are the customers. If you don't work with and don't partner directly with the customer, then migration will be very, very difficult. Then, you'll hit the difficult parts that people know will fail, and if they don't inform you, you will have to solve the problem.

McKinnis: Cloud computing has put things back in people's heads around what can be put out there in that shared environment. I don't know that we've quite gotten through the process of whether it should be at a service provider location, my location, or within a very secure location at an outsourced environment.

Where to hold data

I don't think they've gotten to that at the enterprise level. But, they're not quite so convinced about giving users the ability to retain data and do that processing, have that application right there, held within that confinement of that laptop, or whatever it happens to be that they are interacting with. They're starting to see that it potentially should be held someplace else, so that the risk of that data isn't held at the local level.

Bennett: Adopting a cloud strategy for specific business services would let you take advantage of that, but in many of these environments today cloud isn't a practical solution yet for the broad diversity of business services they're providing.

We see that for many customers it's the move from dedicated islands of infrastructure, to a shared infrastructure model, a converged infrastructure, or an adaptive infrastructure. Those are significant steps forward with a great deal of value for them, even without getting all the way to cloud, but cloud is definitely on the horizon.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript of the podcast, or download a copy. Learn more. Sponsor: Hewlett-Packard.
