Thursday, April 8, 2010

Private cloud computing nudges enterprises closer to 'IT as a service', process orientation and converged infrastructure

So-called "private cloud computing" actually consists of many maturing technologies, a variety of architectural approaches, and a slew of IT methodologies, many of which have been in development for 20 years or more.

In many ways, the current popularity of cloud computing models marks an intersection of different elements of IT development and a convergence of infrastructure categories. That makes cloud interesting, relevant, and potentially dramatic in its impact. It also makes cloud complex, in terms of attaining the intended positive results.

Yet private cloud adoption -- which I believe is just as important as "public" cloud sourcing options -- may be challenging to implement successfully at a strategic level, or even at multiple tactical levels. Cloud concepts will most certainly enter into use in many different ways, and, perhaps, uniquely for each adopting organization. So the question is how private cloud adoption can be approached intelligently, flexibly, and with a far higher chance of positive and demonstrable business benefit.

I recently had a chance to discuss the anticipated impact of private cloud models and how enterprises are likely to implement them with two HP executives, Rebecca Lawson, director of Worldwide Cloud Marketing at HP, and Bob Meyer, worldwide virtualization lead in HP's Technology Solutions Group. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

HP also recently delivered a virtual conference on cloud computing. Our discussion came in the lead-up to that conference.

Here are some excerpts:
Rebecca Lawson: Cloud is a word that's been overused and overhyped and we all know it. One of the reasons it's been so popular is because it has a connotation that any kind of cloud service is one that you can access easily over the Internet, by yourself, self-service, and pay for what you use. That's the standard definition of a cloud service.

The ideas between private and public cloud are pretty similar. You want to be able to deliver and consume a service quickly over the Internet. How they're implemented, of course, is quite different. A typical enterprise IT organization has to support different types of applications and workloads, and in the public cloud, most of the providers are pretty specialized in their requirements.

There are lots of different ways of creating, buying, or utilizing different kinds of technology-enabled services. They might be hosted. They might be cloud services. They might be mainframe-based services. They might be homegrown applications. Step one, when you think about private cloud, is to think about, "What services do I need to deliver, how should I deliver them, and how can I make sure that my consumers can have easy access to them when they need them?"

Bob Meyer: Traditionally, what IT has done is delivered built-to-order services. Somebody from a line of business comes to you and says that they need this specific application. Or, somebody in the test environment says that they need a test bed. As the IT supplier internal to the company, it's your job to get together the storage, the server, the network, the apps, and the data. You do all the plumbing yourself and provide that for that specific service.

In the private cloud or public cloud conversation, you will use an IT provider who will likely be providing a mix of services from this point out -- built-to-order, private cloud, public cloud, and managed services.

The job is to decide what's best for your organization from that mixed bag of services. Which services are right for which delivery model? Which ones make most sense for the business? So, the built-to-order will become less popular, as cloud becomes more prevalent, we believe, but they will certainly co-exist for quite a while.

Lawson: Nobody can afford to rip and replace these days, and we don't think that's really necessary. What's necessary is a shift in how you think about things. Think about all the pools of equipment you have. You've got network stuff, server, storage, people, and processes. They tend to be fairly siloed and pretty complex, because you're supporting so many services and so many apps.

In this day and age, you have to get very direct with what technology-enabled services you provide and why, and what's the most efficient means of doing so. One of the great things about the cloud is that it has allowed the whole universe of service providers to expand and specialize.

Companies that are seizing this opportunity and saying, "We're going to take advantage of technology and use it in a proactive way to help build our organization," are doing so in a very aggressive way right now, because they have more choices and can afford to pick the right service to get a certain outcome out of it.

What you want to achieve

A lot of it depends on what you want to achieve. If what you're going for is to create an environment where every service IT delivers can be easily consumed by people in the lines of business through a service catalog, there are two ways to approach it. One is from the bottom-up, from your infrastructure, your network, your compute, your storage. You need to set yourself up so your services can be sharable.

That means that instead of having dedicated infrastructure components for each application or service, you pool and converge those elements, so that anytime you want to instantiate a service, you can make it easily provisioned and you can make it sharable. That's the bottom-up approach, which is valid and required.

The top-down approach is to say, "How can we make our services consumable?" That means there's a consumer who's a business person, maybe a salesperson, people in accounting, or what have you. They're your consumers.

They want to be able to come into a menu or a portal and order something, just as they'd order something at Starbucks, where they say, "I want this. Show me what my service levels are. Show me what the options are and what the costs are." Press the button, and it automatically goes out, gets the approval, does the provisioning, and you're ready to go.
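Lawson's Starbucks analogy describes, in effect, a request-fulfillment workflow: a consumer picks an item from a catalog, the system checks cost, service level, and any required approval, and provisioning is then triggered automatically. As a purely illustrative sketch in Python -- none of these names or numbers come from HP's products; they are assumptions for illustration only -- the flow might look like this:

```python
from dataclasses import dataclass

@dataclass
class CatalogItem:
    name: str
    cost_per_month: float      # shown to the consumer up front
    service_level: str         # e.g. "gold", "silver"
    needs_approval: bool       # some items require a sign-off first

# A hypothetical catalog; a real one would be far richer.
CATALOG = {
    "dev-test-vm": CatalogItem("dev-test-vm", 120.0, "silver", False),
    "prod-database": CatalogItem("prod-database", 950.0, "gold", True),
}

def request_service(item_name: str, requester: str, approver=None) -> str:
    """Order an item from the catalog: check approval, then 'provision' it."""
    item = CATALOG[item_name]
    if item.needs_approval and approver is None:
        return f"{item.name}: waiting for approval for {requester}"
    # In a real system this step would hand off to orchestration/automation tooling.
    return (f"{item.name} provisioned for {requester} "
            f"(SLA: {item.service_level}, ${item.cost_per_month}/month)")

print(request_service("dev-test-vm", "sales-analyst"))
print(request_service("prod-database", "finance-app-owner"))
print(request_service("prod-database", "finance-app-owner", approver="it-manager"))
```

The point of the sketch is the shape of the interaction -- menu, price, service level, approval, automated provisioning -- not any particular implementation.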

You want to be able to do that from the top-down. That's not just the automation of it, but also the cultural shift. IT and people in the lines of business have to come together, sit at a table, and say, "What will be rendered in our service catalog? What are the things that you need to accomplish? Based on that, we're going to offer these services in our catalog."

The catalog becomes that linchpin. It's almost a conversation device. It forces IT and the lines of business to align themselves around a series of services and that becomes it. That's how IT establishes itself as a service provider. What I call the litmus test is having a service catalog that defines what people can use and, by inference, what they can't.

A lot of companies -- and our own company, HP, is an example -- have certain policies about what can and can't be used, based on security, corporate policies, or what have you. An implication of moving in this direction is having the right control and governance around the technology services that get used and by whom they get used. Security around certain data access, identity control, and things like that, all come into play with this.

Meyer: Building a private cloud becomes another way you look at providing the best quality services to the business at the lowest cost.

So, if you look at all the things that you're mandated to provide to the business, you now have another option that says, "Is this a better way for me to be providing these services to the business? Do I drive out risk? Do I drive out cost? Do I drive up agility?" The more choices you have on the back end, if you take that longer-term approach and look at private cloud in that context, it really does help you make smarter decisions and set up a more agile business.

Lawson: The real key there is to think not so much about whether it's going to cost us or save us money, but rather: wouldn't it be great if you knew that for every service you could say how much money that service helped you make, how much revenue came in the door, or how much money that service helped you save?

Unrealistic metric

In a perfect state, you would know that for every service. Of course, that's unrealistic, but for a vast majority of the services that one offers, there should be a very distinctive value metric set up against that. Usually, that value metric out in the commercial world is that you've paid money for it.

Will you save money by establishing a private cloud? Well, yeah, you should. That should be pretty obvious. There should be some savings, if you're doing it right. If you've gone through a pretty structured process of consolidating, virtualizing, standardizing, and automating, it certainly will.

But an even better bang for the buck is being able to say, "With my portfolio of services, which happen to execute in a shared infrastructure environment, not only might it be really efficient, but I also know what the business result of it is."

Meyer: Imagine if all the physical components -- the servers and network connections, the storage capacity, even the powering of the data center -- were virtualized in a way that they can be treated as a pool of resources that you could carve up on demand and assign to different applications. You could automate it in a way to connect all the moving pieces to make the best use of the capacity you have and do that in a standardized way on top of fewer standardized parts.

That's what we mean by convergence in terms of infrastructure. Going back to the point we talked about before, rather than creating dedicated built-to-order infrastructure for every technology-enabled service, infrastructure is made available from adaptive pools that can be shared by any application, optimized, and managed as a service.
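Meyer's description of convergence amounts to treating compute, storage, and network capacity as shared pools that services draw from on demand, rather than as dedicated stacks built per application. A minimal, hypothetical sketch of that idea (not any vendor's actual allocator; the numbers are made up) might look like:

```python
class ResourcePool:
    """A shared pool of capacity that services draw from on demand."""
    def __init__(self, cpu_cores: int, storage_tb: float):
        self.free_cpu = cpu_cores
        self.free_storage = storage_tb
        self.allocations = {}

    def provision(self, service: str, cpu: int, storage: float) -> bool:
        """Carve capacity out of the pool for a service, if enough is free."""
        if cpu <= self.free_cpu and storage <= self.free_storage:
            self.free_cpu -= cpu
            self.free_storage -= storage
            self.allocations[service] = (cpu, storage)
            return True
        return False

    def release(self, service: str) -> None:
        """Return a service's capacity to the pool so others can reuse it."""
        cpu, storage = self.allocations.pop(service)
        self.free_cpu += cpu
        self.free_storage += storage

pool = ResourcePool(cpu_cores=256, storage_tb=100.0)
pool.provision("order-entry-app", cpu=32, storage=10.0)
pool.provision("test-bed", cpu=16, storage=5.0)
pool.release("test-bed")                 # capacity flows back into the shared pool
print(pool.free_cpu, pool.free_storage)  # 224 90.0
```

The contrast with built-to-order infrastructure is that nothing here is permanently dedicated; when a service goes away, its capacity is immediately available to the next request.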

To get to that point, we mentioned the virtualization part, not just server virtualization but virtualizing the connections between compute, storage, and network and making sure that they can be connected, reconnected, or disconnected on demand, as the services demand. They have to be resilient. You have to build resiliency into that converged infrastructure, from disaster recovery to things like nonstop fault tolerance.

Lawson: It's a great period of opportunity for companies to really harness the various elements and the various possibilities around technology-enabled services and then put them to work. We help companies do this in any number of ways. From the process and organizational point of view, we've got a lot of ITIL expertise, COBIT, and all kinds of governance and service management expertise within HP.

We help train organizations and we, of course, have a very large services organization, where we outsource these capabilities to enterprises across the globe. We also have a real robust software portfolio that helps companies automate practically every element of the IT function and systems management, literally from the business value of a service all the way down to the bare-metal.

So, we're able to help companies instrument everything, starting with where the money is coming from, and make sure that everything down the line -- the servers, the storage, the networks, and the information -- are all part of the equation. Of course, we offer companies different ways of consuming all of this.

We have products and services that we sell to our customers. We have ways of helping them get these capabilities through our managed services, through the organization previously known as EDS, which is now called Enterprise Services, as well as licensed products, software-as-a-service (SaaS) products, infrastructure as a service (IaaS), all kinds of stuff.

It really depends on each individual customer. We look at their situation and say, "Where are you today, where do you want to get to, and how can we optimize that experience and help you grow into a more efficient, responsive IT organization?"

Wednesday, April 7, 2010

Well-planned data center transformation effort delivers IT efficiency paybacks, green IT boost for Valero Energy

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

There's a huge drive now for improved enterprise data center performance. Nearly all enterprises are involved nowadays with some level of data-center transformation, either in the planning stages or in outright build-out.

We're seeing many instances where numerous data centers are being consolidated into a powerful core few, as well as completely new, so-called green-field, data centers with modern design and facilities coming online. The heightened activity runs the gamut from retrofitting and designing new data centers to the building and occupying of them.

The latest definition of data center is focused on being what's called fit-for-purpose, of using best practices and assessments of existing assets and correctly projecting future requirements to get that data center just right -- productive, flexible, efficient and well-understood and managed.

Yet these are, by no means, trivial projects. They often involve a tremendous amount of planning and affect IT, facilities, and energy planners. The payoffs are potentially huge, as we'll see, from doing data center design properly -- but the risks are also quite high, if things don't come out as planned.

This podcast examines the lifecycle of data-center design and fulfillment by exploring a successful project at Valero Energy Corp. We're here with two executives from HP and an IT leader at Valero Energy to look at proper planning, data center design and project management.

Please join me in welcoming Cliff Moore, America’s PMO Lead for Critical Facilities Consulting at HP; John Bennett, Worldwide Director of Data Center Transformation Solutions at HP, and John Vann, Vice President of Technical Infrastructure and Operations at Valero Energy Corp. The discussion is moderated by BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Bennett: If you had spoken four years ago and dared to suggest that energy, power, cooling, facilities, and buildings were going to be a dominant topic with CIOs, you would have been laughed at. Yet, that's definitely the case today, and it goes back to the point about IT being modern and efficient.

Data-center transformation, as we've spoken about before, really is about not only significantly reducing cost to an organization -- not only helping them shift their spending away from management and maintenance and into business projects and priorities -- but also helping them address the rising cost of energy, the rising consumption of energy and the mandate to be green or sustainable.

Data-center transformation tries to take a step back, assess the data center strategy and the infrastructure strategy that's appropriate for a business, and then figure how to get from here to there. How do you go from where you are today to where you need to be?

You have organizations that discover that the data centers they have aren't capable of meeting their future needs. ... All of a sudden, you discover that you're bursting at the seams. ... [You] have to support business growth by addressing infrastructure strategies, but probably also by addressing facilities. That's where facilities really come into the equation and have become a top-of-mind issue for CIOs and IT executives around the world.

You'll need a strong business case, because you're going to have to justify it financially. You're going to have to justify it as an opportunity cost. You're going to have to justify it in terms of the returns on investment (ROI) expected by the business, if they make choices about how to manage and source funds as well.

Growth modeling

One of the things that's different today than even just 10 years ago is that the power and networking infrastructure available around the world is so phenomenal, there is no need to locate data centers close to corporate headquarters.

You may choose to do it, but you now have the option to locate data centers in places like Iceland, because you might be attracted to the natural cooling of that environment. It's a good time [for data center transformation] from the viewpoint of land being cheap, and it might also be a good time in terms of business capital.

Moore: The majority of the existing data centers out there today were built 10 to 15 years ago, when power requirements and densities were a lot lower.

People are simply running out of power in their data centers. The facilities that were built 5, 10, or 15 years ago just do not support the levels of density in power and cooling that clients are asking for going into the future, specifically for blades and higher levels of virtualization.

Some data centers we see out there use the equivalent of half of a nuclear power plant to run. It's very expensive.

It's also estimated that, at today's energy cost, the cost of running a server from an energy perspective is going to exceed the cost of actually buying the server. We're also finding that many customers have done no growth modeling whatsoever regarding their space, power, and cooling requirements for the next 5, 10, or 15 years -- and that's critical.

When a customer is looking to spend $20 million, $50 million, or sometimes well over $100 million on a new facility, you've got to make sure that it fits within the strategic plan for the business. That's exactly what boards of directors are looking for, before they will commit to spending that kind of money.

We've got to find out first off what they need -- what their space, power, and cooling requirements are. Then, based on the criticality of their systems and applications, we quickly determine what level of availability is required as well.

This determines the Uptime Institute Tier Level for the facility. Then, we go about helping the client strategize on exactly what kinds of facilities will meet those needs, while also meeting the needs of the business that come down from the board. ... We help them collaboratively develop that strategy for the data center's future over the next 10 to 15 years.

One of the things we do, as part of the strategic plan, is help the client determine the best locations for their data centers based on the efficiency in gathering free cooling, for instance, from the environment.

One of the things that Valero is accomplishing is lower energy costs, as a result of building their own data center with a strategic view.

Vann: Valero is a Fortune 500 company in San Antonio, Texas, and we're the largest independent refiner in North America. We produce fuel and other products from 15 refineries, and we have 10 ethanol plants.

We market products in 44 states with a large distribution network. We're also into alternative fuels and renewables, and we're one of the largest ethanol producers. We have a wind farm up in northern Texas, around Amarillo, that generates enough power to fuel our McKee refinery.

So what drove us to build? We started looking at building in 2005. Valero grew through acquisitions. Our data center, as Cliff and John have mentioned, was no different than others. We began to run into power, space, and cooling issues.

Even though we were doing a lot of virtualization, we still couldn't keep up with the growth. We looked at remodeling and also expanding, but the disruption and risk to the business was just too great. So, we decided it was best to begin to look for another location.

Our existing data center is on the headquarters campus, which is not the best place for a data center, because it's inside one of our office complexes. Therefore, we have water and other potentially disruptive issues close to the data center -- and that was concerning, considering where the data center is located.

[The existing facility] is about seven years old and had been remodeled once. You have to realize Valero was in a growth mode and acquiring refineries. We now have 15 refineries. We were consolidating quite a bit of equipment and applications back into San Antonio, and we just outgrew it.

We were having a hard time keeping it redundant and keeping it cool. It was built with one foot of raised floor and, with all the mechanical equipment inside the data center, we lost square footage.

We began to look for alternative places. We also were really fortunate in the timing of our data center review. HP was just beginning their build of the six big facilities that they ended up building or remodeling, so we were able to get good HP internal expertise to help us as we began our decisions about designing and building our data center.

The problem with colocation back in those days of 2006, 2007, and 2008, was that there was a premium for space.



So, we really were fortunate to have experts give us some advice and counsel. We did look at colocation. We also looked at other buildings, and we even looked at building another data center on our campus.

As we did our economics, it was just better for us to be able to build our own facility. We were able to find land northwest of San Antonio, where several data centers have been built. We began our own process of design and build for 20,000 square feet of raised floor and began our consolidation process.

Power and cooling are just becoming an enormous problem, and most of this is because virtualization, blades, and other technologies that you put in a data center just run a little hotter and take up extra power. It's pretty complex to be able to balance your data center with cooling and power, and also UPS, generators, and things like that. It just becomes really complex. So, building a new data center really put us at the forefront.

We had a joint team of HP and the Valero Program Management Office, and the way that was managed went really well. We had design teams. We had people from networking architecture, networking strategy, and server and storage, from both HP and Valero, and that went really well. Our construction went well. Fortunately, we didn't have any bad weather or anything to slow us down; we were right on time and on budget.

Probably the most complex was the migration, and we had special migration plans. We got help from the migration team at HP. That was successful, but it took a lot of extra work.

In hindsight, we'd probably put more project managers on managing the project, rather than using technical people to manage it. Technical folks are really good at putting the technology in place, but they really struggle at putting good, solid plans in place. But overall, I'd just say that the migration was probably the most complex part.

Bennett: Modernizing your infrastructure brings energy benefits in its own right, and it enhances the benefits of your virtualization and consolidation activities.

We certainly recommend that people take a look at doing these things. If you do some of these things, while you're doing the data center design and build, it can actually make your migration experience easier. You can host your new systems in the new data center and be moving software and processes, as opposed to having to stage and move servers and storage. It's a great opportunity.

It's a great chance to start off with a clean networking architecture, which helps with both continuity and availability of services, as well as with cost.



It can be a big step forward in terms of standardizing your IT environment, which is recommended by many industry analysts now in terms of preparing for automation or to reduce management and maintenance cost. You can go further and bring in application modernization and rationalization to take a hard look at your apps portfolio. So, you can really get these combined benefits and advantages that come from doing this.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.


Governance grows more integral to managing cloud computing security risks, says IT practitioner survey

Most enterprises lack three essential ingredients to ensure that sensitive information stored with cloud computing hosts remains secure: procedures, policies and tools. So says a joint survey called “Information Governance in the Cloud: A Study of IT Practitioners” from Symantec Corp. and Ponemon Institute.

“Cloud computing holds a great deal of promise as a tool for providing many essential business services, but our study reveals a disturbing lack of concern for the security of sensitive corporate and personal information as companies rush to join in on the trend,” said Dr. Larry Ponemon, chairman and founder of the Ponemon Institute.

Where is cloud security training?

Despite the ongoing clamor about cloud security and the anticipated growth of cloud computing, a meager 27 percent of those surveyed said their organizations have developed procedures for approving cloud applications that use sensitive or confidential information. Other surprising statistics from the study include:
  • Only 20% of information security teams are regularly involved in the decision-making process

  • 25% of information security teams aren’t involved at all

  • Only 30% evaluate cloud computing vendors before deploying their products

  • Only 23% require proof of security compliance

  • A full 75% believe cloud computing migration occurs in a less-than-ideal manner

  • Only 19% provide data security training that discusses cloud applications
Focusing on information governance

IT vendors and suppliers, including the survey sponsor, Symantec, are lining up to help fill the evident gaps in enterprise cloud security tools, standards, best practices and cultural adaptation. Symantec is making several recommendations for beefing up cloud security, beginning with ensuring that policies and procedures clearly state the importance of protecting sensitive information stored in the cloud.

“There needs to be a healthy, open governance discussion around data and what should be placed into the cloud,” says Justin Somaini, Chief Information Security Officer at Symantec. “Data classification standards can help with a discussion that’s wrapped around compliance as well as security impacts. Beyond that, it’s how to facilitate business in the cloud securely. This cuts across all business units.”

Symantec also recommends organizations adopt an information governance approach that includes tools and procedures for classifying information and understanding risk so that policies can be put in place that specify which cloud-based services and applications are appropriate and which are not.
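Symantec's recommendation boils down to mapping a data classification to an explicit decision about where that data may go before a cloud service is approved. As a hedged illustration -- the classification labels and rules below are invented for this post, not Symantec's -- such a policy check can be very simple:

```python
# Hypothetical classification levels and the cloud destinations allowed for each.
POLICY = {
    "public":       {"any-saas", "public-iaas", "private-cloud"},
    "internal":     {"approved-saas", "private-cloud"},
    "confidential": {"private-cloud"},
    "regulated":    set(),   # stays on premises until a compliant provider is certified
}

def cloud_placement_allowed(classification: str, destination: str) -> bool:
    """Return True if data of this classification may be placed at this destination."""
    return destination in POLICY.get(classification, set())

print(cloud_placement_allowed("internal", "approved-saas"))    # True
print(cloud_placement_allowed("confidential", "public-iaas"))  # False
```

The hard part, as the survey suggests, is not the lookup but getting the classifications and the allowed destinations agreed on and enforced in the first place.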

“There’s a lot of push for quick availability of services. You don’t want to go through legacy environments that could take nine months or a year to get an application up and running,” Somaini says. “You want to get it up and running in a month or two to meet the needs and demands of consumers. Working the cloud into IT is very important from a value-add perspective, but it’s also important to make sure we keep an eye on compliance and security issues as well.”

Evaluating and training issues

Beyond governance, there are also cloud security issues around third parties and employee training that Symantec recommends incorporating into the discussion. Specifically, Symantec promotes evaluating the security posture of third parties before sharing confidential or sensitive information.

Prior to deploying cloud technology, companies should formally train employees on how to mitigate the security risks specific to the new technology, to make sure sensitive and confidential information is protected, said Symantec.

The big question is: Are we getting closer to being able to offer cloud solutions with which enterprises can feel comfortable? Somaini says we’re getting close.

“It's really 'buyer-beware' from a customer perspective. Not all cloud providers are the same. Some work from the beginning in a conscious and deliberate effort to make sure their services are secure. They can provide that confidence in the form of certifications,” Somaini says. “Cloud service providers are going to have to comply and drive security into their solutions and offer that evidence. We’re getting there but we've got some ways to go.”
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Monday, April 5, 2010

Case study shows how HP Data Protector Notebook Extension provides constant backup for mobile workforces

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download the transcript. Sponsor: HP.

Gain more information on HP Data Protector Notebook Extension. Follow on Twitter.
Access a Webcast with IDC's Laura DuBois on Avoiding Risk and Improving Productivity on PCs and Laptops.

Data protection has grown significantly more complex in recent years as workers have gravitated to notebook computers and the mobility they enable. The latest BriefingsDirect podcast discussion looks at protecting PC-based data in an increasingly mobile world.

We'll look at a use case -- at Roswell Park Cancer Institute in Buffalo, NY -- for HP Data Protector Notebook Extension (DPNE) software and examine how backup and recovery software has evolved to become more transparent, reliable, and fundamentally user-driven.

Using that continuous back-up principle, the latest notebook and PC backup software captures every saved version of a file, efficiently transfers it all in batches to a central storage location, and then makes it easily and safely accessible for recovery by the user from anywhere -- inside or outside of the corporate firewall.

We'll look at how DPNE slashes IT recovery chores, allows for managed policies and governance to reduce data risks systemically, while also downsizing backups, the use of bandwidth, and storage.

The economics are compelling. The cost of data loss can be more than $400,000 annually for an average-sized business with 5,000 users. Getting a handle on recovery costs, therefore, helps reduce the total cost of operating and supporting mobile PCs, both in terms of operations and in the cost of lost or poorly recovered assets.

To help us better understand the state of the art in remote and mobile PC data protection, we're joined by an HP executive, Shari Cravens, Product Marketing Manager for HP Data Protection, and a user of DPNE, John Ferguson, Network Systems Specialist at Roswell Park Cancer Institute in Buffalo, NY. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Cravens: We started hearing from our customers a couple of years ago that PC backup was becoming increasingly important in their lives. Part of that's because the workforce is increasingly mobile and flexibility for the workforce is at an all-time high. In fact, we found that 25 percent of staff in some industries operates remotely and that number is growing pretty rapidly.

In fact, in 2008, shipments of laptops overtook desktops for the very first time. What that really means for the end user or for IT staff is that vast amounts of data now live outside the corporate network. We found that the average PC holds about 55,000 files. Of those 55,000, about 4,000 are unique to that user on that PC. And, those files are largely unprotected.

The economics of PC backup are really changing. We're finding that the average data loss incident costs about $2,900, and that's for both IT staff time and lost end user productivity. Take that $2,900 figure and extrapolate that for an average company of about 5,000 PCs. Then, look at hard drive failures alone. There will be about 150 incidents of hard drive failure for that company every year.

If you look at the cost to IT staff to recover that data and the loss in employee productivity, the annual cost to that organization will be over $440,000 a year. If that data can't be recovered, then the user has to reconstruct it, and that means additional productivity loss for that employee. We also have legal compliance issues to consider now. So if that data is lost, that's an increased risk to the organization.
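Cravens' figures imply a simple back-of-the-envelope calculation: roughly 150 hard-drive failures a year across 5,000 PCs, at about $2,900 per incident, lands in the neighborhood of the annual total she cites (the quoted $440,000 presumably also reflects rounding and incidents beyond drive failures). A quick sanity check:

```python
pcs = 5000
incidents_per_year = 150     # hard-drive failures alone, per the discussion
cost_per_incident = 2900     # IT staff time plus lost end-user productivity, in USD

annual_cost = incidents_per_year * cost_per_incident
print(f"Hard-drive failures alone: ${annual_cost:,} per year")  # $435,000
print(f"Per PC: ${annual_cost / pcs:.0f} per year")             # $87
```

Even before counting lost, stolen, or corrupted laptops, the per-PC cost adds up quickly.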

We all have very sensitive files on our laptops, whether it's competitive information or your personal annual review. One suggestion in the past was, "Well, we'll just save it to the corporate network." The challenge with that is that people are really concerned about saving these very sensitive files to the corporate network.

What we really need is a solution that's going to encrypt those files, both in transit and at rest, so that people can feel secure that their data is protected.

Historical evolution

The concept behind HP Data Protector Notebook Extension is that we're trying to minimize the risk of that PC data loss, but we're also trying to minimize the burden to IT staff. The solution is to extend some of the robust backup policies from the enterprise to the client environment.

DPNE does three things. One, it's always protecting data, and it's transparent to the user. It's happening continuously, not on a fixed schedule, so there is no backup window that's popping up.

We’re protecting data no matter where the user is -- the home, the coffee shop, the airport. Whether they are online or offline, their data is being protected, and it's happening immediately. The instant that files are created or changed, data is being protected.

Continuous file protection is number one. Backup policies are centralized and automated by the IT staff. That means that data is always protected, and the IT staff can configure those policies to support their organization's particular data protection goals.

Number two, no matter where they are, users can easily recover their own data. This is a really important point. Getting back to the concept of minimizing the burden to IT staff, DPNE has a simple, single-click menu. Users can recover multiple versions of a file without ever involving IT. They don't ever have to pick up the phone and call the Help Desk. That helps keep IT costs low.

Then, also by optimizing performance, we're eliminating that desire to opt out of your scheduled backup. The process is transparent to the user. It doesn’t impact their day, because DPNE saves and transmits only the changed data. So, the impact to performance is really minimized.

DPNE has a local repository on each client and we established that to store active files. Whether you're connected to the network or not, data is captured and backed up locally to this local repository. This is important for accidental deletions or changes or even managing multiple versions of a file. You're able to go to the menu, click, and restore a file from a previous version at any point in time, without ever having to call IT.

Each client is then assigned to a network repository or data vault inside the network. That holds the backup files that are transferred from the client, and that data vault uses essentially any Windows file share.

The third element is a policy server that allows IT staff to administer the overall system management from just a single web interface, and the centralized administration allows them to do file protection policies and set encryption policies, data vault policies, to their particular specifications.
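Putting those three elements together -- continuous local capture, a network data vault, and centrally administered policy -- the client-side behavior can be sketched roughly as a process that versions changed files into a local repository and ships batches to the vault whenever it is reachable. This is purely a hypothetical illustration of the concept, not DPNE's actual implementation:

```python
import hashlib
import shutil
from pathlib import Path

LOCAL_REPO = Path("local_repo")   # on-disk cache of versions; works offline
DATA_VAULT = Path("data_vault")   # stands in for a network file share

def capture_version(source: Path) -> Path:
    """Store a copy of the file in the local repository, keyed by content hash."""
    LOCAL_REPO.mkdir(exist_ok=True)
    digest = hashlib.sha1(source.read_bytes()).hexdigest()[:10]
    version = LOCAL_REPO / f"{source.name}.{digest}"
    if not version.exists():          # only changed content is kept
        shutil.copy2(source, version)
    return version

def sync_to_vault() -> int:
    """Batch-transfer any locally captured versions that the vault lacks."""
    DATA_VAULT.mkdir(exist_ok=True)
    sent = 0
    for version in LOCAL_REPO.iterdir():
        target = DATA_VAULT / version.name
        if not target.exists():
            shutil.copy2(version, target)
            sent += 1
    return sent

# Example: a document is saved twice, both versions are captured locally,
# then a batch sync pushes them to the (simulated) network vault.
doc = Path("quarterly_report.txt")
doc.write_text("draft 1")
capture_version(doc)
doc.write_text("draft 2")
capture_version(doc)
print(f"versions sent to vault: {sync_to_vault()}")
```

Because versions land in the local repository first, a user can restore an earlier draft even while offline, which is the behavior Cravens describes for accidental deletions and changes.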

Finding the cure

Ferguson: Roswell Park Cancer Institute is the oldest cancer research center in the United States. We're focused on understanding, preventing, and eventually finding the cure for cancer. We're located in downtown Buffalo, NY. We have research, scientific, and educational facilities, and we also have a 125-bed hospital here.

Our researchers and scientists are frequently published in major, globally reported studies on various types of cancer and related research. A number of breakthroughs in cancer prevention and treatment have been developed here. For example, the PSA test, which is used for detecting prostate cancer, was invented here.

The real challenge is that data is moving around. When you're dealing with researchers and scientists, they work on different schedules than the rest of us. When they're working, they're focused, and that might be here, off campus, at home, whatever.

They've got their notebook PCs, their data is with them, and they're running around doing their work and finding their answers. With that data moving around and not always being on the network, the potential for loss of data that could be the cure for cancer is something we take very seriously and consider very important to deal with.

One of the big things was transparency to the user and being simple to use if they do need to use it. We were already in the process of making a decision to replace our existing overall backup solution with HP's Data Protector. So, it was just a natural thing to look at DPNE and it really fits the need terrifically.

There's total transparency to the user. Users don't even have to do anything. They're just going along, doing their work, and everything is going on in the background. And, if they need to use it, it's very intuitive and simple to use.

When people are working on something, they don't think to “save it” until they're actually done with it. DPNE provides us that version saving. You can get old versions of documents. You can keep track of them. That's the type of thing that's not usually done, but it's really important, and they don't want to lose that work.

In terms of the overall Data Protector implementation, we're probably about 40 percent complete. The DPNE implementation will immediately follow that.

A good test run

We anticipate initially just getting our IT staff using the application and giving it a good test run. Then we'll focus on key individuals throughout the organization, researchers, the scientists, the CEO, CIO, the people with all the nice initials after their name, and get them taken care of. We'll get a full roll-out after that.

When it comes to federal regulations, it always is a rising tide, but we've got a good solution that we are now implementing and I think it puts us ahead of the curve.

Cravens: Information is continuing to explode and that's not going to stop. In addition to that, the workforce is only going to get more mobile. This problem definitely isn’t going to go away, and we need solutions that can address the flexibility and mobility of the workforce and be able to manage, as John mentioned, the increase in regulations.

HP Data Protector is very simple to implement. It snaps into your existing infrastructure. You don’t need any specialized hardware. All you need is a Windows machine for the policy server and some disk space for the data vault. You can download a 60-day trial version from hp.com. It's a full-featured version, and you can work with that.

If you have a highly complex multi-site organization, then you might want to employ the services of HP’s Backup and Recovery Fast Track Services for Data Protector. They can help get a more complex solution up and running quickly and reduce the impact on your IT staff just that much sooner.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download the transcript. Sponsor: HP.

Gain more information on HP Data Protector Notebook Extension. Follow on Twitter.
Access a Webcast with IDC's Laura DuBois on Avoiding Risk and Improving Productivity on PCs and Laptops.



BriefingsDirect analysts pick winners and losers from cloud computing's economic disruption and impact

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Charter Sponsor: Active Endpoints.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

The latest BriefingsDirect Analyst Insights Edition, Vol. 51, focuses on cloud computing and dollars and cents. Our panel dives into more than the technology, security, and viability issues that have dominated a lot of cloud discussions lately -- and moves to the economics and the impact on buyers and sellers of cloud services.

When you ask any one person how cloud will affect their costs, you're bound to get a different answer each time. No one really knows, but the agreement comes when the questions move to, "Will cloud models impact how buyers and providers price their technology? And over the long-term what will buyers come to expect in terms of IT value?"

What comes when we move to a cloud-based, pay-per-value approach to pricing, buying, and budgeting for IT? How does the shift to high-volume, low-margin services and/or subscription models affect the IT vendor landscape? How does it affect the pure cloud and software-as-a-service (SaaS) providers, and perhaps most importantly, how do cloud models affect the buy side?

This periodic discussion and dissection of IT infrastructure related news and events, with a panel of industry analysts and guests, comes to you with the help of charter sponsor, Active Endpoints, maker of the ActiveVOS business process management system.

Join the panel of Dave Linthicum, CTO of Bick Group, a cloud computing and data-center consulting firm; Michael Krigsman, CEO of Asuret, a blogger on IT failures at ZDNet, and a writer of analyst reports for IDC; and Sandy Rogers, an independent industry analyst.

Here are some excerpts:
Linthicum: We've had a tendency to focus on reducing cost over the last few years, with the recession and all, and ultimately cloud computing and SOA are about bringing strategic value back into the business in the form of IT.

I was listening to your podcast with Salesforce.com's Peter Coffee, talking about service oriented architecture (SOA) and cloud computing, and he said something that was very profound.

The fact of the matter is that, if you're looking for cheap IT, we can give you cheap IT. However, you're not going to be able to keep up with the competitive value that IT needs to bring to your enterprise. To get that competitive value, you're going to have to spend additional money.

The ability to align your IT resources to the needs of the business quickly, get into markets fast, delight customers, sell more, and create supply chain integration systems that provide frictionless commerce is really where the value is in this.

The myth is that cloud computing is always going to be less expensive. I think cloud computing typically is going to be a better, more strategic, more agile architecture, but it's also typically going to be more expensive, at least at the outset.

We're probably going to have to spend more money initially. That's really what the takeaway is from the initial cloud-computing projects that I am involved in. At the end of the day, it's about strategic use of technology. Ultimately, cost reduction should be part of the result, but in getting there, we're going to have to spend additional dollars.

Rogers: A lot of the enterprises are going to learn from those organizations that have to act at web scale and understand which are the right use-cases to put out there and how to leverage it. ... A lot of the innovation that we see happening on the cloud is really other providers that are starting to build their businesses on the cloud.

They're learning that there is a web-scale business to be obtained out there, and that's really where we're seeing the biggest innovation. What is also really interesting is that it's more than just technology. It's really a transition to engaging with services and service providers. Those who are attempting to move out onto the cloud are learning that that is a big piece of the puzzle. Many technology providers have to grow into the role of a service provider.

Krigsman: I ask the question ... Is cheap IT really the goal [of cloud computing]? To me, the real question, the longer-term strategic question, is "How does this new IT infrastructure map onto our business processes and our business requirements looking long-term?" There are some mismatches and mismatched expectations.

When you have one group that is expecting certain types of outcomes and results and you have another group that is capable of delivering results that don’t match the first, namely between buyers and sellers [of cloud services], then the end result is predictable failure or disappointment somewhere down the line.

Linthicum: Cloud computing does require lots of changes. You're going to have to redo your infrastructure, as I write in my book, to leverage newer architectural patterns, such as SOA, and that's typically very expensive. Getting out and accessing the services that are available to you on demand, out of the cloud, is an expense unto itself.

You're going to have to retrain and re-skill your people within your data center, all the way up into your executive ranks, on what cloud is able to do and how to manage, govern, and secure cloud. You're going to have to pay for the cloud computing providers, which in many instances are going to be less expensive than on-premise systems, but in many other instances are going to be much more costly than on-premise systems.

Companies that think tactically, in quarter-to-quarter expenses, and consider IT kind of an expense that they'd rather not have to spend money on are going to fall by the wayside within cloud computing. They're just not going to get it.

It's very much like the Internet was in the mid-'90s. Suddenly, it's a big huge deal, and companies that got on board four or five years ago are leading the market, where companies that suddenly were trying to play catch-up football in 1999, 2000, found that the market left them behind. Many of those companies just went out of business, because they didn’t see the wave coming. Cloud computing is going to be very much like that.

Improvement model

I'm bullish on cloud computing being a catalyst for architectural change and typically for the better. So cloud is not great at security and governance as of yet, but in many instances it's much better than the current security and governance in lots of these existing enterprises, which is poorly defined or nonexistent.

Ultimately, as people revamp their architectures to leverage cloud, moving into SOA, looking at cloud as an architectural option for bits and pieces of their data and parts of their processes, they go through an improvement model.

They go through some architectural changes, create new governance models, and create new security models. They leverage identity management versus simple encryption. They learn to be more secure. If they didn't have a chief security officer, they may now have one, if they're moving into cloud.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

The target systems that are using cloud computing, the target architectures that are leveraging cloud computing, are almost always more secure than the traditional systems from which they came. That doesn't mean they're completely secure and without issues, especially in the cloud computing side.

But people make logical choices about what pieces of information and what processes to run in the cloud and which ones to run on-premise based on security models, and typically, if they are revamping into a new architecture, they are always going to be more secure and better governed, if the architects know what they're doing.

The pay-as-you-go model of cloud computing, even though it can be more expensive in many instances when you really amortize the cost over many years, is something that's attractive to United States IT, at least. It's not always attractive to foreign corporations, but it definitely is in the United States.

We like the pay-as-you-go cable bill kind of thing that we get, and also the ability to turn the stuff off or move away from it, if we need to, without having a big footprint already in the data center and things we need to deinstall and millions of dollars of hardware that we have to sell on Craigslist if the thing doesn’t work out.
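Linthicum's amortization point can be made concrete with an invented comparison -- the numbers below are hypothetical, not from the discussion. A pay-as-you-go service can cost more over a multi-year horizon than owned hardware whose purchase price is amortized, yet still be attractive because there is no upfront footprint and you can walk away:

```python
years = 5

# Hypothetical on-premise option: buy hardware up front, add yearly operating cost.
hardware_purchase = 300_000       # one-time capital expense, USD
onprem_ops_per_year = 40_000      # power, space, administration
onprem_total = hardware_purchase + onprem_ops_per_year * years

# Hypothetical cloud option: no upfront spend, pay monthly for what you use.
cloud_per_month = 9_000
cloud_total = cloud_per_month * 12 * years

print(f"On-premise over {years} years:    ${onprem_total:,}")  # $500,000
print(f"Pay-as-you-go over {years} years: ${cloud_total:,}")   # $540,000
```

On these made-up numbers the cloud option costs more in total, which is exactly the trap Linthicum warns about when decisions are made on monthly price alone rather than on the strategic value to the architecture and the business.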

The selling point

That becomes a selling point and really is part and parcel of the value of cloud computing. But it also can be the Achilles' heel of cloud computing, because ultimately people are going to make decisions around financial metrics that may not be realistic. If you look at those financial metrics in light of the requirements of the business, in many instances people are buying cloud computing because of the cost model and not necessarily the strategic value it's going to have for the architecture and therefore for the business.

Krigsman: Driving toward cloud changes the architecture and requires proper governance. The lack of governance that exists today across the industry is pretty startling. So, as organizations move in this direction, there is simply no question that the cultural dimension of getting IT to work more effectively with the business side, and so forth, must move along with it.

If it doesn't, then, in the end, the solutions that are built with cloud will still have the same set of problems from a business standpoint that current IT solutions have today. This has nothing to do with technology. This is a matter of collaboration and communication across these various information silos.

Rogers: One thing that we're finding from those cloud service providers that had originally targeted the end business customer, is that they're working with the CIOs and the IT departments more. They're working through those issues of security and having backup contingency plans.

It's just a matter of education; the various parties within the IT ecosystem have to come on board and understand how to leverage this.



One of the biggest points ... is that it's still a mixture of different technologies that have to come together. That's always been one of the biggest, most complex roles that IT needs to serve.

Right now, there are a lot of dependencies on specific technologies internally. A lot of organizations do not want to make those same mistakes with external cloud providers. They're really looking to the IT group as an adviser to guide them and help them in the decisions moving forward.

Krigsman: This is a fundamental point -- the cloud computing winners are going to be those who combine architectural vision and discipline with superior governance and who are also capable of making the adaptive cultural and business transformation changes, such as you were just talking about, things like budgeting, for example. Success in the cloud will require a mixture of all of these things together.

Linthicum: If you are in the IT world today, you need to understand that if you are moving to a new architecture, you have to commit to a certain amount of value that comes back to the business. Typically, it's going to be a five-year horizon in the United States, perhaps a 10-year horizon in the Asia-Pacific. But, that value has to be shown and that has to be returned. If it's not returned, then ultimately it's going to be considered a failure.

Start now

You need to start committing to this stuff right now and putting some skin in the game. A lot of people in these IT organizations are very politically savvy and want to protect their positions, and only a few of them want to put that skin in the game right now.

I think we're going to see a kind of unfairness in business. People who are starting businesses these days and building them around cloud infrastructures are learning to accept that a lot of their IT is going to reside out on the Internet, and to embrace the cost-effective nature of that. They're going to have a huge strategic advantage over legacy businesses, people who've been around for years and years.

There are going to be a lot of traditional companies out there that are going to be looking at these vendors and learning from them.



As they grow, start to go public, and get up to the half-a-billion mark as a business, they're going to find that they have a much higher cost and price advantage over their competitors, and they'll ultimately just eat their lunch.

We're going to see that, not necessarily now, because those guys are typically smaller and just up and coming, but in five years, as they start to grow up, their infrastructure is just going to be much more cost effective and they are just going to run circles around the competition.

... Ultimately, it would be about the ability to leverage technology that's pervasive around the world. What you're going to find is the biggest uptake of any kind of new technological shift is going to be in the United States or the North American marketplaces. We're seeing that in the U.S. right now.

We could find that the advantage cloud computing has brought to corporate U.S. infrastructure is going to be significant over the next four years, compared with the European enterprises and some of the Asia-Pacific enterprises out there that will play catch-up toward the end.
Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Charter Sponsor: Active Endpoints.


Friday, March 26, 2010

Including startups in your SOA infrastructure: A guide for enterprise architects

This guest post comes courtesy of Ronald Schmelzer, senior analyst at Zapthink.

By Ronald Schmelzer

In a previous ZapFlash, ZapThink opined that Open Source Software could play an important role in your Service-Oriented Architecture (SOA) Infrastructure. Certainly, there were no architectural reasons why it couldn’t.

As we explained in that article, the primary biases against OSS (if there are any) are from the people in the organization who have fear, uncertainty, or doubt about the risks or benefits of OSS.

But of course, that article spoke at a fairly general level. Individual implementations or products might be better than others, or more suited for specific problems than others. This is where Enterprise Architects should spend their time focused – on the specific solutions to specific problems, rather than engaging in religious battles about the merits of entire classes of solutions.

Unfortunately, in addition to the biases against OSS, many companies have developed aversions to solutions from startup companies. Yet, in an environment where we are left with just a handful of incumbent companies remaining in the SOA infrastructure landscape, and these vendors have confusing collections of often conflicting and competitive infrastructure products, it might be a good time to revisit utilizing niche, best-of-breed, and often startup solutions in your SOA environment.

However, how do you do so without incurring substantial real or perceived risk? After all, it is the nature of a startup company to change, be acquired, or die. In this environment, EAs need to become wholeheartedly selfish: meet the requirements of the business in an agile manner by reducing the penalty for failure. In such an environment, startup solutions are not only feasible, but very appropriate.

Best of breed in an increasingly suite world

Through a combination of consolidation, maturation, and the pressures of a tough economic environment, the landscape of enterprise IT software players has dwindled to a handful of companies that control the infrastructure for a vast majority of companies.

Just like the auto industry experienced a period of rapid growth and diversity in the early part of the 20th century, only to consolidate down to the “Big Three” in the United States and a similar number in countries around the world, we are now faced with the reality of a “Big Five” set of vendors in the enterprise IT marketplace, especially in the area of SOA infrastructure.

Agility is a key benefit of SOA, which means that properly designed architectures should not only be implementation-neutral, they should be fairly immune to infrastructural change.



However, consolidation is not always a friend of innovation. Many have argued that the consolidation of the auto industry in the US by the late 1970s resulted in products that were unable to compete with offerings from overseas.

Indeed, it’s in the period after the consolidation that the US manufacturers saw their most precipitous decline in worldwide share of automobiles. Why is this? Is it because large companies can’t innovate? Or is it that a large portfolio of products and services is confusing not only to customers but even to internal managers?

When one company owns Pontiac, Buick, Oldsmobile, Chevrolet, and a myriad of other brands, how can anyone really tell when one product is best suited for a problem or another? These brands compete for dollars not only among customers, but among their own budgets. Much hay has been made of Microsoft’s internal competition and struggles that have hindered its own ability to compete. Why should it be any different for the enterprise IT software companies that have grown primarily through acquisition?

Innovation is incredibly important in an area of continued maturation such as SOA. More importantly, agility is a key benefit of SOA, which means that properly designed architectures should not only be implementation-neutral, they should be fairly immune to infrastructural change.

In this light, vendor selection is less a matter of making sure your infrastructure works and more a matter of picking the right vendor for the job while balancing risk and economic factors. Viewed this way, startup and niche companies offer just as much opportunity as large vendors, if not more, to advance your architectural efforts. The only things that differentiate the startups from the large vendors are three core issues: the scope of their offerings, the potential risk of company failure, and the ability to negotiate price to your benefit.

Mitigating the startup risk:
Enterprise software and cloud/SaaS concerns


The biggest risk that many cite in working with startup companies is that the vendor might simply cease to exist. This fear is especially pronounced for companies that must spend a considerable amount of time and money implementing the solutions.

If an enterprise is involved in a multi-year effort to implement a large-scale, highly visible, and important solution for the company, then in many cases startup solutions are ruled out very early in the vendor evaluation process. This holds even if the startup company offers a better, more appropriate, and more innovative solution. The real issue here is whether the risk of company failure, real or perceived, should outweigh the loss of solution appropriateness and innovation. In other words, does it make sense for companies to implement less-optimal solutions based on what they know today because they fear an unknown event in the future?

Rather than dismissing startup solutions out of hand, companies should mitigate the risk of vendor failure by incorporating such contingencies into their enterprise architecture. We would argue that such vendor mitigation plans should be made for well-established vendors as well, since internal political or budgetary battles might result in the disappearance of even decades-old products.

Companies should require an escrow provision similar to what is provided by licensed enterprise software vendors.



There are two major areas of mitigation for enterprise IT vendor products: products that companies install, manage, and own in their own infrastructure (traditional enterprise software products sold by the license), and those solutions that are run and managed on the vendor’s infrastructure (such as Cloud or Software-as-a-Service [SaaS] offerings).

In the case of licensed enterprise software, it has long been a practice of end-user companies to require that the vendor’s software code be held in escrow such that if the vendor goes out of business, it is transferred to the ownership of the end-user customer. While this is a far from optimal solution (after all, the company has no knowledge or ability to do much with the code), it provides some level of comfort to the buyers that the code at the very least won’t disappear.

More complicated is a mitigation plan for Cloud/SaaS offerings. If a SaaS vendor disappears, what happens to the code? If a Cloud vendor goes under, what happens to the infrastructure? More importantly, what happens to your data? It’s not enough to simply require that the vendor hand over the code for their SaaS implementation; in the event of their failure, you have to also implement all the infrastructure that makes the Cloud work or keeps the SaaS solutions running.

This is because the economic benefit of Cloud computing and SaaS solutions is that you’re not paying the full cost of owning and managing the solution. It is easy to mitigate the data component of the Cloud/SaaS default risk – simply make sure that you maintain a “local” copy of all relevant data.
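
As a rough illustration of that data-side safeguard, the Python sketch below pulls records from a hypothetical SaaS bulk-export endpoint and writes timestamped snapshots to local storage. The URL, token handling, and response shape are assumptions for the example, not any particular vendor's API.

```python
# Minimal sketch: keep a "local" copy of SaaS data as timestamped snapshots.
# The export URL, API token, and response shape are hypothetical; substitute
# whatever bulk-export facility your SaaS vendor actually provides.
import json
import pathlib
from datetime import datetime, timezone

import requests  # assumes the 'requests' package is installed

EXPORT_URL = "https://api.example-saas.com/v1/export/customers"  # hypothetical
API_TOKEN = "set-me-from-a-secret-store"
BACKUP_DIR = pathlib.Path("saas_backups")


def snapshot_saas_data() -> pathlib.Path:
    """Fetch all records from the vendor's export endpoint and save them locally."""
    response = requests.get(
        EXPORT_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=60,
    )
    response.raise_for_status()
    records = response.json()  # assumed to be a JSON list of records

    BACKUP_DIR.mkdir(exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = BACKUP_DIR / f"customers-{stamp}.json"
    path.write_text(json.dumps(records, indent=2))
    return path


if __name__ == "__main__":
    print(f"Wrote snapshot to {snapshot_saas_data()}")
```

Run on a schedule, a routine like this keeps the data portion of the exit plan current even if the vendor disappears overnight.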

However, in order to mitigate the loss of application functionality and infrastructure, a company needs to have a backup plan. Enterprise architects need to discover or implement comparable Services run internally or on another Cloud/SaaS service. Or, companies should require an escrow provision similar to what is provided by licensed enterprise software vendors – if the SaaS/Cloud vendor goes belly up, it has to hand over not only the code and data that make the application work, but also configured infrastructure on which to run it. While the hope is that these escrow provisions will never have to be enacted, they provide the security blanket that gives buyers at least a psychological sense of protection.
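
To make that architectural backup plan concrete, here is a minimal sketch assuming a hypothetical "credit check" Service: consumers bind to a neutral interface, so the externally hosted implementation can be swapped for an internally run (or escrow-recovered) one without touching the callers. All class and method names are invented for illustration.

```python
# Minimal sketch of a provider-neutral Service contract with an internal
# fallback. All names here are hypothetical; the point is that consumers
# depend on the abstraction, not on one Cloud/SaaS vendor.
from abc import ABC, abstractmethod


class CreditCheckService(ABC):
    """The business capability, defined independently of any vendor."""

    @abstractmethod
    def score(self, customer_id: str) -> int:
        ...


class SaaSCreditCheck(CreditCheckService):
    """Primary implementation backed by an external Cloud/SaaS provider."""

    def __init__(self, reachable: bool) -> None:
        self.reachable = reachable  # stand-in for a real health check

    def score(self, customer_id: str) -> int:
        if not self.reachable:
            raise ConnectionError("SaaS provider unavailable")
        return 720  # placeholder for a call to the vendor's API


class InternalCreditCheck(CreditCheckService):
    """Fallback run on internal (or escrow-recovered) infrastructure."""

    def score(self, customer_id: str) -> int:
        return 650  # placeholder scoring logic


def resolve_service(primary: SaaSCreditCheck,
                    fallback: InternalCreditCheck) -> CreditCheckService:
    """Use the primary provider if it responds; otherwise fall back."""
    try:
        primary.score("health-check")
        return primary
    except ConnectionError:
        return fallback


if __name__ == "__main__":
    service = resolve_service(SaaSCreditCheck(reachable=False),
                              InternalCreditCheck())
    print(service.score("customer-42"))  # served by the internal fallback
```

The seam between the interface and its implementations is also where any escrowed code and infrastructure would plug in if the provision ever had to be exercised.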

Negotiation leverage:
It’s on your side with startups


Mitigation and product functionality issues aside, there is another good reason to work with startup vendors: it’s much easier to get your way with smaller companies hungry for your business. Smaller vendors have fewer layers of corporate infrastructure, and many times you are in direct communication with the individuals responsible for the functionality of your implementation. In this way, it’s easier to get your voice heard on features or bug fixes. Don’t like the way something works or want a new feature? Pick up the phone and talk directly to the product or development managers, or even the CTO. Perhaps you’ll get a fix the same day or within a very short timeframe. Try that with one of the super-vendors.

Smaller companies are more eager to negotiate, especially if you are a large enterprise that could be a marquee name for them.



It’s also easier to negotiate on price. While large vendors might be able to discount or cut the price on one of their offerings so they can make another one sweeter, the realities of large sales forces and commission structures require them to keep their products at a certain (increasingly higher) price point. Smaller companies are more eager to negotiate, especially if you are a large enterprise that could be a marquee name for them.

Finally, it’s easier to get help with your specific implementation from startup companies. Many enterprise software startup companies know that their products are not plug-and-play and require some additional effort and expense to set them up. As a result, many startups have professional services arms whose goals are not to drive revenue for the company, but rather to support the products in customer installations.

Unless the startup vendor charges for this additional service (and we regularly counsel them not to), you should consider this to be free consulting and professional services help. Use as much of this as possible, and even negotiate more into your contract. It is in both your and the startup’s best interests to make sure you get the value you require from your investment.

The ZapThink take

As you can see from the past few ZapFlashes, ZapThink is very concerned that the rapid consolidation and maturation of the enterprise IT landscape will have a negative impact on innovation in the marketplace. We believe that the consolidation is resulting in mammoth conglomerates of vendors that will be harder, more confusing, and more expensive to work with. We believe that there is just as much uncertainty around the future of the large vendors’ offerings as there is with startup offerings. In this light, we don’t believe that there’s anything more inherently risky about a startup solution than an established, incumbent vendor solution.

The only thing that has us concerned about the startup landscape is the shortage of new startups. We’ve seen a significant drop-off in new enterprise software venture creation. We are not entirely sure why this is. Is there simply less demand for new enterprise software solutions? Is there less opportunity for new enterprise software startups?

Has the venture capital and finance community lost interest in enterprise software? Or has the area of innovation moved away from enterprise software? We hope none of these things is true. The enterprise still has leagues to go to get closer to the vision of loosely coupled, agile, heterogeneous systems that can meet the ever-changing needs of business with high governance and low risk. There’s plenty of opportunity here. Startups: do your part innovating in this space. Enterprises: do your part and implement startup companies’ offerings so that innovation does not come screeching to a halt.

This guest post comes courtesy of Ronald Schmelzer, senior analyst at ZapThink.




Sunday, March 21, 2010

Essential reading on impact of Web and media shift on thinking, socializing, publishing

From today's NYT:
Instead of reading an entire news article, watching an entire television show or listening to an entire speech, growing numbers of people are happy to jump to the summary, the video clip, the sound bite — never mind if context and nuance are lost in the process; never mind if it’s our emotions, more than our sense of reason, that are engaged; never mind if statements haven’t been properly vetted and sourced.
A lot more goodies where this came from. This may be one of the most important topics and issues of our era.

Tuesday, March 16, 2010

Pegasystems doubles down on winning streak with Chordiant buy

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

We’d be the first to admit our surprise that Pegasystems has thrived as well as it has. Our initial impression of the company about four to five years ago was of an interesting, rather eccentric bunch whose absent-minded professors had great ideas but little business savvy. At the time, the company was marginally profitable.

Maybe their professors weren’t that absent-minded and their approach not so pedantic after all, as the company has been on a winning streak for the past 10 quarters, scoring 25 percent growth last year as the rest of the economy (and software industry) tanked.

Tilting against windmills, the company scored big gains among established clients across financial services industries, who used Pega’s process “solution frameworks” covering areas such as loan origination and underwriting, wholesale banking, and retail bank account opening.

Pegasystems is on the right side of history, having embraced vertical frameworks. That’s an approach that you also find IBM taking. Although Pega has been in business for roughly 25 years, its sales didn’t take off until it began rolling out a series of templates or frameworks that provided a 60 percent solution, eliminating the need to model commodity processes from scratch.

Either way, Pega’s success bears out our observation that vertical templates are the future of enterprise applications — using the framework as a raw template, they will be composed from existing applications and data sources rather than written or implemented as packaged applications from scratch.

Growth last year added $35 million to the company’s cash cushion, leaving it with a healthy $200 million in the bank. But cash in a consolidating industry is trash when your rivals are either acquiring or getting acquired left and right. And so the question was: what would Pega do with its cash?

We have the answer


We now have the answer: Pega announced yesterday its intent to acquire Chordiant, whose specialty is dissecting, analyzing, and optimizing a company’s experiences with its customers. The deal, at $167 million in cash, actually nets out to about $116 million when you factor in Chordiant’s $51 million cash position.

Pega’s solicited offer trumped an abortive, unsolicited $105 million offer back in January from CDC, an aspiring Hong Kong-based enterprise applications provider. Chordiant has come down a few notches over time, with revenue falling to $75 million last year from $115 million a couple of years ago. Pega’s $5-per-share bid is about 10 percent of the company’s 2000 dot-com peak, but a 30 percent premium over its current valuation.

Pega got a good deal, and Wall St. agreed, as shares of both companies rose on the heels of the announcement. It reflects the fact that Chordiant provides Pega two opportunities: 1) deepen its presence in financial services accounts by going into the front office, and 2) gain a new beachhead in telecom, where it currently has but a single critical-mass client. Although telco could broaden Pega’s addressable market, the deal wouldn’t work if the solutions weren’t complementary.

Pegasystems offers a highly sophisticated, rules-driven approach to defining, modeling, and executing business processes. It offers roughly 30 industry-specific templates, and well over a dozen cross-industry frameworks such as customer process management, control and compliance, procurement, and so on.

On paper, it looks like yin and yang. But there are basic architectural differences between the products.

By contrast, Chordiant covers what it calls “customer experience management,” which tracks customer interactions and offers predictive analytics for optimizing cross-selling, upselling, or customer retention strategies, or for predicting risk or churn. It also offers vertical templates for financial services, healthcare, and telecom. Chordiant’s predictive analytics have adaptive capabilities, whereby the rules can change based on trends in customer response; if a promotion offer proves not as attractive as initially forecast, the rules can adjust the algorithm to reflect reality.
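
As a deliberately simplified illustration of that adaptive behavior (not Chordiant’s actual algorithm), the sketch below starts each offer at its forecast response rate and blends in observed accept/decline outcomes with an exponentially weighted moving average, so an underperforming promotion gradually loses out to better ones. The offers and numbers are invented for the example.

```python
# Simplified illustration of adaptive offer selection: each offer's estimated
# response rate starts at the forecast value and is nudged toward observed
# reality as accept/decline outcomes arrive. Not any vendor's real algorithm.

LEARNING_RATE = 0.1  # how quickly observed outcomes override the forecast

# Initial forecast response rates (hypothetical numbers)
estimated_response = {"10_pct_discount": 0.20, "free_shipping": 0.12}


def record_outcome(offer: str, accepted: bool) -> None:
    """Blend the latest outcome into the offer's estimated response rate."""
    observed = 1.0 if accepted else 0.0
    current = estimated_response[offer]
    estimated_response[offer] = (1 - LEARNING_RATE) * current + LEARNING_RATE * observed


def best_offer() -> str:
    """Present whichever offer currently has the highest estimated response rate."""
    return max(estimated_response, key=estimated_response.get)


if __name__ == "__main__":
    # The discount was forecast to do well, but customers keep declining it,
    # while free shipping keeps getting accepted.
    for _ in range(20):
        record_outcome("10_pct_discount", accepted=False)
        record_outcome("free_shipping", accepted=True)
    print(best_offer())  # -> "free_shipping" once reality overrides the forecast
```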

The potential synergy is where Chordiant optimizes customer-facing front office processes while Pega’s BPM frameworks optimize the corresponding back office processes such as loan origination.

On paper, it looks like yin and yang. But there are basic architectural differences between the products, as decision management consultant and author James Taylor has pointed out. Keep in mind that Taylor has traditionally been skeptical of Pega’s approach to embedding rules inside its process engine, rather than loosely coupling the two.

But he makes valid points that Chordiant handles rules differently from Pega, that the potential synergy between the two is great, but that the company needs to take care that technical differences do not “derail the technical integration or cause the merged company to merge its operations without merging its products.”

So on paper, Pega has made a sound deal. As the company is not yet experienced in digesting acquisitions of this size, its success in consummating the Chordiant acquisition will become a predictive indicator of the company’s ability to survive and grow in a consolidating market where it will be expected to make more such deals.
This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.