Wednesday, June 2, 2010

What can businesses learn about predictive analytics from American Idol?

This guest post comes courtesy of Rick Kawamura, Director of Marketing at Kapow Technologies.

By Rick Kawamura

Social media data continues to grow at astronomical rates. Last year Twitter grew 1,444 percent, with over 50 million tweets sent each day, and Facebook now has over 400 million active users. Every minute, 600 new blog posts are published, 34,000 tweets are sent, and 240,000 pieces of content are shared on Facebook.

The numbers are absolutely astounding. But is social media data credible? And can tangible business intelligence (BI) be extracted from it? [Disclosure: Kapow Technologies is a sponsor of BriefingsDirect podcasts.]

Reality Buzz, a new social media analysis project powered by web data services technology, was created to answer this very question by examining whether real-time analysis of social media conversations can predict the outcome of popular reality television shows like American Idol and Dancing with the Stars. Each week, Reality Buzz collected tens of thousands of tweets, comments, and discussions about contestants on both programs and applied sentiment analysis to the data. The result was clear, data-driven insight into which contestants would be eliminated.
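The collect-and-score pipeline described above can be sketched in a few lines. This is a hypothetical illustration, not Kapow's actual tooling: the tiny sentiment lexicon and the sample comments are invented, and real sentiment analysis is far more sophisticated.

```python
# Illustrative sketch of a Reality Buzz-style pipeline: score each collected
# comment against a tiny sentiment lexicon, then tally results per contestant.
POSITIVE = {"love", "great", "amazing", "best", "voted"}
NEGATIVE = {"hate", "awful", "worst", "boring", "annoying"}

def score_comment(text: str) -> int:
    """Return +1 (positive), -1 (negative), or 0 (neutral) for one comment."""
    words = set(text.lower().split())
    pos = len(words & POSITIVE)
    neg = len(words & NEGATIVE)
    return (pos > neg) - (neg > pos)

def tally(comments: list[tuple[str, str]]) -> dict[str, dict[str, int]]:
    """Aggregate per-contestant counts of positive/negative/neutral mentions."""
    totals: dict[str, dict[str, int]] = {}
    for contestant, text in comments:
        buckets = totals.setdefault(contestant, {"pos": 0, "neg": 0, "neu": 0})
        s = score_comment(text)
        buckets["pos" if s > 0 else "neg" if s < 0 else "neu"] += 1
    return totals

sample = [
    ("Casey", "I love Casey, best performance"),
    ("Casey", "Casey was boring tonight"),
    ("Lee", "Lee is amazing"),
]
print(tally(sample))
```

In practice the lexicon step would be replaced by a commercial sentiment engine, but the aggregation shape stays the same.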

Stepping outside the example of "reality" TV, social media sentiment can be a powerful source of data that arms organizations with real-time intelligence to make more strategic business decisions. Based on experience with Reality Buzz, here are five tips for extracting real value from social media data:
Data trumps conventional wisdom

While Malcolm Gladwell, author of Blink: The Power of Thinking Without Thinking, would say otherwise, data-driven business decisions definitely outperform guesswork. Week after week on Dancing with the Stars, the infamous Kate Gosselin accounted for up to 40 percent of all conversations in social media. Unfortunately for Kate, 95 percent of those comments were negative.

Conventional wisdom said that she should pack her bags. Yet the data showed that, despite all the negative conversations, she still had a greater share of positive comments than several other contestants, meaning she was far less likely to be eliminated. Because viewers vote for the contestants they'd like to keep on the show, elimination correlates strongly with positive sentiment, not negative. It wasn't until the fourth week, when Kate's volume of positive comments died down, that she was voted off.
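The heuristic at work here, predicting elimination from the lowest share of positive comments rather than the highest volume of negative ones, can be sketched as follows. The counts are invented for illustration and are not the actual Reality Buzz figures.

```python
# Sketch of the elimination heuristic: viewers vote to KEEP contestants,
# so the contestant with the lowest share of positive mentions is the one
# predicted to go home. All counts below are invented.
def positive_share(counts: dict[str, int]) -> float:
    total = counts["pos"] + counts["neg"]
    return counts["pos"] / total if total else 0.0

def predict_eliminated(week: dict[str, dict[str, int]]) -> str:
    return min(week, key=lambda name: positive_share(week[name]))

week = {
    # Kate: huge volume, overwhelmingly negative, yet a higher positive
    # share than the contestant below her -- so she survives this week.
    "Kate": {"pos": 200, "neg": 3800},   # 5.0% positive
    "Niecy": {"pos": 25, "neg": 575},    # ~4.2% positive
    "Evan": {"pos": 400, "neg": 100},    # 80% positive
}
print(predict_eliminated(week))  # Niecy
```

Note that ranking by raw negative volume would have picked Kate every week, which is exactly the mistake the data avoided.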

Product managers deal with this dilemma all the time. Tasked with determining the next set of product features to drive greater profitability, they have to manage the CEO’s gut feel while also satisfying the needs of those who have to sell it, both of whom want it better, cheaper and faster. But “better, cheaper, faster” isn’t a great long-term strategy. A great product manager would look to the data to find unmet needs and untapped markets, and social media is a great place to find these hidden nuggets of intelligence.

Timing is critical

Any data over 24 hours old is pretty much worthless for predicting who will be eliminated from a reality TV show. The same holds true in the business world, where it’s imperative for the data to be as close to an event as possible, as this data has the strongest effect on sentiment.
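The 24-hour rule amounts to a simple timestamp filter applied before any analysis. The data shape and timestamps below are invented for illustration.

```python
# Minimal sketch of the recency rule: discard any data point older than
# 24 hours relative to the event being analyzed.
from datetime import datetime, timedelta

def fresh_only(items, event_time, max_age=timedelta(hours=24)):
    """Keep items whose timestamp falls within max_age before the event."""
    return [i for i in items if timedelta(0) <= event_time - i["ts"] <= max_age]

event = datetime(2010, 5, 25, 20, 0)
items = [
    {"text": "voting now!", "ts": datetime(2010, 5, 25, 19, 0)},       # 1h old: keep
    {"text": "last week's show", "ts": datetime(2010, 5, 23, 20, 0)},  # 2d old: drop
]
print(len(fresh_only(items, event)))  # 1
```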

When launching a new product, for example, companies need to consider sentiment immediately before and after the launch. The same applies to a marketing campaign. Say Toyota runs a full-page ad in The Wall Street Journal only to get a report on sentiment a few weeks later. Worthless. Companies need to know their customers' sentiment just before they publish the ad to craft the most relevant message, and immediately afterward to measure its resonance with their audience. Weeks-old data may prove costly, resulting in more damage to the brand and revenue by further demonstrating a lack of understanding and responsiveness to frustrated customers.

Don’t be blind to the noise factor

It's easy to understand trends, changes in momentum, volume of traffic, and the ratio of positive to negative sentiment. However, there is a lot of noise that can easily skew the data, especially with large, very public shows like American Idol. The bigger the show or product, the more noise. This is most prominent on Twitter, which very often represents the largest source and volume of data. Despite the noise, though, there is valuable information that shouldn't be ignored. Interestingly, most of the noise resides in neutral sentiment, not positive or negative: comments, articles, and reviews about a brand that don't express any real opinion.

This is why it’s important to understand how to filter the data to maintain its quality and relevance.
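Since most noise sits in neutral sentiment, one minimal filtering step (assuming items have already been scored -1/0/+1) is to drop neutral items before computing the positive-to-negative ratio. The sample data is invented.

```python
# Sketch of the noise-filtering step: neutral items (links, recaps,
# announcements with no opinion) are removed before computing the
# positive/negative ratio that drives the prediction.
def filter_neutral(scored):
    """scored: list of (text, score) pairs with score in {-1, 0, +1}."""
    return [(t, s) for t, s in scored if s != 0]

def pos_neg_ratio(scored):
    opinions = filter_neutral(scored)
    pos = sum(1 for _, s in opinions if s > 0)
    neg = len(opinions) - pos
    return pos / neg if neg else float("inf")

scored = [("great show", 1), ("link: recap article", 0),
          ("awful song choice", -1), ("tour dates announced", 0)]
print(pos_neg_ratio(scored))  # 1.0
```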

Not all social media sentiment is created equal

Companies need to clearly define their goals before analyzing social media data. There are differing degrees of sentiment, and not all translate equally well. Most sentiment analysis tools begin by separating data into positive and negative groups, yet even within a fan group there are varying degrees of support for contestants. In trying to determine the number of votes for a contestant, consider this data: "I just voted 100 times for Casey" vs. "My top 3 are Lee, Michael and Casey" vs. a retweet of a link to a video or article that mentions Casey.

The reality is that not all data is needed or equal in weight. For American Idol, votes are cast for the person you want to keep on the show, so negative sentiment has little correlation to who will be voted off. This requires factoring out negative comments from total sentiment to get the most accurate prediction. Companies also need to consider how to weigh one tweet versus a Facebook comment versus a blog post. Each is just one piece of data, but does each one count equally?
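One way to answer these questions is to scale each mention by both its intensity and its source type. The weights below are illustrative assumptions, not values from the Reality Buzz project; the point is simply that a blog post and a retweet should not count the same.

```python
# Sketch of a weighting scheme: a mention's contribution is scaled by its
# source type and by how strong the expressed support is. All weights are
# invented for illustration.
SOURCE_WEIGHT = {"tweet": 1.0, "facebook": 2.0, "blog": 5.0}
INTENSITY_WEIGHT = {"explicit_vote": 3.0, "ranked_mention": 1.5, "retweet": 0.5}

def mention_score(source: str, intensity: str) -> float:
    return SOURCE_WEIGHT[source] * INTENSITY_WEIGHT[intensity]

# "I just voted 100 times for Casey" in a tweet vs. a retweeted link:
print(mention_score("tweet", "explicit_vote"))   # 3.0
print(mention_score("tweet", "retweet"))         # 0.5
print(mention_score("blog", "ranked_mention"))   # 7.5
```

The right weights are an empirical question; the scheme just makes the choice explicit instead of implicitly counting everything equally.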

Don’t look at data in a vacuum

Having knowledge of events and circumstances is critical to understanding and extracting intelligence from social media data. In the case of Reality Buzz, it was helpful to watch the performance shows for added context. That context is also how companies raise further hypotheses to investigate once they've seen the output.

Similarly, some manual data review is essential to ensure quality and consistency. For example, when using an automated sentiment analysis tool, companies can weigh keywords differently. In addition, automated tools are not yet capable of distinguishing sentiment as functional, emotional, or behavioral. So in monitoring social media data, there is a huge difference between "I like my new Canon camera" and "I just told my friend to buy the new Canon camera." While both are positive sentiments, the latter should be weighed much more heavily.
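The Canon example can be approximated with a manual keyword-weighting pass layered on top of an automated tool. The cue phrases and weights here are illustrative assumptions, standing in for whatever a company's own review of its data suggests.

```python
# Sketch of separating behavioral (advocacy/action) from merely emotional
# positive sentiment via cue phrases. Cues and weights are invented.
BEHAVIORAL_CUES = ("told my friend", "recommended", "just bought", "voted")
EMOTIONAL_CUES = ("i like", "i love", "great", "awesome")

def weighted_sentiment(text: str) -> float:
    t = text.lower()
    if any(cue in t for cue in BEHAVIORAL_CUES):
        return 3.0   # advocacy or action: weigh much more heavily
    if any(cue in t for cue in EMOTIONAL_CUES):
        return 1.0   # positive opinion only
    return 0.0       # no recognizable sentiment cue

print(weighted_sentiment("I like my new Canon camera"))                         # 1.0
print(weighted_sentiment("I just told my friend to buy the new Canon camera"))  # 3.0
```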

The growing mass of social media data is a treasure trove of intelligence, whether you're predicting reality show winners or moving your business forward. Done correctly, collecting and analyzing social media sentiment can be a pain-free, powerful tool for real-time feedback, predictive analytics, and gaining the competitive edge you need to win.
Rick Kawamura is Director of Marketing at Kapow Technologies, a leading provider of Web data services. Rick was most recently VP of Marketing at DeNA Global, and previously held strategic and product management roles at Palm and Sun Microsystems. He can be reached at rick.kawamura@kapowtech.com.
You may also be interested in:

Tuesday, June 1, 2010

Ariba, IBM deal shows emerging prominence of cloud ecosystem-based collaboration and commerce

The more you delve into how cloud computing can reshape business, the clearer the importance of ecosystems becomes.

No one cloud provider is likely to forecast and deliver all that any business needs or wants. More importantly, the role of the cloud provider is less about providing complete services than about enabling the ease and adaptability of acquiring, delivering, and monetizing a variety of services in dynamic combination.

We're now seeing that the marketplace of cloud-hosted APIs is rich and exploding. But it's a self-service, organic market model that's emerging -- not a top-down ERP-like affair. And that is likely to make all the difference in terms of fast adoption.

Do providers like Apple, Google and Amazon produce the lion's share of services themselves -- or do they provide a fertile garden in which others create services and APIs that make the garden most valuable to all participants, inviting more guests, more development, more collaboration?

The organic model is also likely to repeat in ecosystems that allow buyers and sellers to align, and business processes between and among them to flourish. The business-to-business (B2B) commerce cloud is now being built. Recent acquisitions, like IBM's buy of Cast Iron and intent to buy Sterling Commerce, point up the "business garden" goals of Big Blue. Cast Iron allows the cultivation of hybrid clouds, clouds of clouds and rich services integration. Sterling brings EDI-based networks into the fold.

IBM clearly likes the idea of playing match-maker between traditional and new business models. And this cloud garden party effect aligns perfectly with IBM's tendency to avoid providing packaged business applications in favor of the platforms, middleware, process enablement and collaboration capabilities that support others' discrete applications.

Last week's announcement of a cloud collaboration partnership between IBM and Ariba, then, furthers the emerging prominence of cloud commerce ecosystems. To encourage more e-commerce, the IBM-Ariba deal matches B2B buyers and sellers via LotusLive collaboration and social networking services, all through cloud delivery models.

Conference capstone

The announcement came as a capstone to the Ariba Live 2010 conference in Orlando. [Disclosure: Ariba is a sponsor of BriefingsDirect podcasts.] I had fun at the conference spouting off on cloud benefits, and tweeting up some of the mainstage events under #AribaLive.

Ariba plans to integrate its Ariba Commerce Cloud with IBM LotusLive to help buyers and sellers communicate and share information more fluidly and effectively, leading to faster, more confident business decisions, the companies said. Ariba plans to integrate IBM’s LotusLive with Ariba Discovery, a web-based service that helps buyers and sellers find each other quickly and automatically helps match buyers’ requirements to seller capabilities.

Both Ariba and IBM are recognizing the power and huge opportunity of being at the center of cloud-based commerce. And being at the center means allowing the participants to do the actual driving, to enable the community to seek and find natural partners via social interactions. We're likely to see the equivalent of app stores and social networks well up for B2B commerce, scaling both down and up, in the coming months and years.

“The successful combination of LotusLive and the Ariba Commerce Cloud will provide such a matchmaking comfort zone in which networks of partners, suppliers and customers can easily work together across company boundaries to help do their jobs more efficiently and cost-effectively, and perhaps even develop lasting relationships," said Sean Poulley, Vice President, IBM Cloud Collaboration, in a release.

As Ariba Chairman and CEO Bob Calderoni says, what's now good for consumer commerce is soon to be good for the business side of the equation. It's simply the most efficient.

After IBM set its sights on Sterling, I at first wondered if IBM and Ariba might find themselves competing. But last Wednesday's deal shows that ecosystems rule. All-in-one cloud provider aspirants should take note. The way to make the network most valuable is to empower the businesses (both sellers and buyers) to carve out what they want to do themselves.

IBM Lotus collaboration services plus Ariba's cloud and commerce network services seem to be striving to reach the right balance between providing a fertile arena and then getting out of the gardeners' way.

You may also be interested in:

With eye on cloud standard, Apprenda offers free downloadable version of SaaS application server

Apprenda, a software as a service (SaaS) middleware provider, is now offering a free downloadable SaaS stack that provides much of the functionality of its flagship SaaSGrid, an application server for on-demand apps.

The Clifton Park, NY company says that SaaSGrid Express, announced today, provides free access to a foundation with which to deliver .NET applications as mature SaaS offerings with a competitive cost-of-delivery profile.

The model appears simple. Use community and ISV economic imperatives to drive a de facto standard into the market, thereby seeding a strong business for the upgrade-path commercial version. The timing is great, as finding a way to enjoy cloud computing scale and economics is top of mind for ISVs and early-adopter enterprises.

All of the main features and functions of the SaaSGrid application server have been ported to the self-installable Express edition. Apprenda says the product has drastically reduced time-to-market and capital requirements for independent software vendors (ISVs), SMBs, and enterprises, with some customers experiencing 50 to 70 percent reductions in planned engineering time and associated costs.

I like to think of this as allowing for SaaS "stack bursting," which could easily augment cloud-bursting efforts, seeing as Apprenda SaaSGrid can run on Amazon EC2 as well as on on-premises virtualized workloads. The model might also work well in migration efforts, moving from legacy apps to web and ultimately cloud deployments.

According to Apprenda CEO Sinclair Schuller, some 93 percent of ISV-delivered applications have yet to make the transition to on-demand, SaaS delivery. That's a lot of apps.

In addition to the complex architectural foundation afforded by SaaSGrid Express -- such as low-effort multi-tenancy and resilient grid scalability -- the product also provides some of the other "out of the box" application services found in the full-fledged commercial SaaSGrid offering, including:
  • Metering
  • Monetization
  • Subscription management
  • Application lifecycle management
  • Cloud control
  • Billing
  • Customer provisioning
This new edition enables developers to quickly build, deploy and onboard customers to their .NET applications, letting them build significant revenue streams without any license costs.

Fully licensed version

ISVs leveraging the full-production-licensed SaaSGrid edition pay a per-server, per-month license fee, and benefit from full customization and branding capabilities, as well as additional features. This licensing model, known as the SaaSGrid Monthly Server License, includes free access to maintenance and software upgrades and comprehensive customer service (based upon the number of licensed servers).

In April, Gartner analyst Yefim Natis profiled Apprenda as one of the "Cool Vendors" in the application platform as a service (APaaS) space. He said:
Apprenda's support of ISV (or private cloud application project) requirements for developing and running SaaS-style offerings far exceed the functionality available with the recently delivered Windows Azure SDK. Apprenda's advanced support for fine-grained and adjustable use tracking and billing is particularly valuable for ISVs.

Apprenda achieves this breadth of capabilities largely unintrusively, in part by intercepting and extending the Web and the database communications of the application and in part by modifying the compiled application intermediate language code (adding value and some overhead in the process).
However, Natis does offer one caution:
Apprenda's current business is primarily focused on ISVs. Historically, the business opportunities of middleware providers servicing the ISV market have been limited. With time, the company must develop a product offering that targets enterprise IT cloud application projects as well in order to expand its business opportunities.
If ISVs' needs and enterprise app-migration efforts alone jump-start adoption of SaaSGrid Express, it could make for a strong and clear path to clouds, from Amazon to Azure to the home-grown variety at an enterprise near you.

For more information on SaaSGrid Express and how to download it, visit the Apprenda web site: www.apprenda.com.

You may also be interested in:

Monday, May 31, 2010

BMC Software rolls out cloud-focused lifecycle management solution

Cloud computing is more than just a buzzword, yet it’s not quite mainstream. With its just-released Cloud Lifecycle Management offering, BMC Software is one of the companies working to bridge that gap and deliver on the promise of the managed cloud.

Cloud Lifecycle Management aims to help IT admins deliver and integrate cloud computing strategies more efficiently. It’s an IT management platform that promises more control and visibility in the cloud. And it’s seeing traction among some of the biggest names in high-tech, including Cisco, Dell, Fujitsu, NetApp, EMA, Red Hat and Blackbaud. Altogether, Houston-based BMC has inked more than $100 million in cloud foundation technology deals so far, the company says.

Next-gen cloud services

BMC Cloud Lifecycle Management not only aims to help enterprises build and operate private clouds more efficiently, it also offers opportunities to leverage external public cloud resources and makes way for service providers to develop and deliver cloud services. With so many tech industry heavy-hitters as partners and customers, it’s worth a closer look.

Here's what BMC Cloud Lifecycle Management includes:
  • A policy-driven service catalog that personalizes the list of available service offerings and customizations based on a user's role
  • A self-service web portal for requesting and controlling private and public cloud resources
  • Dynamic provisioning of the entire service stack across heterogeneous infrastructures
  • Out-of-the-box cloud management workflows that automate the assignment of compliance policies and performance monitoring tools
  • Pre-built IT service management (ITSM) integration for ITIL process interaction and compliance

eWeek notes that BMC Software was an original partner of Cisco Systems in the Unified Computing System (UCS) initiative in 2009. eWeek continues:
In the original UCS partnership scheme, BMC provided the provisioning, change management and configuration software in the stack. Cisco, of course, provided the networking and a new central server.

EMC and NetApp provided the storage capacity, VMware and Microsoft added their virtualization layers—depending upon the choice of the customer—and Accenture shaped the individual product deployments for customers.

Since then, UCS has added vBlocks, smaller modules of some of the aforementioned components, which can be integrated on a smaller scale and are not as daunting as a full-blown forklift overhaul to existing midrange and enterprise IT systems. vBlocks, too, include BMC middleware.

During all this iteration, BMC has been taking copious notes and has come up with its own new cloud-computing layer, BMC Cloud Lifecycle Management, which works with just about all data center-system-maker components, not just the UCS.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

You may also be interested in:

Apptio launches demand-based forecasting for IT budget and spend management

Apptio is betting big on the market for demand-based budget forecasting. A new feature in its technology business management solutions software suite aims to help business managers plan and budget more accurately by inputting departmental forecasts into its software.

The Bellevue, Wash., company is calling it a “closed-loop” approach to financial planning, cost management, and transparency. The promised result: tighter alignment with business priorities, improved cost efficiency, and transparent reporting on the cost and value of IT services.

Michel Feaster, vice president of products at Apptio, is convinced the company’s closed-loop financial planning process will “close the gap between IT and the business” by letting companies update budgets and forecasts based on real business priorities.

“Demand-based forecasting gives IT the data it needs to respond more effectively, and plan accordingly with minimal variance so they aren’t over- or under-committing resources,” Feaster said.

Budgeting and planning = painful and inaccurate

Indeed, Apptio's latest feature intends to remedy a notoriously painful and inaccurate IT budgeting and planning process. It was former General Electric CEO Jack Welch who once said, "The budgeting process at most companies has to be the most ineffective practice in management. It sucks the energy, time, fun, and big dreams out of an organization. It hides opportunity and stunts growth."

Apptio’s demand-based forecasting works on the premise that past performance is not an indicator of future trends. Many variables can change and those changes can make a ripple effect across the organization’s IT services needs. In essence, Apptio’s demand-based forecasting is applying best practices from the supply chain management world to IT budgeting and planning.
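The difference between trailing-average budgeting and driver-based forecasting can be illustrated in a few lines. To be clear, this is not Apptio's actual model; the driver, unit cost, and figures are invented purely to show why extrapolating past spend misses a step change in demand.

```python
# Illustrative contrast: extrapolating history vs. forecasting from a
# business driver (projected demand units times a unit rate), the idea
# borrowed from supply chain planning. All numbers are invented.
def trailing_average(history: list[float]) -> float:
    """Naive forecast: next year's spend = average of past years."""
    return sum(history) / len(history)

def driver_based(driver_units: float, unit_cost: float) -> float:
    """Driver-based forecast: projected demand units times unit cost."""
    return driver_units * unit_cost

history = [1.00, 1.05, 1.10]  # $M of storage spend over the last 3 years
print(round(trailing_average(history), 3))  # 1.05 -- misses any step change

# Suppose a planned analytics rollout doubles storage demand next year:
print(round(driver_based(driver_units=2400, unit_cost=0.0009), 2))  # 2.16 ($M)
```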

Companies like Starbucks, Cisco, and Volkswagen are reporting savings from using Apptio solutions to determine how changes in key business drivers affect IT services. In fact, Starbucks has seen $1.4 million in savings in nine months, while Volkswagen reports a 50 percent reduction in annual budgeting costs through Apptio's automation. Apptio believes the new demand-based forecasting will drive even stronger returns.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.
You may also be interested in:

Tuesday, May 25, 2010

IBM adds Sterling Commerce to Websphere, expands scope of B2B integration

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

We should have seen this one coming. IBM’s offer to buy Sterling Commerce for $1.4 billion from AT&T on Monday closes a major gap in the WebSphere portfolio, extending IBM’s array of internal integrations externally to B2B.

It's a logical extension, and IBM is hardly the first to travel this path: webMethods began life as a B2B integration firm before morphing into EAI, then SOA and BPM middleware, and was eventually acquired by Software AG. In turn, TIBCO recently added Foresight Software as an opportunistic extension to take advantage of a booming market in healthcare B2B transactions.

But neither Software AG's nor TIBCO's moves approach the scope of Sterling Commerce's footprint in B2B trading-partner management, a business that grew out of its heritage as one of the major EDI (electronic data interchange) hubs.

The good news is the degree of penetration that Sterling has; the other (we won’t call it bad) news is all the EDI legacy, which provides great fodder for IBM’s Global Business Services arm to address a broader application modernization opportunity.

Sterling’s base has been heavily in downstream EDI and related trading partner management support for retailers, manufacturers, and transportation/freight carriers. Its software products cover B2B/EDI integration, partner onboarding into partner communities (an outgrowth of the old hub and spoke patterns between EDI trading partners), invoicing, payments, order fulfillment, and multi-channel sales.

In effect, this gets IBM deeper into the supply chain management applications market as it already has Dynamic Inventory Optimization (DIOS) from the Maximo suite (which falls under the Tivoli umbrella), not to mention the supply chain optimization algorithms that it inherited as part of the Ilog acquisition which are OEM’ed to partners (rivals?) like SAP and JDA.

[Editor's note: At the Ariba Live 2010 conference in Orlando, the news of an IBM-Sterling marriage was seen as making IBM more complementary to Ariba's spend management SaaS and cloud offerings. This does not necessarily put IBM and increasingly SaaS-based Ariba into a competitive stance. More on that to come ... Dana Gardner.]

Asked if acquisition of Sterling would place IBM in competition with its erstwhile ERP partners, IBM reiterated its official line that it picks up where ERP leaves off – but that line is getting blurrier.

But IBM's challenge is prioritizing the synergies and integrations. As there is still a while before this deal closes -- approvals from AT&T shareholders are necessary first -- IBM wasn't about to give a roadmap. It did, however, point to one no-brainer: infusing IBM WebSphere vertical industry templates for retail with Sterling content. And many potential synergies loom.

At top of mind are BPM and business rules management that could make trading partner relationships more dynamic. There are obvious opportunities for WebSphere Business Modeler’s Dynamic Process Edition, WebSphere Lombardi Edition’s modeling, and/or Ilog’s business rules.

For instance, a game-changing event such as Apple's iPad entering or creating a new market for tablet clients could provide the impetus for changes to product catalogs, pricing, promotions, and so on; a BPM or business-rules model could facilitate such changes as an orchestration layer acting in conjunction with some of the Sterling multi-channel and order-fulfillment suites. Other examples include master data management, which can be critical when managing the sale of families of like products through the channel; and of course Cognos/BI, which can be used to evaluate the profitability or growth potential of B2B relationships.

Altimeter Group's Ray Wang voiced a question that was on many of our minds: Why would AT&T give up Sterling? IBM pointed to potential partnership opportunities, but to our mind, AT&T has its hands full attaining network parity with Verizon Wireless and is simply not a business solutions company.

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

You may also be interested in:

Thursday, May 20, 2010

HP shows benefits from successful application consolidation with own massive global supply chain project

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Access more information on Application Consolidation.
Read the full-length case study on HP's Application Consolidation.
Learn more about the Application Transformation Experience Workshop.

Our latest BriefingsDirect interview is with an executive from HP to look at proper planning and execution for massive application-consolidation projects, specifically by examining an HP project itself.

By unpacking this multi-year application consolidation project across global supply chains, we can learn about best practices and execution accelerators for such projects, which often involve hundreds of applications and impact thousands of people.

These are by no means trivial projects, and often involve every aspect of IT, as well as require a backing of the business leadership and the users to be done well. The goal through these complex undertakings is to radically improve how applications are developed, managed, and governed across their lifecycle to better support dynamic business environments. The stakes, therefore, are potentially huge for both IT and the business.

The telling case study, the Global Part Supply Chain project at HP, was initially undertaken in 2006 but became bogged down by sheer scale and complexity. After some changes in management approach and governance, however, the project quickly became hugely successful.

We learn how and why from Paul Evans, Worldwide Marketing Lead on Applications Transformation at HP. The interview is conducted by BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Evans: We have always said that the experiences we gain from our own work we would share openly, and sometimes we’re quite happy to say where we did go wrong. In this instance, we’ve written up a case study to give people an insight in more detail than I could possibly provide today. We're going to post that on our portal. If people want to go there, it’s relatively simple: HP's Application Consolidation case study.

There are so many lessons learned here, addressing what people have in terms of portfolio and then also delivering new, contemporary, revised types of applications and/or infrastructure. They’ll find videos and other materials of other customers who have embarked on these journeys, whether they’ve been driving that from the top down, from an application’s nature, or whether it’s people who are coming in from the infrastructure.

As you can imagine, HP is an extremely large organization. It makes products, as well as sells services, etc. In terms of product, just imagine your average PC, or your average server, and think of the number of components that are made up inside of that device. It runs into hundreds of thousands, whether it's memory chips, disk drives, screens, keyboards, or whatever.

For a company like HP, in the event that someone needs a spare part for whatever reason, they don't expect to wait a significant period of time for it to turn up. They want it delivered 24 hours later by whatever means that suits them.

So, it's essential for us to have that global supply chain of spare parts tailored toward the ones that we believe we need more -- rather than less -- and that we can supply those parts quickly and easily and, at the same time, cost effectively. That's important for any organization that is dealing in physical components or in the provision of a service. You want to maintain customer satisfaction or increased customer satisfaction.

Customer centric

For us, it was essential that a massive global supply chain organization was extremely customer-centric, but at the same time, very cost-effective. We were doing our utmost to reduce costs, increase the agility of the applications to service the customers, and fuel growth, as our organization and our business grows. The organization has got to respond to that.

So the primary reasoning here was that this is a large organization, dealing with multiple components with pressures on it both from the business and the IT sides.

One of the primary reasons we had to do this is that HP has been an amalgam of companies: the original Hewlett-Packard, Compaq, Tandem, DEC. All of these organizations had their own bills of materials and their own skills, and basically this thing has just grown like Topsy.

What we were trying to do here was to say that we just couldn't continue to treat these systems as un-integrated. We had a lot of legacy environments that were expensive to run, a lot of redundancy, and a lot of overlap.

The goal here clearly was to produce one integrated solution that treated the HP customer as an individual and, on the back end, consolidated the applications down to the ones we really needed to move forward. Another goal was to retire those applications that were no longer necessary to support the business processes.

The whole notion of this coming about through mergers and acquisitions is very common in the marketplace. It's not unique just to HP. The question of whether you just live with everybody’s apps or you begin to consolidate and rationalize is a major question that customers are asking themselves.

From the IT side, there was clearly a view from the top down that living with 300 applications in the supply-chain world was unacceptable. But from the business side, the real push was that we had to improve certain metrics. We have a metric called the spend-to-revenue ratio, which is simply what we spend on parts relative to the revenue we bring in, and we were clearly below par there.
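To make the metric concrete, here is a minimal sketch of a spend-to-revenue calculation. The figures and the function name are hypothetical, invented purely for illustration; they are not HP's actual numbers.

```python
# Hypothetical illustration of a spend-to-revenue ratio.
# The dollar figures are invented for the example.

def spend_to_revenue(parts_spend: float, revenue: float) -> float:
    """Return spend on parts as a fraction of revenue generated."""
    if revenue <= 0:
        raise ValueError("revenue must be positive")
    return parts_spend / revenue

# Example: $120M spent on parts against $1B of revenue.
before = spend_to_revenue(parts_spend=120_000_000, revenue=1_000_000_000)
# A 19 percent reduction in the ratio, as the interview cites for the program overall.
after = before * (1 - 0.19)
print(f"before: {before:.3f}, after: {after:.3f}")
```

The point of tracking it as a ratio rather than raw spend is that it stays comparable as the business grows.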

We had some business imperatives driving this project: we needed to save money, deliver faster, and do it more reliably. If we tell a customer they're going to get the part within 24 hours, we deliver in 24 hours, not 36 or 48 because we weren't quite sure where the part was. We had to keep those business commitments.

One thing the rationalization of HP's IT organization and technology ran up against is that, because we are human beings, most people think in a very siloed way.

They see their suite of applications supporting their business. They like them. They love them. They’ve grown up with them, and they want to continue using them. Their view is, "Mine is perfect to suit my business requirement. Why would I need anything else?"

That's okay when you're very close to the coalface. You can always deem the applications you use to be strategic, an interesting word that a lot of people use. But as you zoom out from that environment and take a more holistic view of the silos, you begin to see that the duplication and replication is grossly inefficient and grossly expensive.

So, our whole goal here was to align business and IT in terms of a technological response to a business driver.

When we started the project, we were basically driving it by committee. Individual business units were saying, "I need applications x, y, z." Another group said, "Actually, we need a, b, c." There was virtually no ability to reach any consensus. The goal was to go from 300 apps to 30 apps, and we were never going to do it if everyone could self-justify the applications they needed.

What we did was discard the committee approach. Instead, the effort was led by two people, one from the business side and one from the IT side, both with supply chain experience and each with their own specialist areas. These two people were the drivers. The buck stopped with them. They had to make the big decisions.

To support them, they had a sponsorship committee of senior executives, to which they could always escalate, if there was a problem making a final decision about what was necessary.

Randy Mott, the HP CIO, has the direct support of Mark Hurd, the HP chairman and CEO. In my experience, that's absolutely essential in any project a customer undertakes. They have to have executive sponsorship from the top.

If you don't, any time you get to an impasse, there's no way out. It just devolves into argument and bickering. You need somebody who's going to make the decision and say, "We're going this way, and we're not going that way."

Getting on track

So for us, setting up this governance team of two people to make the hard decisions, supported by a project management team to go off and enact those decisions, was how we really began to move this project forward, get it on track, get it in on time, and get it in on budget.

Starting out by saying "let's have a big committee to make the decisions" was the wrong approach. We were going nowhere. We had to rationalize and say "no."

Access more information on Application Consolidation.
Read the full-length case study on HP's Application Consolidation.
Learn more about the Application Transformation Experience Workshop.

Two respected individuals, one from the IT side and one from the business side, were totally aligned on what they were doing and shared the same vision of what they were trying to achieve. By virtue of that, we could enforce decisions throughout, even unpopular ones.

We had to focus on driving this both from business and IT. As I said in this example, we went from 300 apps to 30 apps. We had a 39 percent reduction in our inventory dollars. We reduced our supply chain expenses. We reduced the cost of doing next day delivery. We're heading toward reducing our CO2 emissions by 40 percent on those next-day deliveries.

But overall, we drove the global supply chain's spend-to-revenue measure down by 19 percent. We're running a better, faster, cheaper organization that is more agile. As you said, it positions us better to exploit situations as they change and to see change as more of an opportunity than a threat.

We'd like to think that organizations out there with a supply chain challenge could now look at this and say, "Maybe we could do the same thing." The alignment between business and IT is probably the most important facet of all. You can debate which platform, which network, which disk drive, or which operating system, and you can have a lot of fun with that. But in this instance, a lot of the success was driven by setting up the right governance and decision-making structure with the right sponsorship.

Over the last 12 months, people have realized that now is the time to act for organizations that want to remain competitive and innovative. Unfortunately, I still see a lot of companies that believe doing nothing is the thing to do and will just wait for the economy to rebound. I don't believe it's going to rebound to the same place. It may come back, and it may be stronger, but it may end up in a different place.

The organizations that are not waiting, but are trying to be innovative and competitive, pull away from the competition, and give themselves some breathing space, are the ones that are going to sustain themselves.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.



Wednesday, May 19, 2010

Kapow delivers Web Data Server 7.2 to make BI easier to extract from across web-based activities

Aiming to meet the needs of enterprises building, testing, and deploying web data services, Kapow Technologies last week released Web Data Server 7.2, the latest iteration of its flagship software.

The update features a new design studio to develop, test, deploy and manage web data services. Developers can review collected data with any browser, and a Web-based scheduling interface allows for timing and automating data retrieval. Users can therefore easily develop and deploy automated processes, or "robots," said Kapow.
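That kind of scheduled, automated retrieval follows a generic pattern. The sketch below uses only Python's standard library and is not Kapow's actual API; `fetch_data` is a hypothetical stand-in for a deployed "robot."

```python
# Generic sketch of scheduled, automated data retrieval.
# fetch_data() is a hypothetical stand-in for a deployed "robot";
# this illustrates the pattern, not Kapow's API.
import sched
import time

def fetch_data() -> str:
    """Stand-in for a web data collection job."""
    return "collected"

def run_every(scheduler: sched.scheduler, interval: float, job, runs: int) -> list:
    """Run `job` every `interval` seconds for `runs` iterations, collecting results."""
    results = []
    def step(remaining):
        results.append(job())
        if remaining > 1:
            # Re-enter the scheduler for the next iteration.
            scheduler.enter(interval, 1, step, (remaining - 1,))
    scheduler.enter(0, 1, step, (runs,))
    scheduler.run()  # blocks until the queue is drained
    return results

s = sched.scheduler(time.monotonic, time.sleep)
print(run_every(s, interval=0.01, job=fetch_data, runs=3))  # ['collected', 'collected', 'collected']
```

A production system would of course use a persistent scheduler and handle failures; the point is simply that timing and retrieval are decoupled from the collection logic itself.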

The value behind the updated version centers on real-time, actionable business intelligence (BI). Web Data Server 7.2 automates access to and integration with any Web data source, from Web applications inside the firewall to data stored in the cloud to data found across the Web. [Disclosure: Kapow is a sponsor of BriefingsDirect podcasts.]

Serving the needs of both IT and line-of-business users, Web Data Server 7.2 saves time and resources in the quest for actionable BI by significantly shortening application and data integration project timelines.

The web-based data explosion

“The proliferation of Web-based data, both inside and outside the company, continues to explode, providing enterprises with remarkable potential to leverage Web data services for market insights and analytics,” said Stefan Andreasen, CTO of Kapow Technologies.

Web Data Server 7.2 offers some interesting features that could turn the heads of BI and social media analytics and trends gathering practitioners looking for a user-friendly solution that provides quick results.

Sneak peek at Web Data Server 7.2

Among the new features in Web Data Server 7.2 Design Studio is a Data Viewer that lets users see collected data within the Design Studio and load it directly into Microsoft Excel, perhaps the predominant BI results delivery interface on the planet.

In other Kapow developments, the Palo Alto, Calif. firm was recently recognized as a Laureate by IDG's Computerworld Honors Program.

Web Data Server 7.2 also offers Native XML Support, FTP and File System Interaction, new converters to XML, JSON and CSV formats, improved database functionality, enhanced production monitoring and more than 50 other improvements that significantly enhance the robot development, deployment and management experience, said Kapow.
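For a sense of what such format conversion involves, here is a generic sketch using Python's standard library that turns collected records into JSON and CSV. The records and field names are invented for illustration; this is not Kapow's converter API.

```python
# Generic sketch of converting collected records to JSON and CSV.
# The data is invented for illustration.
import csv
import io
import json

records = [
    {"source": "example.com", "mentions": 42},
    {"source": "example.org", "mentions": 17},
]

# JSON: serialize the list of records directly.
as_json = json.dumps(records)

# CSV: write a header row, then one row per record.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["source", "mentions"])
writer.writeheader()
writer.writerows(records)
as_csv = buf.getvalue()

print(as_json)
print(as_csv)
```

The value of shipping such converters built in is that downstream BI tools, Excel included, can consume the output without custom glue code.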

Building on Kapow's mashup strengths, Kapow Web Data Server 7.2 includes updates to Kapow's browser and JavaScript engine to handle complex, dynamic web sources driven by Ajax and Google Web Toolkit. Finally, improvements were made to the browser-based management and scheduling console, including production monitoring and notifications, and new logging functionality for databases, Log4J, and e-mail.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.