Friday, June 4, 2010

Analysts probe future of client architectures as HTML 5 and client virtualization advances loom over desktops

Listen to the podcast. Find it on iTunes/iPod, download the transcript, or read a full copy. Charter Sponsor: Active Endpoints.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS.

The latest BriefingsDirect Analyst Insights Edition, Vol. 52, focuses on client-side architectures and the prospect of heightened disruption in the PC and device software arenas.

Such trends as cloud computing, service-oriented architecture (SOA), social media, software as a service (SaaS), and virtualization are combining and overlapping to upset the client landscape. If more of what users are doing with their clients involves services, then shouldn't the client be more services-ready? Should we expect one client to do it all very well, or do we need to think more about specialized clients that might be configured on the fly?

Today's clients are more tied to the past, where one size fits all, than to the future. Most clients consist of a handful of entrenched PC platforms, a handful of established web browsers, and a handful of PC-like smartphones. But what has become popular on the server, virtualization, has yet to be taken to its full potential on these edge devices. When it is, new types of dynamic, task-specific clients might emerge. We'll take a look at what they might look like.

Also, just as Microsoft's Windows 7 is quickly entering the global PC market, cloud providers are in an increasingly strong position to potentially favor certain client types or data and configuration synchronization approaches. Will the client lead the cloud, or vice versa? We'll talk about that too.

Either way, the new emphasis seems to be on full-media, webby activities, where standards and technologies are vying anew for some sort of a de-facto dominance across both rich applications as well as media presentation capabilities.

We look at the future of the client with a panel of analysts and guests: Chad Jones, Vice President for Product Management at Neocleus; Michael Rowley, CTO of Active Endpoints; Jim Kobielus, Senior Analyst at Forrester Research; Michael Dortch, Director of Research at Focus; JP Morgenthal, Chief Architect, Merlin International; and Dave Linthicum, CTO, Bick Group. The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Jones: In the client market, it's time for disruption. Looking at the general PC architectures, we have seen that since pretty much the inception of the computer, you really still have one operating system (OS) that's bound to one machine, and that machine, according to a number of analysts, is less than 10 percent utilized.

Normally, that's because you can't share that resource and really take advantage of everything that modern hardware can offer you. Dual cores and all the gigabytes of RAM that are available on the client are all great things, but if you can't have an architecture that can take advantage of that in a big way, then you get more of the same.

On the client side, virtualization is moving into all forms of computing. We've seen that with applications, storage, networks, and certainly the revolution that happened with VMware and the hypervisors on the server side. But the benefit of server virtualization was not only the ability to run multiple OSs side-by-side and consolidate servers (which is great, but definitely not as relevant to the client side). It's really the ability to manage the machine at the machine level and to take OSs and move them as individual blocks of functionality in those workloads.

The same thing for the client can become possible when you start virtualizing that endpoint and stop doing management of the OS as management of the PC, and be able to manage that PC at the root level.

Imagine that you have your own personal Windows OS, that maybe you have signed up for Microsoft’s new Intune service to manage that from the cloud standpoint. Then, you have another Google OS that comes down with applications that are specific from that Google service, and that desktop is running in parallel with Windows, because it’s fully controlled from a cloud provider like Google. Something like Chrome OS is truly a cloud-based OS, where everything is supposed to be stored up in the cloud.

Those kinds of services, in turn, can converge into the PC, and virtualization can take that to the next level on the endpoint, so that those two things don’t overlap with each other, and a level of service, which is important for the cloud, certainly for service level agreements (SLAs), can truly be attained. There will be a lot of flexibility there.

Virtualization is a key enabler into that, and is going to open up PC architectures to a whole brave new world of management and security. And, at a platform level, there will be things that we're not even seeing yet, things that developers can think of, because they have options to now run applications and agents and not be bound to just Windows itself. I think it’s going to be very interesting.

With virtualization, you have a whole new area where cloud providers can tie in at the PC level. They'll be able to bundle desktop services and deliver them in a number of unique ways.

Linthicum: Cloud providers will eventually get into desktop virtualization. It just seems to be the logical conclusion of where we're heading right now.

In other words, we're providing all these very heavy-duty IT services, such as database, OSs, and application servers on demand. It just makes sense that eventually we're going to provide complete desktop virtualization offerings that pop out of the cloud.

The beauty of that is that a small business, instead of having to maintain an IT staff, will just have to maintain a few clients. They log into a cloud account and the virtualized desktops come down.

It provides disaster recovery based on the architecture. It provides great scalability, because basically you're paying for each desktop instance and you're not paying for more or less than you need. So, you're not buying a data center or an inventory of computers and having to administer the users.

That said, it has a lot more cooking to occur, before we actually get the public clouds on that bandwagon. Over the next few years, it's primarily going to be an enterprise concept and it's going to be growing, but eventually it's going to reach the cloud.

There are going to be larger companies. Google and Microsoft are going to jump on this. Microsoft is a prime candidate for making this thing work, as long as they can provide something as a service, which is going to have the price point that the small-to-medium-sized businesses (SMBs) are going to accept, because they are the early adopters.

Browser-based client

Rowley: When we talk about the client, we're mostly thinking about the web-browser based client as opposed to the client as an entire virtualized OS. When you're using a business process management system (BPMS) and you involve people, at some point somebody is going to need to pull work off of a work list and work on it and then eventually complete it and go and get the next piece of work.

That’s done in a web-based environment, which isn’t particularly unusual. It's a fairly rich environment, which is where a lot of applications are headed: web-based applications are moving to a rich Internet application (RIA) style.

We have tried to take it even a step further, taking advantage of the fact that, by moving to some of these RIA infrastructures, you can run not just some of the presentation tier of an application on the client, but the entire presentation tier in the web browser. Instead of traditional HTML, its communication with the server uses more of a web-service approach, going directly into the services tier on the server. That server can be in a private cloud or, potentially, a public cloud.

You go directly from your browser client into the services tier on the server, and it just decreases the overall complexity of the entire system.

What's interesting is that by not having to install anything on the client, as with any of these discussions we are talking about, that's an advantage, but also on the server, not having to have a different presentation tier that's separate from your services tier.

That's possible, because we base it on Ajax, with JavaScript that uses a library that's becoming a de-facto standard called jQuery. jQuery has the power to communicate with the server and then do all of the presentation logic locally.
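A minimal sketch of that pattern follows; the endpoint and function names are hypothetical illustrations, not Active Endpoints' actual API. jQuery fetches plain JSON straight from the services tier, and all presentation logic runs locally in the browser.

```javascript
// Illustrative sketch only: /services/worklist and the function names
// are assumptions, not a real API.

function renderWorkList(tasks) {
  // Pure presentation logic, run locally in the browser: turn raw task
  // data into HTML without any server-side rendering.
  return tasks.map(function (t) {
    return '<li class="task">' + t.name + ' (' + t.status + ')</li>';
  }).join('');
}

function refreshWorkList() {
  // jQuery asks the services tier for data only; no separate
  // server-side presentation tier is involved.
  $.getJSON('/services/worklist', function (tasks) {
    $('#worklist').html(renderWorkList(tasks));
  });
}
```

Because the server returns only data and the HTML is assembled entirely on the client, the separate server-side presentation tier simply disappears.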

... I believe that Apple, growing dominant in the client space with both the iPhone and now the iPad, and its lack of support for either Silverlight or Flash, will be a push toward the standard space, the HTML5 using JavaScript, as the way of doing client-based rich Internet apps. There will be more of a coalescing around these technologies, so that potentially all of your apps can come through the one browser-based client.

Dortch: ... There are going to continue to be proprietary approaches to solving these problems. As the Buddhists like to say, many paths, one mountain. That's always going to be true. But, we've got to keep our eyes on the ultimate goal here, and that is, how do you deliver the most compelling services to the largest number of users with the most efficient use of your development resources?

Until the debate shifts more in that direction and stops being so, I want to call it, religious about bits and bytes and speeds and feeds, progress is going to be hampered. But, there's good news in HTML5, Android, Chrome, and those things. At the end of the day, there's going to be a lot of choices to be made.

The real choices to be made right now are centered on what path developers should take, so that, as the technologies evolve, they have to do as little ripping and replacing as possible. This is especially a challenge for larger companies running critical proprietary applications.

Morgenthal: I like to watch patterns. Look at where more applications have been created in the past three years than in any other way: on what platform and through what delivery mechanism. Have they been web apps, or have they been iPhone/Android apps?

You've got to admit that the web is a great vehicle for pure dynamic content. But, at the end of the day, when there is a static portion of at least the framework and the way that the information is presented, nothing beats that client that’s already there going out and getting a small subset of information, bringing it back, and displaying it.

I see us moving back to that model. The web is great for a fully connected high-bandwidth environment.

I've been following a lot about economics, especially U.S. economics, how the economy is going, and how it impacts everything. I had a great conversation with somebody who is in finance and investing, and we joked about how people are claiming they are getting evicted out of their homes. Their houses and homes are being foreclosed on. They can barely afford to eat. But, everybody in the family has an iPhone with a data plan.

Look what necessity has become, at least in the U.S., and I know it's probably similar in Korea, Japan, and parts of Europe. Your medium for delivery of content and information is that device in the palm that's got about a 300x200 display.

On the desktop, you have Adobe doing the same thing with AIR, which is cross-platform, and it's a lot more interactive than some of the web stuff. JavaScript is great, but at some point, you do get degradation in functionality. At some point, you have to deliver too much data to make that really effective. That all goes away, when you have a consistent user interface (UI) that is downloadable and updatable automatically.

I've got a Droid now. Every day I see that little icon in the corner: I've got updates for you. I've updated my Seesmic three times, and my USA Today. It tells me when to update. It automatically updates my client. It's a very neutral type of platform, and it works very, very well as the main source for me to deliver content.

Virtualization is on many fronts, but I think what we are seeing on the phone explosion is a very good point. I get most of my information through my phone.

Now, sometimes, is that medium too small to get something more? Yeah. So where do I go? I go to my secondary source, which is my laptop. I use my phone as my usual connectivity medium to get my Internet.

So, while we have tremendous broadband capability growing around the world, we're living in a wireless world and wireless is becoming the common denominator for a delivery vehicle. It's limiting and controlling what we can get down to the end user in the client format.

Getting deconstructed

Kobielus: In fact, it's the whole notion of a PC being the paradigm here that's getting deconstructed. It has been deconstructed up the yin-yang. If you look at what a PC is, and we often think about a desktop, it's actually simply a decomposition of services: rendering, interaction, connection and access, notifications, app execution, data processing, identity and authentication. These are all services that can and should be virtualized and abstracted to the cloud, private or public, because the clients themselves, the edges, are a losing battle, guys.

Try to pick winners here. This year, iPads are hot. Next year, it's something else. The year beyond, it's something else. What's going to happen -- and we already know it's happening -- is that everything is getting hybridized like crazy.

All these different client or edge approaches are just going to continue to blur into each other. The important thing is that the PC becomes your personal cloud. It's all of these services that are available to you. The common denominator here for you as a user is that somehow your identity is abstracted across all the disparate services that you have access to.

All of these services are aware that you are Dave Linthicum, coming in through your iPad, or you are Dave Linthicum, coming in through a standard laptop web browser, and so forth. Your identity and your content are all there and all secure. That, in a sense, brings process into it.

You don't normally think of a process as being a service that's specific to a client, but your hook into a process, any process, is your ability to log in. Then, have your credentials accepted and all of your privileges, permissions, and entitlements automatically provisioned to you.

Identity, in many ways, is the hook into this vast, personal cloud PC. That’s what’s happening.

Rowley: A lot of applications will really mix up the presentation of the work to be done by the people who are using the application, with the underlying business process that they are enabling.

If you can somehow tease those apart, so that the business process itself is represented using something like a business process model, and the work done by the person or people is divided into the specific tasks they are intended to do, you can have the task, at different times, be hosted by different kinds of clients.

Different rendering

Or, depending on the person, whether they're using a smartphone or a full PC, they might get a different rendering of the task, without changing the application from the perspective of the business person who is trying to understand what's going on. Where are we in this process? What has happened? What has yet to happen? Etc.

Then, for the rendering itself, it's really useful to have that be as dynamic as possible and not have it be based on downloading an application, whether it's an iPhone app or a PC app that needs to be updated, and you get a little sign that says you need to update this app or the other.

When you're using something like HTML5, you can get a lot of the functionality of some of these apps that you currently have to download, including, as somebody brought up before, the question of what happens when you aren't connected or are only partially connected.

Up until now, web-based apps very much needed to be connected in order to do anything. HTML5 is going to include some capabilities around much more functionality that's available, even when you're disconnected. That will take the technology of a web-based client to even more circumstances, where you would currently need to download one.
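As a rough sketch of that disconnected capability (the storage key, endpoint, and function names below are assumptions for illustration, not part of HTML5 itself), work completed offline can be queued in HTML5 localStorage and flushed once the browser reports it is back online:

```javascript
// Hypothetical task-completion handler for an offline-capable web client.

function queueOffline(stored, task) {
  // Pure helper: append a task to a JSON-encoded pending queue.
  var queue = stored ? JSON.parse(stored) : [];
  queue.push(task);
  return JSON.stringify(queue);
}

function completeTask(task) {
  if (navigator.onLine) {
    // Connected: post straight to the services tier.
    $.post('/services/tasks/complete', task);
  } else {
    // Disconnected: persist locally via the HTML5 localStorage API.
    localStorage.setItem('pendingTasks',
      queueOffline(localStorage.getItem('pendingTasks'), task));
  }
}

function watchForReconnect() {
  // Call once at startup: flush the queue when connectivity returns,
  // using the HTML5 online event.
  window.addEventListener('online', function () {
    var pending = JSON.parse(localStorage.getItem('pendingTasks') || '[]');
    pending.forEach(function (task) {
      $.post('/services/tasks/complete', task);
    });
    localStorage.removeItem('pendingTasks');
  });
}
```

The point is that the queueing logic lives entirely in the browser, so the web app keeps working through the kind of disconnected gaps that, until now, only an installed client could survive.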

It's a little bit of a change in thinking for some people to separate out those two concepts, the process from the UI for the individual task. But, once you do, you get a lot of value for it.

Jones: I can see that as part of it as well. When you're able to start taking abstraction of management and security from outside of those platforms and be able to treat that platform as a service, those things become much greater possibilities.

Percolate and cook

I believe one of the gentlemen earlier commented that a lot of it needs some time to percolate and cook, and that’s absolutely the case. But, I see that within the next 10 years, the platform itself becomes a service, in which you can possibly choose which one you want. It’s delivered down from the cloud to you at a basic level.

That’s what you operate on, and then all of those other services come layered in on top of that as well, whether that’s partially through a concoction of virtualization and different OS platforms, coupled with cloud-based profiles, data access, applications and those things. That’s really the future that we're going to see here in the next 15 years or so.

... For the near-term, as the client space begins to shake out over the next couple of years, the immediate benefits are first around deployment of at least the Windows platform: from a current state of either having an image done at Dell or, as is more often the case, deploying the OS only at a hardware refresh every three to four years, to a point where you can actually get a PC and just put it onto the network.

You take out all the complexity of what the deployment questions are and the installation that can cause so many different issues, combined with things like normalizing device driver models and those types of things, so that I can get that image and that computer out to the corporate standard very, very quickly, even if it's out in the middle of Timbuktu. That's one of the immediate benefits.

Plus, start looking at help desk and the whole concept of desktop visits. If Windows dies today, all of your agents and recovery and those types of things die with it. That means I've got to send back the PC or go through some lengthy process to try to talk the user through complicated procedures, and that's just an expensive proposition.

Still connect

You're able to take remote-control capabilities outside of Windows into something that's hardened at the PC level and say, okay, if Windows goes down, I can actually still connect to the PC as if I was local and remote connect to it and control it. It's like what the IP-based KVMs did for the data center. You don’t even have to walk into the data center now. Imagine that on a grand scale for client computing.

Couple in a VPN with that. Someone is at a Starbucks, 20 minutes before a presentation, with a simple driver update that went awry and they can't fix it. With one call to the help desk, they're able to remote to that PC through the firewalls and take care of that issue to get them up and working.

Those are the areas that are the lowest-hanging fruit, combined with amping up security in a completely new paradigm. Imagine an antivirus that works by looking inside of Windows, but doesn't operate in the same resource or collision domain, the execution environment where the virus is actually working, or trying to execute.

There is a whole level of security upgrades that you can do, where you catch the viruses in the space between the network and a compatible execution environment in Windows, quarantining them before they even get to an OS instance. All those areas have huge potential.

You've got to keep that rich user experience of the PC, yet change the architecture so that it becomes highly manageable, but also flexible as well.

Imagine a world, just cutting very quickly to the utility sense, where I've got my call center of 5,000 seats and I'm doing an interactive process, but I've got a second core dedicated to a headless virtual machine that's doing mutual-fund arbitrage apps or something like that in a grid, and feeding that back. You're having 5,000 PCs doing that for you now at a very low cost, as opposed to building whole data-center capacity to take care of it. Those are the kinds of futures where this type of technology can take you as well.

You may also be interested in:

Thursday, June 3, 2010

Panda Security upgrades cloud-based anti-malware service to include auto updates

As more computing functions continue to exploit cloud delivery models, security issues remain a key concern. But the cloud also continues to be the solution to its own problem.

Extending its cloud-based PC security and anti-malware services, Panda Security today moved to help further alleviate malware fears by expanding its free offering with a paid version that automates updates and upgrades to the service. [Disclosure: Panda Security is a sponsor of BriefingsDirect podcasts.]

Dubbed Panda Cloud Antivirus Pro, the new edition works to protect computer users online and offline by extending the protections in the free product launched last year. The Free Edition is still available and also offers enhanced functions, while the Pro Edition sells for $29.95 and adds automated updates, as well as support benefits and other features.

Minimal performance impact

Besides being a popular free cloud security service for home users (about 10 million consumers have downloaded the free version to date), Panda Antivirus pushes another attention-getting message: minimal impact on computing performance. That has helped bring the service into use among SOHO, SMB and even some enterprise users.

Panda Antivirus relies on a proprietary technology for automatically collecting and processing millions of malware samples in the cloud, rather than locally on the consumer’s PC. The technology and method, called Collective Intelligence, can swiftly ID and thwart malware as it appears anywhere on the Internet and then update the clients with the fix.
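The division of labor that Collective Intelligence implies can be sketched roughly as follows. This is an illustrative assumption, not Panda's actual implementation: the cache shape, endpoint, and function names are all hypothetical. The client keeps only a small table of known verdicts and asks the cloud about any sample it hasn't seen.

```javascript
// Hypothetical client-side lookup for a cloud-based antivirus:
// heavy analysis lives in the cloud, the client stays thin.

function verdictFromCache(cache, sampleHash) {
  // Pure helper: return a cached verdict ('clean' | 'malware') or null.
  return Object.prototype.hasOwnProperty.call(cache, sampleHash)
    ? cache[sampleHash]
    : null;
}

function checkSample(cache, sampleHash, done) {
  var cached = verdictFromCache(cache, sampleHash);
  if (cached !== null) {
    done(cached); // answered locally, no network round-trip
  } else {
    // Unknown sample: ask the cloud service and remember its answer.
    $.getJSON('/reputation/' + sampleHash, function (resp) {
      cache[sampleHash] = resp.verdict;
      done(resp.verdict);
    });
  }
}
```

Keeping only a verdict cache on the client, rather than a full signature database, is what allows the small memory footprint described below.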

Because the processing is largely done in cloud-based data centers, the client-side antivirus software uses a mere 15MB of RAM, compared with the 60MB of RAM traditional signature-based antivirus products typically use. It also puts far less workload on the processor(s).

Panda Security is pushing the speed superiority of its Collective Intelligence platform in protecting PCs against both known and unknown malware. The company points to recent third-party tests that compared leading antivirus programs. In those tests, Panda Cloud Antivirus outperformed the average zero-day detection score of competitors by 42.5 percent, said Panda.

New functions and features

The Free Edition of Panda Cloud Antivirus offers some advanced configurations that let users customize certain features, like behavioral blocking and analysis, to meet the requirements of their systems. The Free Edition now also includes a behavioral blocker that protects against new malware and targeted attacks, as well as self-protection of antivirus files and configurations that prevent targeted malware attacks from disabling the software.

The Pro Edition offers all that and more, including automatic upgrades and automatic vaccination of USB and hard drives to eliminate the possibility of transmitting infections while users are offline and/or physically mobile. The Pro Edition also offers dynamic behavioral analysis to add an additional layer of protection by analyzing running processes and blocking any malicious behavior.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post.
You may also be interested in:

Wednesday, June 2, 2010

WSO2 tailors open-source middleware platform for cloud-based applications, deployment models

WSO2 today announced the debut of WSO2 Stratos, an open-source middleware platform for cloud-based enterprise applications.

The on-demand or on-premises Stratos platform fosters building and deploying applications and services specifically for platform as a service (PaaS)-type deployments. Stratos goes beyond plain vanilla PaaS, however, by automating provisioning of enterprise servers, including the portal, enterprise service bus (ESB), and application servers. [Disclosure: WSO2 is a sponsor of BriefingsDirect podcasts.]

The announcement marks WSO2's entry into the emerging market for enterprise PaaS and enables hybrid computing models. Using Stratos, applications can be created or migrated on-premises, in a private cloud, or in the public cloud, for potentially unprecedented deployment flexibility.

I like to call this flexibility the fungibility of applications and services, meaning the apps can be moved across various cloud models, providers, and platforms with minimal rework and configuration headaches. We're a ways off from cloud application fungibility, but users should be demanding it, and therefore resisting cloud lock-in. Open source is an important part of cloud fungibility, but more standards are needed.

Through its integration layer, WSO2 Stratos installs onto any existing cloud infrastructure -- Eucalyptus, Ubuntu Enterprise Cloud, Amazon Elastic Compute Cloud (EC2), and VMware ESX, to name a few -- meaning enterprises can let the market work for them and resist being locked into a specific infrastructure provider or platform.

“At a time when IT developers can create a new application in one month, taking months to provision and deploy servers and systems no longer makes strategic or economic sense,” said Dr. Sanjiva Weerawarana, WSO2 founder and CEO. “WSO2 Stratos provides a complete middleware platform for delivering robust applications on private clouds, as well as migrating between and integrating with public clouds and on-premise systems—and there’s never cloud lock-in.”

Once Stratos is installed, users get a Web-based management portal where they can configure, manage and govern independent, but consistent, servers for each department, or for each stage of a system’s lifecycle. Each server is completely virtual, scaling up automatically to handle the required load, and metered and billed according to use.

At the heart of the WSO2 Stratos PaaS is a Cloud Manager, which manages all other services and offers a portal where users can log in and register their domain (tenant), manage their account, and configure the middleware services that are available for their users. The Cloud Manager offers point-and-click simplicity for configuring and provisioning middleware services, so developers can get started immediately and focus on the business logic.

WSO2 Carbon

Stratos is built on top of and extends WSO2 Carbon, the company's componentized middleware platform. WSO2 last month announced Carbon 3.0, which lets developers point-and-click to tailor middleware functionality into a customized solution.

The new WS-Discovery support automates the configuration of a project spanning multiple Web service endpoints. WSO2 Carbon 3.0 also features enhanced integration of the WSO2 Governance Registry across the platform, facilitating governance and monitoring across large clustered deployments and cloud implementations.

The Carbon 3.0 Component Manager features a checkbox user interface (UI) where IT professionals start with a lean core and can click to add the functionality they want to their WSO2 middleware products—choosing from among more than 150 features.

WS-Discovery support allows Carbon 3.0 to automatically discover nearby service endpoints, freeing IT professionals from much of the rewiring work typically required when deploying a new set of services or moving existing ones. This facilitates the ability to move deployments between different servers, private clouds, or public clouds.

Enhanced Governance Registry integration across the entire Carbon 3.0 middleware platform increases the ability to govern and monitor large-scale deployments, including clustered servers and cloud implementations.

Availability and support

WSO2 Stratos is available today as an early adopter release for private clouds, as a demonstration version on public clouds, and as an early release of the downloadable open-source software.

WSO2 also today launched the WSO2 Cloud Partnership Initiative around WSO2 Stratos. WSO2 is partnering with systems integrators (SIs) and infrastructure-as-a-service (IaaS) providers to streamline the development and deployment of applications and services for enterprise clouds.

WSO2 is providing a "fast-track path" for SIs to use WSO2 Stratos for cloud-enabling their customers’ existing applications and services, building and delivering new SaaS offerings, and creating new vertical PaaS/SaaS templates to support industry-specific applications and services, said WSO2. SIs that join the initiative gain complimentary training, including set-up of a pilot private cloud based on WSO2 Stratos; revenue sharing on initial WSO2 Stratos-based deployments; and a commission on recurring production support revenue.

SIs Cognizant Technology Solutions and WebScience, one of Italy’s leading providers of technology and consulting services, have already joined, said WSO2.

WSO2 is also establishing partnerships with leading IaaS providers including Amazon Web Services, Canonical/Ubuntu, and VMware.

WSO2 Carbon 3.0 middleware products are available as software downloads and as WSO2 Cloud Virtual Machines running on the Amazon Elastic Compute Cloud (EC2), Linux Kernel Virtual Machine (KVM), or VMware ESX. As fully open-source solutions released under the Apache License 2.0, WSO2 SOA middleware products do not carry any licensing fees.

In conjunction with WSO2 Stratos, WSO2 is launching its new CloudStart Program. The service, priced at $17,500, provides an engineering team onsite for a week to deploy WSO2 Stratos on either Ubuntu Enterprise Cloud or the customer’s existing cloud infrastructure. Working hand-in-hand with the customer development team, the WSO2 experts build a lightweight implementation or proof-of-concept. They then follow up the onsite engagement with offsite development support.

You may also be interested in:

What can businesses learn about predictive analytics from American Idol?

This guest post comes courtesy of Rick Kawamura, Director of Marketing at Kapow Technologies.

By Rick Kawamura

Social media data continues to grow at astronomical rates. Last year Twitter grew 1,444 percent with over 50 million tweets sent each day, and Facebook now has over 400 million active users. Every minute, 600 new blog posts are published, 34,000 tweets are sent, and 240,000 pieces of content are shared on Facebook.

The numbers are absolutely astounding. But is social media data credible? And can tangible business intelligence (BI) be extracted from it? [Disclosure: Kapow Technologies is a sponsor of BriefingsDirect podcasts.]

Reality Buzz, a new social media analysis project powered by web data services technology, was created to answer this very question by examining whether real-time analysis of social media conversations can predict the outcome of popular reality television shows like American Idol and Dancing with the Stars. Each week, Reality Buzz collected tens of thousands of tweets, comments, and discussions about contestants on both programs and applied sentiment analysis to the data. The results provided clear, data-driven insight into which contestants would be eliminated.

Stepping outside the example of "reality" TV, social media sentiment can be a powerful source of data that arms organizations with real-time intelligence to make more strategic business decisions. Based on experience with Reality Buzz, here are five tips for extracting real value from social media data:
Data trumps conventional wisdom

While Malcolm Gladwell, author of Blink: The Power of Thinking Without Thinking, might say otherwise, data-driven business decisions definitely outperform guesswork. Week after week on Dancing with the Stars, the infamous Kate Gosselin accounted for up to 40 percent of all conversations in social media. Unfortunately for Kate, 95 percent of those comments were negative.

Conventional wisdom said that she should pack her bags. Yet the data showed that, despite all the negative conversations, she still had a larger share of positive comments than several other contestants, meaning she was far less likely to be eliminated. Because viewers vote for the contestants they’d like to keep on the show, survival correlates strongly with positive sentiment. It wasn’t until the fourth week, when Kate’s volume of positive comments died down, that she was voted off.
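Kate's case shows why the useful signal is a contestant's share of all positive chatter, not raw conversation volume. As a minimal sketch of that prediction logic (the contestant names, sentiment labels, and counts below are illustrative, not Reality Buzz's actual pipeline):

```python
from collections import defaultdict

def most_at_risk(comments):
    """comments: iterable of (contestant, sentiment) pairs, where
    sentiment is 'positive', 'negative', or 'neutral' as labeled by
    an upstream sentiment-analysis step.

    Negative volume is ignored: viewers vote FOR contestants, so the
    contestant with the smallest share of all positive chatter is the
    one most likely to be eliminated."""
    positives = defaultdict(int)
    for contestant, sentiment in comments:
        if sentiment == "positive":
            positives[contestant] += 1
    total = sum(positives.values())  # assumes at least one positive mention
    shares = {c: n / total for c, n in positives.items()}
    return min(shares, key=shares.get), shares
```

Run weekly, this picks out the contestant whose positive share has dried up, which is exactly the pattern that finally caught up with Kate in week four.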

Product managers deal with this dilemma all the time. Tasked with determining the next set of product features to drive greater profitability, they have to manage the CEO’s gut feel while also satisfying the needs of those who have to sell it, both of whom want it better, cheaper and faster. But “better, cheaper, faster” isn’t a great long-term strategy. A great product manager would look to the data to find unmet needs and untapped markets, and social media is a great place to find these hidden nuggets of intelligence.

Timing is critical

Any data over 24 hours old is pretty much worthless for predicting who will be eliminated from a reality TV show. The same holds true in the business world, where it’s imperative for the data to be as close to an event as possible, as this data has the strongest effect on sentiment.

Weeks-old data may prove costly, resulting in more damage to the brand and revenue.

When launching a new product, for example, companies need to consider sentiment immediately prior to and after the launch. The same applies to a marketing campaign. Say Toyota releases a full page ad in The Wall Street Journal only to get a report on sentiment a few weeks later. Worthless. Companies need to know their customers’ sentiment just before they publish the ad to create the most relevant message, and immediately following to measure its resonance with their audience. Weeks-old data may prove costly, resulting in more damage to the brand and revenue by further demonstrating lack of understanding and responsiveness to frustrated customers.
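One simple way to encode that 24-hour shelf life is to decay each mention's weight by its age. This is a sketch, not Reality Buzz's actual method, and the 12-hour half-life is an assumed parameter:

```python
def recency_weight(age_hours, half_life_hours=12.0):
    """Exponential decay: a mention 12 hours old counts half as much
    as one posted now; anything past about 24 hours contributes
    comparatively little to the aggregate score."""
    return 0.5 ** (age_hours / half_life_hours)

def weighted_sentiment(mentions):
    """mentions: list of (score, age_hours), with score in [-1, 1].
    Returns a recency-weighted average, so fresh sentiment dominates."""
    num = sum(score * recency_weight(age) for score, age in mentions)
    den = sum(recency_weight(age) for _, age in mentions)
    return num / den if den else 0.0
```

With this weighting, a flood of positive comments right after an ad runs moves the score far more than scattered negativity from last month.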

Don’t be blind to the noise factor

It’s easy to understand trends, changes in momentum, volume of traffic, and the ratio of positive to negative sentiment. However, there is a lot of noise that can easily skew the data, especially with large, very public shows like American Idol. The bigger the show, product, or brand, the more noise. This is most prominent on Twitter, which very often represents the largest source and volume of data. Despite the noise, though, there is valuable information that shouldn’t be ignored. Interestingly, most of the noise resides in neutral sentiment, not positive or negative: comments, articles, and reviews about a brand that don’t express any real opinion.

This is why it’s important to understand how to filter the data to maintain its quality and relevance.
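Assuming an upstream scorer that assigns each mention a polarity in [-1, 1], one basic filter drops the near-neutral noise before any ratios are computed. The 0.2 threshold here is an illustrative choice, not an industry standard:

```python
def filter_noise(mentions, threshold=0.2):
    """Drop mentions that express no real opinion: anything whose
    polarity falls inside the neutral band around zero (bare retweets,
    links, factual articles) is treated as noise and excluded."""
    return [m for m in mentions if abs(m["polarity"]) >= threshold]

def positive_negative_ratio(mentions):
    """Compute the positive-to-negative ratio on filtered data only,
    so neutral chatter can't dilute the signal."""
    opinions = filter_noise(mentions)
    pos = sum(1 for m in opinions if m["polarity"] > 0)
    neg = sum(1 for m in opinions if m["polarity"] < 0)
    return pos / neg if neg else float("inf")
```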

Not all social media sentiment is created equal

Companies need to clearly define their goals before analyzing social media data. There are differing degrees of sentiment, and not all translate equally well. Most sentiment analysis tools begin by separating data into positive and negative groups. Yet even within each fan group there are varying degrees of support for contestants. In trying to determine the number of votes for a contestant, consider this data: “I just voted 100 times for Casey” vs. “My top 3 are Lee, Michael and Casey” vs. retweeting a link to a video or article which mentions Casey.

Companies also need to consider how to weigh one tweet versus a Facebook comment versus a blog post.

The reality is that not all data is needed or equal in weight. For American Idol, votes are cast for the person you want to keep on the show, so negative sentiment has little correlation to who will be voted off. This requires factoring out negative comments from total sentiment to get the most accurate prediction. Companies also need to consider how to weigh one tweet versus a Facebook comment versus a blog post. Each is just one piece of data, but does each one count equally?
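Putting those two ideas together, dropping negatives for a vote-to-keep show and weighting mentions by channel, a sketch might look like the following. The per-channel weights are purely hypothetical, precisely the judgment call the paragraph above says each company must make:

```python
# Hypothetical channel weights: a blog post represents more effort and
# reach than a single tweet, so it counts for more. These numbers are
# invented for illustration.
SOURCE_WEIGHTS = {"tweet": 1.0, "facebook": 2.0, "blog": 5.0}

def vote_signal(mentions):
    """Votes keep contestants on the show, so only positive mentions
    predict survival; negatives are factored out entirely, and each
    remaining mention is weighted by the channel it came from."""
    return sum(SOURCE_WEIGHTS.get(m["source"], 1.0)
               for m in mentions if m["polarity"] > 0)
```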

Don’t look at data in a vacuum

Having knowledge of events and circumstances is critical to understanding and extracting intelligence from social media data. In the case of Reality Buzz, it was helpful to watch the performance shows for added context. That context also helps companies form additional hypotheses to investigate once they’ve seen the output.

Similarly, some manual data review is also essential to ensure quality and consistency. For example, when using an automated sentiment analysis tool, companies can weigh keywords differently. In addition, automated tools cannot yet reliably distinguish functional, emotional, and behavioral sentiment. In monitoring social media data, for instance, there is a huge difference between “I like my new Canon camera” and “I just told my friend to buy the new Canon camera.” While both are positive sentiments, the latter should be weighed much more heavily.
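A crude way to approximate that behavioral-versus-emotional distinction is a cue-phrase multiplier. The cue list and the 3.0 weight below are invented for illustration and are not how any particular sentiment tool works:

```python
# Hypothetical cue phrases signaling behavior (a recommendation,
# purchase, or vote) rather than mere emotion.
BEHAVIORAL_CUES = ("told my friend", "recommended", "just bought", "voted")

def mention_weight(text):
    """Weigh behavioral statements several times more heavily than
    emotional ones, per the Canon camera example above."""
    lowered = text.lower()
    if any(cue in lowered for cue in BEHAVIORAL_CUES):
        return 3.0
    return 1.0
```

In practice this is where the manual review matters: the cue list has to be tuned per brand and per campaign, which no automated tool does out of the box.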
The growing mass of social media data is definitely a treasure trove of insight to extract intelligence, whether predicting reality show winners or moving your business forward. When done correctly, collecting and analyzing social media sentiment can be a pain-free, powerful tool for real-time feedback, predictive analytics and getting the competitive edge you need to win.

Rick Kawamura is Director of Marketing at Kapow Technologies, a leading provider of Web data services. Rick was most recently VP of Marketing at DeNA Global, and previously held strategic and product management roles at Palm and Sun Microsystems. He can be reached at
You may also be interested in:

Tuesday, June 1, 2010

Ariba, IBM deal shows emerging prominence of cloud ecosystem-based collaboration and commerce

The more you delve into how cloud computing can reshape business, the more clear becomes the importance of ecosystems.

No one cloud provider is likely to forecast and deliver all that any business needs or wants. More importantly, the role of the cloud provider is less about providing complete services than about enabling the ease and adaptability of acquiring, delivering and monetizing a variety of services in dynamic combination.

We're now seeing that the marketplace of cloud-hosted APIs is rich and exploding. But it's a self-service, organic market model that's emerging -- not a top-down ERP-like affair. And that is likely to make all the difference in terms of fast adoption.

Do providers like Apple, Google and Amazon produce the lion's share of services themselves -- or do they provide a fertile garden in which others create services and APIs that make the garden most valuable to all participants, inviting more guests, more development, more collaboration?

The organic model is also likely to repeat in ecosystems that allow buyers and sellers to align, and business processes between and among them to flourish. The business-to-business (B2B) commerce cloud is now being built. Recent acquisitions, like IBM's buy of Cast Iron and intent to buy Sterling Commerce, point up the "business garden" goals of Big Blue. Cast Iron allows the cultivation of hybrid clouds, clouds of clouds and rich services integration. Sterling brings EDI-based networks into the fold.

IBM clearly likes the idea of playing match-maker between traditional and new business models.

IBM clearly likes the idea of playing match-maker between traditional and new business models. And this cloud garden party effect aligns perfectly with IBM's tendency to avoid providing packaged business applications in favor of the platforms, middleware, process enablement and collaboration capabilities that support others' discrete applications.

Last week's announcement then of a cloud collaboration partnership between IBM and Ariba furthers the emerging prominence of cloud commerce ecosystems. To encourage more ecommerce, the IBM-Ariba deal matches B2B buyers and sellers via LotusLive collaboration and social networking services, all through cloud delivery models.

Conference capstone

The announcement came as a capstone to the Ariba Live 2010 conference in Orlando. [Disclosure: Ariba is a sponsor of BriefingsDirect podcasts.] I had fun at the conference spouting off on cloud benefits, and tweeting up some of the mainstage events under #AribaLive.

Ariba plans to integrate its Ariba Commerce Cloud with IBM LotusLive to help buyers and sellers communicate and share information more fluidly and effectively, leading to faster, more confident business decisions, the companies said. Ariba plans to integrate IBM’s LotusLive with Ariba Discovery, a web-based service that helps buyers and sellers find each other quickly and automatically helps match buyers’ requirements to seller capabilities.

Both Ariba and IBM are recognizing the power and huge opportunity of being at the center of cloud-based commerce. And being at the center means allowing the participants to do the actual driving, to enable the community to seek and find natural partners via social interactions. We're likely to see the equivalent of app stores and social networks well up for B2B commerce, scaling both down and up, in the coming months and years.

What's now good for consumer commerce is soon to be good for the business side of the equation. It's simply the most efficient.

“The successful combination of LotusLive and the Ariba Commerce Cloud will provide such a matchmaking comfort zone in which networks of partners, suppliers and customers can easily work together across company boundaries to help do their jobs more efficiently and cost-effectively, and perhaps even develop lasting relationships," said Sean Poulley, Vice President, IBM Cloud Collaboration, in a release.

As Ariba Chairman and CEO Bob Calderoni says, what's now good for consumer commerce is soon to be good for the business side of the equation. It's simply the most efficient.

After IBM set its sights on Sterling, I at first wondered if IBM and Ariba might find themselves competing. But last Wednesday's deal shows that ecosystems rule. One-in-all cloud provider aspirants should take note. The way to making the network most valuable is by empowering the business (both sellers and buyers) to carve out what they want to do themselves.

IBM Lotus collaboration services plus Ariba's cloud and commerce network services seem to be striving to reach the right balance between providing a fertile arena and then getting out of the gardeners' way.

You may also be interested in:

With eye on cloud standard, Apprenda offers free downloadable version of SaaS application server

Apprenda, a software as a service (SaaS) middleware provider, is now offering a free downloadable SaaS stack that provides much of the functionality of its flagship SaaSGrid, an application server for on-demand apps.

The Clifton Park, NY company says that SaaSGrid Express, announced today, provides free access to a foundation with which to deliver .NET applications as mature SaaS offerings with a competitive cost-of-delivery profile.

The model appears simple. Use community and ISV economic imperatives to drive a de facto standard into the market, thereby seeding a strong business to the upgrade-path commercial version. The timing is great, as finding a way to enjoy cloud computing scale and economics is top of mind for ISVs and early-adopter enterprises.

All of the main features and functions of SaaSGrid application server have been ported to the self-installable Express edition. Apprenda says their product has drastically reduced time-to-market and capital requirements for independent software vendors (ISVs), SMBs and enterprises, with some customers experiencing 50 to 70 percent reductions in planned engineering time and associated costs.

I like to think of this as allowing for SaaS "stack bursting," which could easily augment cloud-bursting efforts, seeing as users can run Apprenda SaaSGrid on Amazon EC2 as well as on on-premises virtualized workloads. The model might also work well in migration efforts, moving from legacy apps to web and ultimately cloud deployments.

According to Apprenda CEO Sinclair Schuller, some 93 percent of ISV-delivered applications have yet to make the transition to on-demand, SaaS delivery. That's a lot of apps.

In addition to the complex architectural foundation afforded by SaaSGrid Express -- such as low-effort multi-tenancy and resilient grid scalability -- the product also provides some of the other “out of the box” application services found in the full-fledged commercial SaaSGrid offering, including:
  • Metering
  • Monetization
  • Subscription Management
  • Application lifecycle management
  • Cloud control
  • Billing
  • Customer provisioning
This new edition enables developers to quickly build, deploy and onboard customers to their .NET applications, letting them build significant revenue streams without any license costs.

Fully licensed version

ISVs leveraging the full-production-licensed SaaSGrid edition pay a per-server, per-month license fee, and benefit from full customization and branding capabilities, as well as additional features. This licensing model, known as the SaaSGrid Monthly Server License, includes free access to maintenance and software upgrades and comprehensive customer service (based upon the number of licensed servers).

In April, Gartner analyst Yefim Natis profiled Apprenda as one of the "Cool Vendors" in the application platform as a service (APaaS) space. He said:
Apprenda's support of ISV (or private cloud application project) requirements for developing and running SaaS-style offerings far exceed the functionality available with the recently delivered Windows Azure SDK. Apprenda's advanced support for fine-grained and adjustable use tracking and billing is particularly valuable for ISVs.

Apprenda achieves this breadth of capabilities largely unintrusively, in part by intercepting and extending the Web and the database communications of the application and in part by modifying the compiled application intermediate language code (adding value and some overhead in the process).
However, Natis does offer one caution:
Apprenda's current business is primarily focused on ISVs. Historically, the business opportunities of middleware providers servicing the ISV market have been limited. With time, the company must develop a product offering that targets enterprise IT cloud application projects as well in order to expand its business opportunities.
If the ISVs' needs and enterprise app migration efforts alone jump-start adoption of SaaSGrid Express, it could make for a strong and clear path to clouds, from Amazon to Azure to the home-grown variety at an enterprise near you.

For more information on SaaSGrid Express and how to download it, visit the Apprenda web site:

You may also be interested in:

Monday, May 31, 2010

BMC Software rolls out cloud-focused lifecycle management solution

Cloud computing is more than just a buzzword, yet it’s not quite mainstream. With its just-released Cloud Lifecycle Management offering, BMC Software is one of the companies working to bridge that gap and deliver on the promise of the managed cloud.

Cloud Lifecycle Management aims to help IT admins deliver and integrate cloud computing strategies more efficiently. It’s an IT management platform that promises more control and visibility in the cloud. And it’s seeing traction among some of the biggest names in high-tech, including Cisco, Dell, Fujitsu, NetApp, EMA, Red Hat and Blackbaud. Altogether, Houston-based BMC has inked more than $100 million in cloud foundation technology deals so far, the company says.

Next-gen cloud services

BMC Cloud Lifecycle Management not only aims to help enterprises build and operate private clouds more efficiently, it also offers opportunities to leverage external public cloud resources and makes way for service providers to develop and deliver cloud services. With so many tech industry heavy-hitters as partners and customers, it’s worth a closer look.

Here’s what BMC Cloud Lifecycle Management includes: a policy-driven service catalog that personalizes the list of available service offerings and customizations based on a user's role, a self-service Web portal for requesting and controlling private and public cloud resources, and dynamic provisioning of the entire service stack across heterogeneous infrastructures. The platform also offers out-of-the-box cloud management workflows that automate the assignment of compliance policies and performance monitoring tools, along with pre-built IT service management (ITSM) integration for ITIL process interaction and compliance.

eWeek notes that BMC Software was an original partner of Cisco Systems in the Unified Computing System initiative in 2009. eWeek continues:
In the original UCS partnership scheme, BMC provided the provisioning, change management and configuration software in the stack. Cisco, of course, provided the networking and a new central server.

EMC and NetApp provided the storage capacity, VMware and Microsoft added their virtualization layers—depending upon the choice of the customer—and Accenture shaped the individual product deployments for customers.

Since then, UCS has added vBlocks, smaller modules of some of the aforementioned components, which can be integrated on a smaller scale and are not as daunting as a full-blown forklift overhaul to existing midrange and enterprise IT systems. vBlocks, too, include BMC middleware.

During all this iteration, BMC has been taking copious notes and has come up with its own new cloud-computing layer, BMC Cloud Lifecycle Management, which works with just about all data center-system-maker components, not just the UCS.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at and

You may also be interested in:

Apptio launches demand-based forecasting for IT budget and spend management

Apptio is betting big on the market for demand-based budget forecasting. A new feature in its technology business management solutions software suite aims to help business managers plan and budget more accurately by inputting departmental forecasts into its software.

The Bellevue, Wash., company is calling it a “closed-loop” approach to financial planning, cost management, and transparency. The promised result: tighter alignment with business priorities, improved cost efficiency, and transparent reporting on the cost and value of IT services.

Michel Feaster, vice president of products at Apptio, is convinced the company’s closed-loop financial planning process will “close the gap between IT and the business” by letting companies update budgets and forecasts based on real business priorities.

“Demand-based forecasting gives IT the data it needs to respond more effectively, and plan accordingly with minimal variance so they aren’t over- or under-committing resources,” Feaster said.

Budgeting and planning = painful and inaccurate

Indeed, Apptio’s latest feature intends to remedy a notoriously painful and inaccurate IT budgeting and planning process. It was General Electric CEO Jack Welch who once said, “The budgeting process at most companies has to be the most ineffective practice in management. It sucks the energy, time, fun, and big dreams out of an organization. It hides opportunity and stunts growth.”

The budgeting process . . . hides opportunity and stunts growth.

Apptio’s demand-based forecasting works on the premise that past performance is not an indicator of future trends. Many variables can change and those changes can make a ripple effect across the organization’s IT services needs. In essence, Apptio’s demand-based forecasting is applying best practices from the supply chain management world to IT budgeting and planning.
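The supply-chain analogy boils down to forecasting spend from forecast demand rather than from last year's actuals. As a sketch of the premise (the service names and unit costs are made up, and this is not Apptio's actual model):

```python
def demand_based_budget(unit_costs, demand_forecast):
    """Multiply each IT service's unit cost by the consumption the
    business units say they will need, instead of extrapolating
    historical spend forward."""
    return {service: unit_costs[service] * units
            for service, units in demand_forecast.items()}
```

When a department revises its headcount or project plans, re-running the forecast with the new demand inputs is what closes the loop between business priorities and the IT budget.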

Companies like Starbucks, Cisco, and Volkswagen are reporting savings with Apptio solutions to determine how changes in key business drivers affect IT services. In fact, Starbucks has seen $1.4 million in savings in nine months while Volkswagen reports a 50 percent reduction in annual budgeting costs through Apptio’s automation. Apptio believes the new demand-based forecasting will drive even stronger returns.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at and
You may also be interested in: