Sunday, March 29, 2009

HP advises strategic view of virtualization to dramatically cut IT costs, gain efficiency and usher in cloud benefits

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion. Access more HP resources on virtualization.

Virtualization has become imperative to enterprises and service providers as they seek to better manage IT resources, cut total costs, reduce energy use, and improve data center agility.

But virtualization is more than just installing hypervisors. Its effects cut across many aspects of IT operations, and the complexity of managing virtualized IT runtime environments can easily slip out of control.

A comprehensive level of planning and management, however, can assure a substantive economic return on virtualization investments. The proper goal then is to do virtualization right -- to be able to scale the use of virtualization in terms of numbers of instances elastically while automating management and reducing risks.

To gain the full economic benefits, IT managers also must extend virtualization from hardware to infrastructure, data, and application support -- all with security, control, visibility, and compliance baked in.

What's more, implementing virtualization at the strategic level with best practices ushers in the ability to leverage service oriented architecture (SOA), enjoy data center consolidation, and explore cloud computing benefits.

To learn more about how virtualization can be adopted rapidly with low risk using sufficient governance, I recently interviewed Bob Meyer, the worldwide virtualization lead in HP's Technology Solutions Group.

Here are some excerpts:
For the last couple of years, people have realized the value of virtualization in terms of how it can help consolidate servers, or how it can help do such things as backup and recovery faster. But, now with the economy taking a turn for the worse, anyone who was on the fence, who wasn’t sure, who didn’t have a lot of experience with it, is now rushing headlong into virtualization.

They realize that it touches so many areas of their IT budget, it just seems to be a logical thing to do in order for them to survive these economic times and come out a leaner, more efficient IT organization. ... It’s gone to virtualization everywhere, for everything -- "How much can I put in and how fast can I put it in." ... Everybody will have a mix of virtual and physical environments.

We're not just talking about virtualization of servers. We're talking about virtualizing your infrastructure -- servers, storage, network, and even clients on the desktop. People talk about going headlong into virtualization. It has the potential to change everything within IT and the way IT provides services.

Throughout the data center, virtualization is one of those key technologies that help you get to that next generation of the consolidated data center. If you just look at it from a consolidation standpoint, a couple of years ago people were happy to be consolidating five or six servers into one. When you get this right, doing it on the right hardware with the right services setup, a 32-to-1 consolidation rate is not uncommon.

Yet the business can be affected negatively if the virtualized infrastructure is managed incompletely or managed outside the norms you have set up as best practices. One of the blessings of virtualization is its speed. That's also a curse in this case. In traditional IT environments, if you made a change to a server -- if you moved it, moved it to a new network segment, or changed its storage -- you would put it through a change advisory board. There were procedures and processes that people followed, and approvals they received.

In virtualization, because it’s so easy to move things around and it can be done so quickly, the tendency is for people to say, "Okay, I'm going to ignore that best practice, that governance, and I am going to just do what I do best, which is move the server around quickly and move the storage around." That’s starting to cause all sorts of IT issues.

Initial virtualization projects often get handled without proper procedures. ... Just putting a hypervisor on a machine doesn't necessarily get you virtualization returns.

You have to start asking, "Do I have the right solutions in place from an infrastructure perspective, from a management perspective, and from a process perspective to accommodate both environments?"

The danger is having parallel management structures within IT [with a separate one for virtualized resources]. It does no one any good. If you look at it as a means to an end, which virtualization is, the end of all this is more agile and cost-effective services and more agile and cost-effective use of infrastructure.

Virtualization really does touch everything that you do, and that everything is not just from a hardware perspective. It not only touches the server itself or the links between the server, the storage, and the network, but it also touches the management infrastructure and the client infrastructure.

What we intend to do is take that hypervisor and make sure it's part of a well-managed infrastructure, a well-managed service, and well-managed desktops -- bringing virtualization into the IT ecosystem and making it part of your day-to-day management fabric.

The focus right now is, "How does it save me money?" But, the longer-term benefit, the added benefit, is that, at some point the economy will turn better, as it always does. That will allow you to expand your services and really look at some of the newer ways to offer services. We mentioned cloud computing before. It will be about coming out of this downturn more agile, more adaptable, and more optimized.

No matter where your services are going -- whether you're going to look at cloud computing or enacting SOA now or in the near future -- virtualization has that longer term benefit of saying, "It helps me now, but it really sets me up for success later."

We fundamentally believe, and CIOs have told us a number of times, that virtualization will set them up for long-term success. They believe it's one of those fundamental technologies that will set their companies apart as winners going into any economic upturn.
Read a full transcript of the discussion. Access more HP resources on virtualization.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: Hewlett-Packard.

Wednesday, March 25, 2009

Eclipse Swordfish OSGi ESB enters fray for SOA market acceptance, Sopera to add support

The Eclipse Foundation's news that the first release of the Swordfish enterprise service bus (ESB) will arrive in early April hasn't exactly set the blogosphere on fire. Reaction to the open-source ESB so far has ranged from ho-hum to mild skepticism.

There are, after all, several open source ESBs in play, from Mule to Apache ServiceMix and Synapse to PEtALS.

On the other hand, it could just be that the rest of the bloggers are working on finding just the right fishing metaphor to use for new ESBs, something that seems to be a requirement when writing about Swordfish.

How about, "there's a deep and wide ocean of opportunity for open source ESBs, and an ability to federate them might provide yet more fish to fry." Sorry.

Eclipse made the announcement Monday at EclipseCon 2009. Swordfish, which is described as a next-generation ESB, aims to provide the flexibility and extensibility needed for deploying a service-oriented architecture (SOA) strategy. Based on the OSGi standard, the new ESB builds upon such successful open-source projects as Eclipse Equinox and Apache ServiceMix.

Among the features highlighted in Swordfish are:
  • Support for distributed deployment, which results in more scalable and reliable application deployments by removing a central coordinating server.

  • A runtime service registry that allows services to be loosely coupled, making it easier to change and update different parts of a deployed application. The registry uses policies to match service consumers and service providers based on their capabilities and requirements.

  • An extensible monitoring framework to manage events that allow for detailed tracking of how messages are processed. These events can be stored for trend analysis and reporting, or integrated into a complex event processing (CEP) system.

  • A remote configuration agent that makes it possible to configure a large number of distributed servers from a central configuration repository without the need to touch individual installed instances.
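The runtime registry's policy matching can be pictured with a small sketch. This is purely illustrative Python, not the Swordfish API: providers advertise capabilities, consumers state requirements, and the registry pairs up whoever satisfies whom. All names here are invented for the example.

```python
# Hypothetical sketch of policy-based matching in a runtime service
# registry: a provider is matched to a consumer when its advertised
# capabilities cover the consumer's requirements.

class ServiceRegistry:
    def __init__(self):
        self.providers = []  # list of (name, capability set) pairs

    def register(self, name, capabilities):
        """A provider advertises the policies it can satisfy."""
        self.providers.append((name, set(capabilities)))

    def match(self, requirements):
        """Return providers whose capabilities cover the consumer's requirements."""
        required = set(requirements)
        return [name for name, caps in self.providers if required <= caps]

registry = ServiceRegistry()
registry.register("orders-v2", ["soap", "reliable-messaging", "encryption"])
registry.register("orders-v1", ["soap"])

# A consumer requiring encrypted, reliable transport matches only orders-v2.
print(registry.match(["encryption", "reliable-messaging"]))  # ['orders-v2']
```

The loose coupling comes from the fact that consumers never name a provider directly; swapping "orders-v1" for "orders-v2" requires no change on the consumer side.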
Austin Modine at The Register sees the move putting Eclipse up against some software powerhouses, and he takes a wait-and-see attitude:
Eclipse's jump into runtime puts the foundation into more direct competition with companies like Oracle, IBM and Microsoft, as well as a multitude of smaller providers. Eclipse already shook up the development tools market by offering a free and open source toolset — can Eclipse pull off the same with SOA?
Steve Craggs at Lustratus Research takes a glummer view:
So, will Swordfish make a successful strike at the ESB market? So far, open source ESB projects have not had a great deal of success, and as far as 2009 goes Lustratus has forecast that open source projects will suffer due to the lack of the necessary people resources to turn open source frameworks into a useful user implementation. However, Swordfish has the backing of the influential Eclipse organization, which has done a lot to standardize the look and feel of many software infrastructure tools.

Looking at the initial bites on Swordfish, the market needs to be baited a bit.

And, of course there's more to market acceptance than just the code drop. Also this week, German start-up and Deutsche Post AG spin-off Sopera GmbH announced plans to support Swordfish as part of a comprehensive SOA platform.

Sopera helped develop and refine Swordfish at Deutsche Post before helping to bring the project to fruition in Eclipse.

Using Eclipse Swordfish (the SOA Runtime Framework) and the SOA Tooling Platform (STP), Sopera now plans to deliver a new service registry/repository, integrate process orchestration engines, and provide integration between the OSGi components -- all to create a complete SOA solution.

As I said to Ricco Deutscher, Sopera's CTO, managing director and co-founder, when briefed: "In today's economic climate, there is definite opportunity for open source SOA. Plus, we see emerging requirements for modern middleware that includes SOA, and helps prepare for cloud-based applications."

There should be a significant degree of pull for strong SOA offerings built of open source components, but with the value-add of integration and associated support services. The market for the de facto on-premises cloud architecture and implementation is wide open. There's no reason that open source SOA implementations won't be a major portion of quite a few clouds.

Low-cost open source solutions -- coupled with the proper balance of completeness and flexibility -- may gain a surer foothold now, given the economy, than in the past. Deutscher says Sopera is seeking to attain and deliver the right balance at the right price.

The ambition is certainly there. Last month, Sopera joined forces with Microsoft and Open-Xchange under the Open Source Business Foundation (OSBF), a non-profit European open source business network, to announce a platform that leverages SOA for cloud computing.

This "Internet Service Bus (ISB)" will create a bridge between Java and .NET software applications and promote seamless interoperability. I'm all for that, as long as it's a fully bi-directional bridge.

The first release, Swordfish 0.8, will be available for download the first week of April from www.eclipse.org/swordfish/. Sopera will deliver solutions around it, then add SOA and cloud solutions over the next two years.

SaaS provider Workday extends HR business processes to iPhone, mobile tier

Managing your workforce processes and workflow is now no further away than your iPhone. Workday, the software-as-a-service (SaaS) human capital management provider, this week announced new mobile capabilities, with an iPhone application that allows users to search worker directories and complete basic management tasks.

The Pleasanton, Calif., company also announced plans for BlackBerry support later this year, with other mobile-tier support in the works. [Disclosure: Workday is a sponsor of BriefingsDirect podcasts.]

With the new mobile application, users can review, approve, deny, and revise tasks from the Workday workforce management applications, and view the status of ongoing business processes. They can also search for worker contact information and phone or email a colleague directly from the smartphone.

The streamlined mobile application includes password and role-based security, along with time-out and sign-out preferences aligned with the user's security requirements.
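As a rough illustration of those two controls, role-based authorization plus an inactivity time-out might look like the following sketch. This is hypothetical Python, not Workday's implementation; the action names, roles, and five-minute timeout are all assumptions for the example.

```python
# Illustrative sketch of mobile-app security: role-based checks plus a
# session that expires after a period of inactivity (the "time-out
# preference" mentioned above). Not a real Workday API.
import time

SESSION_TIMEOUT_SECONDS = 300  # assumed preference: 5 minutes idle

# Which roles may perform which management task (hypothetical).
ROLE_REQUIREMENTS = {
    "approve_expense": {"manager", "hr_admin"},
    "view_directory": {"manager", "hr_admin", "employee"},
}

class MobileSession:
    def __init__(self, user, roles):
        self.user = user
        self.roles = set(roles)
        self.last_activity = time.time()

    def is_expired(self):
        return (time.time() - self.last_activity) > SESSION_TIMEOUT_SECONDS

    def can(self, action):
        """Allow an action only on an unexpired session with a qualifying role."""
        allowed = ROLE_REQUIREMENTS.get(action, set())
        return not self.is_expired() and bool(self.roles & allowed)

session = MobileSession("jdoe", ["manager"])
print(session.can("approve_expense"))  # True: manager role, fresh session
```

Once the session idles past the timeout, every check fails and the user must sign in again, which is what keeps a lost phone from becoming an open door into HR workflows.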

Offering SaaS-based business applications both as full web apps and as mobile apps that interface with the underlying processes is the way of the future. Look at how many of us do email now ... we check urgent and time-sensitive communications on iPhones and the like, then handle more asynchronous and "fatter" processes on the PC.

For busy managers flooded with expense-report sign-offs and hiring and firing minutiae, mobile access can be a godsend. It prevents slowed-down processes and an overloaded email in-box. Making SaaS apps reach the mobile tier -- in an appropriate way that recognizes the requirements of "on the go" work -- will become a general business requirement, I expect.

The mobile solutions are built with a platform-independent core. Workday plans to continue updating its capabilities to allow all devices to take advantage of the solution, either through platform-specific client applications or mobile HTML.

The new application will be available from the iTunes App Store. Workday is hosting a webinar on the new mobile capabilities on March 25. Anyone interested can register at http://www.workday.com/update7. More information on the iPhone app is available at http://www.workday.com/mobile.

Sunday, March 22, 2009

BriefingsDirect Analysts list top 5 ways to cut enterprise IT costs during economic downturn

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Read a full transcript of the discussion.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Doing more for less in IT? Sure, easier said than done. But who said it couldn't be done?

We took the question of how to cut information technology (IT) costs in the downturn to five analysts and consultants, who can both say and do. The result is the latest BriefingsDirect Analyst Insights Edition, Vol. 38, a periodic discussion and dissection of IT infrastructure related news and events.

In this episode, recorded March 13, 2009, our analyst guests make their top five recommendations for cutting enterprise IT costs amid the economic downturn. How does IT adapt and adjust to the downturn? Is IT to play a defensive role in helping to slash costs and reduce its own financial burden on the enterprise?

Or, does IT help most on the offensive, in transforming businesses, or playing a larger role in support of business goals, with the larger IT budget and responsibility to go along with that? Can IT lead the way on how companies remake themselves and reinvent themselves during and after such an economic tumult?

Or is IT good both for economic offense and defense, and therefore the indispensable business function?

We ask our panel to list the top five ways that IT can help reduce costs, while retaining full business -- or perhaps even additional business -- functionality.

Please join noted IT industry analysts and experts Joe McKendrick, independent IT analyst and prolific blogger; Brad Shimmin, principal analyst at Current Analysis; JP Morgenthal, independent analyst and IT consultant; and Dave Kelly, founder and president of Upside Research. Our discussion is hosted and moderated by me, BriefingsDirect's Dana Gardner. I also offer my 5 cents on the topic.

So here are the lists ... (Read a full transcript of the discussion.)

McKendrick's Top Five Recommendations

1) SOA remains viable: Service oriented architecture (SOA) is alive, well and thriving. SOA solutions promote reuse and developer productivity. SOA also provides a way to avoid major upgrades, or helps with additional major initiatives in enterprise systems such as enterprise resource planning (ERP).

2) Virtualize all you can: Virtualization offers a method of application and infrastructure consolidation. You can take all those large server rooms -- and some companies have thousands of servers -- and consolidate into more centralized data centers. Virtualization paves the path to that.

3) Cloud computing: Cloud offers a way to tap into new sources of IT processing, applications, and IT data. Cloud allows IT to pay for such new capabilities incrementally, rather than having to make large capital investments.

4) Open source software: Look to open-source solutions. There are open-source solutions all the way up the IT stack, from the operating system to middleware to applications. Open source provides a way to, if not replace your more commercial proprietary systems, then at least to implement new initiatives and move to new initiatives that fly under the budget radar. You don't need budget approval to establish or begin new initiatives using OSS.

5) Enterprise 2.0: These tools and methods offer an incredible way to collaborate and to tap into the intellectual capital throughout your organization. Enterprise 2.0 offers a way to bring a lot of thinking and a lot of brainpower together to tackle problems.


Shimmin's Top Five Recommendations

1) User-generated IT: Give your users a really wide "pasture." There's an old saying that if you want to mend fewer fences, have a bigger field for your cattle. You can see that in IT with some experiments with BYOC (Bring Your Own Computer) -- programs that folks like Citrix and Microsoft have been engaging in. IT no longer manages the device, just the virtual image that resides on servers and "visits" the client machine. Mobile devices are also ... extending desktops and laptops. You need to have some faith in your users to manage their own environments and to take care of their own equipment, something they're more likely to do when it's their own property, and not the company's.

2) Don't build large software, buy small software: SOA is well-entrenched within enterprise IT and in clouds. You can buy either software as a service (SaaS) or on-premises software that is open enough to connect with and work with other software packages. No longer do you need to build an entire monolithic application from the ground up. An example is PayPal. This is a service, but there are on-premises renditions of this kind of idea that allow you to assemble a complete application without having to build the whole thing yourself. Using pre-built, smaller packages that are point solutions, like PayPal, lets you take advantage of their economies of scale and trade on the credibility they've developed, something that's especially good for consumer-facing applications.

3) Build inside but host outside: You shouldn't be afraid to build your own software, but you should be looking to host that software elsewhere. Enterprises, enterprise IT vendors, and independent software vendors (ISVs) ... are leaping toward putting their software platforms on top of third-party cloud providers like Amazon EC2. That is the biggest game-changer in everything we've been talking about here. There's a vendor ... that has been moving toward shutting down its data centers and moving to Amazon's EC2 environment. It went from multi-thousand-dollar bills every month to literally ... a couple of hundred bucks a month. That was a staggering savings ... because the economies are scaled through that shared environment.

4) Kill your email: Email has seen its day, and it really needs to go away. For every gigabyte you store, I think it's almost $500 per user per year, which is a lot of money. If you're able to, cut that back by encouraging people to use alternatives to email, such as social networking tools. We're talking about IM, chat, project group-sharing spaces, using tools like Yammer inside the enterprise; SharePoint obviously, Clearspace and Google applications. That stuff cuts down on email. ... Look at software or services like Microsoft Business Productivity Online Suite (BPOS). You can get Exchange Online now for something like $5 per user per month. That's pretty affordable. So, if you're going to use email, that's the way to go.

5) Turn off the printers: By employing software like wikis, blogs, and online collaboration tools from companies like Google and Zoho, you can get away from the notion of having to print everything. As we know, a typical organization kills 143 trees a year -- I think that was the number I heard -- which is a staggering amount of waste. There's a lot of cost to that.

5.5) Walk away from Microsoft Office: It's the big, fat cow that needs to be sacrificed. Paying $500-$800 a year per user for that stuff is quite a bit. The hardware cost is staggering as well, especially if you are upgrading everyone to Vista. If you leave everyone on Windows XP and adopt open-source solutions like OpenOffice and StarOffice, that will go a long, long way toward saving money. The reason I'm down on printing is that the time is gone when we had to have really professional, beautiful-looking documents that required a tremendous amount of formatting. Everything needed to be perfect within Microsoft Word, for example. What counts now is the information. It's the same for the 4,000-odd features in Excel. I'm not sure any of us here have ever explored even a tenth of those [features].
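The email cost figures Shimmin cites in point 4 imply a striking gap, and a quick back-of-the-envelope calculation makes it concrete. The headcount and mailbox size below are assumed inputs for illustration; the unit costs are his quoted estimates, not audited figures.

```python
# Rough cost comparison using the figures quoted above: ~$500 per GB of
# stored email per user per year in-house, vs. ~$5 per user per month
# for a hosted service like Exchange Online. Inputs are illustrative.

inhouse_per_gb_year = 500.0   # quoted in-house cost, $/GB/user/year
hosted_per_user_month = 5.0   # quoted hosted price, $/user/month

users = 1000                  # assumed headcount
avg_mailbox_gb = 2.0          # assumed average mailbox size

inhouse_annual = users * avg_mailbox_gb * inhouse_per_gb_year
hosted_annual = users * hosted_per_user_month * 12

print(f"In-house: ${inhouse_annual:,.0f}/yr, hosted: ${hosted_annual:,.0f}/yr")
# In-house: $1,000,000/yr, hosted: $60,000/yr
```

Even if the quoted in-house figure is off by a factor of five, the hosted option still wins by an order of magnitude at this mailbox size, which is the point of the recommendation.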


Morgenthal's Top Five Recommendations

1) Vendor management: Companies mismanage their vendor relationships. There is a lot of money in there, especially on the IT side -- for telecom, software, and hardware. Get control over your vendor relationships. Stop letting these vendors run around convincing end-users throughout your business that they should move in a particular direction or use a particular product. Force them to go through a set of gatekeepers, and manage the access and the information they're bringing into the business. Make sure that [buying decisions] go through an enterprise IT architecture group.

2) Outsourcing: With regard to outsourcing noncritical functions, I'll give you a great example where we combined an outsourced noncritical function with vendor management in a telecom company. Many companies have negotiated and managed their own Internet and telco communications facilities and capability. Today, there are so many more options for that. It's a very complex area to navigate, and you should either hire an expert consultant ... to help you negotiate. Or you should ... take on as much bandwidth as you need on average, and when you need excess bandwidth ... go to the cloud for that additional bandwidth.

3) Utilization analysis: Many organizations don't have a good grasp on how much of their CPU, network, and bandwidth is actually utilized. There's a lot of open capacity in that [poor] utilization, and it allows for compression. In compressing that utilization, you get back some overhead associated with that. That's a direct cost savings.

4) Data quality: I've been trying to tell corporations for years that this is coming. When things are good, they've been able to push off the poor data quality issue, because they can rectify the situation by throwing bodies at it. But now they can't afford those bodies anymore. So now they have bad data, and they don't have the bodies to fix up the data on the front end. ... Invest the money, set it aside, get the data quality up and operate more effectively without requiring extra labor on the front end to clean up the data.

5) Desktop alternatives: It's a great time to explore desktop alternatives, because Windows on the desktop has been a de facto standard. It's a great way to go -- when things are good. When you're trying to cut another half million, million, or two million dollars out of your budget, all those licenses and all that desktop support start to add up. They're small nickels and dimes that add up. By looking at desktop alternatives, you may be able to find some solutions. A significant part of your workforce doesn't need all that capability and power [on the desktop]. You can then look for different solutions, like lightweight Linux or Ubuntu-type environments that provide just Web browsing and email, and maybe OpenOffice for some lightweight word processing. For a portion of your user base, that's all they need.
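Morgenthal's third point, utilization analysis, boils down to simple arithmetic: measure how little of each server is actually used, then estimate how far the load compresses. The sample utilization figures and the 60 percent consolidation target below are illustrative assumptions, not measurements.

```python
# Minimal sketch of utilization analysis: given observed per-server CPU
# utilization, estimate how few hosts could carry the same load at a
# chosen target utilization. Numbers are made up for illustration.
import math

# Average CPU utilization (%) observed per physical server.
utilization = [8, 12, 5, 20, 15, 10, 7, 18, 9, 6]

target_utilization = 60  # desired per-host utilization after consolidation

total_demand = sum(utilization)                  # total demand in "server-%"
hosts_needed = math.ceil(total_demand / target_utilization)

print(f"{len(utilization)} servers averaging {total_demand / len(utilization):.0f}% "
      f"could compress onto ~{hosts_needed} hosts at {target_utilization}% each")
```

The "compression" he describes is exactly this ratio: ten lightly loaded boxes collapsing onto two well-utilized ones, with the retired capacity showing up as direct cost savings.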


Kelly's Top Five Recommendations

1) Optimize, optimize, optimize: All organizations, both on the business side and the IT side, are going to be doing more with less. ... That makes a great opportunity to step back and look at specific systems and business processes. At the high level, go through business process management (BPM)-type optimization and look at the business processes. Also look at things like data center optimization ... save money and defer capital investment. Increase utilization of storage systems. ... You have all this redundant data out there. There are products from Symantec and other vendors that allow you to "de-duplicate" email systems and existing data. There are ways to reduce your backup footprint. Do single-instance archiving and data compression. ... Just look at existing processes and say, "How can I do individual components more efficiently." Look at specific automated tasks and see how you can do more with less in those tasks.

2) Don't forget the people: The most effective way to have an efficient IT organization is to have effective people in that IT organization. As an IT manager, one thing you need to do is make sure that your people are empowered to feel good about where they're at. They should not hunker down and go into a siege mentality during these difficult times, even if the budgets are getting cut and there's less opportunity for new systems or technology. They need to redirect that stress to discover how the IT organization can benefit the business. You want to help motivate people through the crisis and work on a roadmap for better days ... . Provide a positive direction to use their energy and resources.

3) Re-evaluate commercial software use: You may have investments in Oracle, IBM, or other platforms, and there may be opportunities to use "free" products that are bundled in those platforms that you may not be using. Oracle, for example, bundles Application Express, a rapid application development (RAD) tool, as part of the database. I know of organizations that are using it to develop new applications. Instead of hiring consultants or staffing up, they're using existing people to use this free RAD tool to develop departmental applications or enterprise applications.

4) Go green: Now is a great time to look at energy sustainability programs and try to analyze them in the context of your IT organization. Going green not only helps the environment, but it has a big impact as you look at power usage in your data center, with its cooling and air-conditioning costs. You can save money right there in the IT budget and other budgets by going to virtualization and consolidating servers. Cutting any of those costs can prevent future capital expenditures, too. Look at how you're utilizing the different resources and how you can potentially cut your server and energy costs.

5) Go to lunch: It's good to escape stressful environments ... IT can take the business stakeholders out to lunch, and take a step back and reevaluate priorities. Clear the decks and re-align priorities to the new economic landscape. This may be a time to re-evaluate the priorities of IT projects, re-examine those projects, and determine which ones are most critical. You may be able to prioritize projects anew, slow some down, delay deployments or reduce service levels. The end effect here allows you to focus on the most business-critical operations, applications and services.
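Kelly's first recommendation mentions "de-duplicating" email systems and single-instance archiving. The core mechanism is to key stored content by a hash so identical copies are stored only once. A toy sketch follows; commercial products from Symantec and others are far more sophisticated, and the class and names here are invented for illustration.

```python
# Sketch of single-instance storage: each unique attachment or message
# body is stored once, keyed by a content hash; duplicates only add a
# reference, shrinking the backup footprint.
import hashlib

class SingleInstanceStore:
    def __init__(self):
        self.blobs = {}  # sha256 hex digest -> content

    def put(self, content: bytes) -> str:
        """Store content once; duplicates just return the existing key."""
        key = hashlib.sha256(content).hexdigest()
        self.blobs.setdefault(key, content)
        return key

store = SingleInstanceStore()
attachment = b"quarterly-report.pdf bytes..."
k1 = store.put(attachment)         # first copy is stored
k2 = store.put(attachment)         # the 500th copy costs nothing extra
print(k1 == k2, len(store.blobs))  # True 1
```

When the same 5 MB attachment goes to an entire department, this is the difference between storing it once and storing it hundreds of times, which is where the backup and archiving savings come from.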


Gardner's Top Five Recommendations

1) Harsh triage: Go in and kill the waste by selectively dumping the old that doesn't work. IT needs to identify the applications that aren't in vigorous use, or aren't adding value. They should either kill them outright or modernize them. Extract the business logic and use it in a process, but no longer at the cost of supporting the entire stack or server below each application. IT needs to identify the energy hogs and the maintenance black holes. Outdated hardware robs from the future in order to pay for a diminishing return on the past. Look for the low-hanging fruit and the obvious wasteful expenditures and practices. Reduce the number of development environments. Look at something like Eclipse, Microsoft, or OSGi and work toward more standardization around a handful of major development environments. Replace costly IT with outside services and alternatives for your email, calendar, word processing, and baseline productivity applications. Put an emphasis on self-help. Empower the users. That means more use of SaaS and on-demand applications. It's really about acting like a startup. You want to have low capital expenditures. You want to have low recurring costs. You want to be flexible.

2) Build a cloud computing skunkworks: Create a parallel IT function that leverages cloud attributes. Focus on the value of virtualization. That means looking to standardized hardware on-premises, and using grid, cloud, and modernized and consolidated data center utility best practices. More use of appliances, too, and looking at open-source software anew makes sense. This is another way of saying do SOA using cloud and compute fabric alternatives. Also look at outside offerings for where other people have created cloud environments that are very efficient for baseline functions that don't differentiate, and for new greenfield applications.

3) Reduce client costs: It's time to simplify and mobilize the client tier. You can use mobile devices, netbooks, and smart phones to do more activities, to connect to back-end data and application sets and Web applications. It's time to stop spending money on the fat client. Spend it more on the lean server, and get a higher return on that investment. That includes the use of virtual desktop infrastructure (VDI) and desktop-as-a-service (DaaS) types of activities. It means exploring Linux as an operating environment on the desktop, where that makes sense. Look at what the end users are actually doing with these clients. Find workers that can exist using only browsers, and give them either low-cost hardware or deliver that browser as a virtualized application through VDI on a thin client. Centralize more IT support, security, and governance at the data center. Reduce the number of data centers. Use acceleration, remote branch technologies, and virtual private networks (VPNs) to deliver web applications and VDI clients across wide area networks. Act like a modern startup ... build the company based on what your needs are now, not on what IT was doing 15 years ago.

4) BI everywhere: Mine the value of all the data you can. This includes business intelligence (BI) internal to IT, such as server and network equipment log files. Know what the world is doing around you, and what your supply chain is up to, too. It's time to join more types of data into your BI activities, not just your internal relational data. You might be able to actually rent data from a supplier, a partner or a third-party. Bring that third-party data in, do a join, do your analysis, and then walk away. Then, maybe do it again in six months. It's time to think about BI as leveraging IT to gain the analysis and insights, but looking in all directions -- internal, external, and within IT. Also use BI across extended enterprise processes. It's also good to start considering tapping social networks for their data, user graph data, and consumer preferences metadata, and using that for analysis as well. There are more and more people putting more and more information about themselves, their activities, and their preferences into these social networks.

5) Elevate IT to the board level: The IT executive should be at the highest level of the business decisions in terms of direction, strategy and execution. The best way for IT to help companies is to know what those companies are facing strategically as soon as they're facing it, and to bring IT-based solutions knowledge to the rest of the board ASAP. IT can be used much more strategically at the board level. That way IT can be used for transformation and problem-solving at the innovation and business-strategy levels, not as an afterthought, not just as a means to an end -- but actually factoring what the end can be and what can be accomplished. That is, again, acting more like a startup. If you talk to any startup company, they see IT as an important aspect of how they are going to create original new value, of how to get to market cheaply, and how to behave as an agile entity on an ongoing and continuous basis.

Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Webinar: Modernization pulls new value from legacy and client-server enterprise applications

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Nexaweb Technologies.

Read a full transcript of the discussion.

Welcome to a special BriefingsDirect presentation, a podcast created from a recent Nexaweb Technologies Webinar on application modernization. Learn how enterprises are gaining economic and productivity advantages from modernizing legacy and older client-server applications.

The logic, data, and integration patterns' value within these older applications can be effectively extracted and repurposed using a variety of tools and methods. That means the IT and business value from these assets can be reestablished as Web applications on highly efficient platforms, and out to mobile devices, too.

Examine here how a number of companies have attained new value from legacy and client-server applications, while making those assets more easily deployed as rich, agile Web applications and services. Those services can then also be better extended across modern and flexible business processes, as part of service oriented architectures (SOAs).

In the podcast, hear from Dana Gardner, principal analyst at Interarbor Solutions; David McFarlane, COO at Nexaweb, and Adam Markey, solution architect at Nexaweb.

Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Nexaweb Technologies.

Friday, March 20, 2009

If you’re an enterprise, developer or economist, IBM is not the right buyer for Sun

From the perspective of IT users, developer communities and global industry as a whole, IBM may be the worst place for beleaguered Sun Microsystems to land.

Sure, a merger as rumored is good -- but not urgently or obviously so -- for IBM. Big Blue gains a modest improvement in share of some servers, mostly Unix-based. It would actually gain just enough share of high-end servers to justly draw antitrust scrutiny nearly worldwide.

Yet these types of servers are not today's growth engines for IT vendors, they are the blunt trailing edge. Users have been dumping them in droves, with their sights set on far lower-cost alternatives and newer utility models of deployment and payment. IBM may want the next generation of data centers to be built of mainframes, but not too many others do.

In any event, server hardware is not a meaningful differentiator in today’s IT markets. Sun, if anyone, has proven that. For IBM to claim it as the rationale for the buyout is fishy. A lot of other analysts are holding their noses too. UPDATE: Good analysis from Redmonk's Stephen O'Grady.

The rumored IBM-Sun deal for $6.4 billion is incremental improvement for IBM on several fronts: open source software (low earnings), tape storage (modest albeit dependable revenue), Java (already mostly open), engineering talent (easier to get these days given Sun layoffs), new intellectual property (targeted by Sun, by design, at undercutting IBM's cash cows). In short, there are no obvious game changers or compelling synergies in IBM buying Sun other than setting the sun on Sun.

I initially thought the rumored deal, which drove up Sun's stock, JAVA, by nearly 80 percent on rumor day one, didn't make sense. But it does make sense. Unfortunately it only makes sense for IBM in a fairly ugly way. As Tom Foremski said, it smacks of a spoiler role.

If you were IBM, would you spend what may end up being $4 billion in actual cost to slow or stifle the deterioration of a $100 billion data center market, and, at the same time, take the means of accelerating the move to cloud computing off the table from your competitors? As Mister Rogers would say, "Sure, sure you would."

Most likely, though the denials are in the works, IBM will plunder and snuff, plunder and snuff its way across the Sun portfolio -- from large account to large account, developer community to developer community, employee project to project. The tidy market share and technology gems will be absorbed quietly, the rest canceled or allowed to wither on the vine.

Certain open source communities and projects that Sun has fostered will be cultivated, or not. IBM is the very best at knowing how to play the open source cards, and that does not mean playing them all.

Listen, this would be a VERY different acquisition than any IBM has done in recent memory. It’s really about taking a major competitor out when they are down. It’s bold and aggressive, and it’s ignoble. But these are hard times and many people are distracted.

The deal is not good for Sun and its customers (unless they already decided to move from being a Sun shop to an IBM shop), and may put in jeopardy the momentum of open source use up into middleware, SOA, databases, and cloud infrastructure. That’s because, even at the price of $6.4 billion (twice Sun's market value before the deal talk), IBM will gain far more from the deal over the long term by eradicating Sun than by joining Sun's vector.

This deal is all about control. Control of Java, of markets, developers, the cost of IT -- even the very pace of change across the industry. For much of its history IBM has had its hand on the tiller of the IT progression. It was a comfortable position, except for a historically exceptional past 17 years. It's time to get back in the saddle.

Clearly, Sun has little choice in the matter, other than to jockey for the best price and perhaps some near-term concessions for its employees. It's a freaking yard sale. Sun is being run by -- gasp -- investment bankers. Here's a rare bonus bonanza in an M&A desert, for sure.

But let's be clear, this is no merger of partners or equals. This is assimilation. It’s Borg-like, and resistance may be futile. It is important to know when you're being assimilated, however.

Scott McNealy, Sun’s chairman, former CEO and co-founder, famously called the 2001 proposed merger of HP and Compaq a collision between two "garbage trucks." Well, IBM’s proposed/rumored purchase of Sun is equivalent to a garbage truck being airlifted out of sight and over the horizon by a C-17 cargo transport plane. Just open the door and drive it in. The plane was probably designed on Sun hardware, too. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Sun’s fate has been shaky for a long time now. The reasons are fodder for Harvard case studies.

But what of the general good of enterprise IT departments, of communities of idealistic developers, or of open and robust competition in the new age of cloud computing? In the new age, incidentally, you may no longer need an army of consultants and a C-17 full of hardware and software at each and every enterprise. As Nick Carr correctly points out, this changes everything. That kind of change may not be what IBM has in mind.

It’s not easy resting with IBM in control of vast portions of the open source future, and of the legacy installed past. Linux and Apache Web servers might have made sense for IBM, but do open source cloud databases, middleware, SOA, and the next generations of on- and off-premises utility and virtualization fabric infrastructure?

IBM today is making the lion's share of its earnings from the software and services that run yesterday's data centers. Even the professional services around the newer cloud models (and subscription fees for actual, not low-utilization, use) do not make up for lost software license revenues. In many ways, cloud is more a threat than an opportunity to Big Blue. It ultimately means lower revenues, lower margins, less control, and feisty competitors that make money from ads and productivity, not sales and service.

Cloud models will take a long time to become common and mainstream, but any sense of inevitability must make IBM (and others) nervous. Controlling the pace of the change is essential.

The hastening shift to virtualization, application modernization, SaaS, mobile, cloud, and increased use of open source for legacy infrastructure could seriously disrupt the business models of IBM, HP, Cisco, Microsoft, Oracle and others. Moving from legacy-and-license to cloud-and-subscription (on OSS or commercial code) poses a huge risk to IBM, especially if it happens fast -- something this unexpected economic crisis could accelerate.

Enterprises could soon gain the equivalent of the powerful and efficient IT engines that run a Google or Amazon, either for themselves, or rented off the wire, or both. IBM probably won't have 60 percent of the cloud services market in five years the way it does the high-end Unix market (if it gets Sun). In fact, what has happened to Sun in terms of disruption may be a harbinger of what could happen to IBM during the next red-shift in the market.

Sun should have gotten to these compelling cloud values first, made a business of it before Amazon. Sun was on the way, had the vision, but ran out of time and out of gas.

Sun has let a lot of us down by letting it come to this. The private equity firms that control Sun now don't give a crap about open source, or innovation, clouds or whether the network is the computer, or my dog's pajamas are the computer. They need to get their money back ASAP.

As a result, they and Sun could well be handing over to IBM the very keys to being able to time the market to IBM's strategic needs above all else. All for $6.4 billion in cash, minus the profits from chopping off Sun's remaining limbs and keeping the ones that make a good Borg fit.

There should be a better outcome. Should the deal emerge, regulators should insist on what IBM itself called for more than 10 years ago. Something as important as Java and other critical open software specifications (OpenSolaris?) should be in the control and ownership of a neutral standards body, not in the control of the globally dominant legacy vendor.

It’s sort of like letting General Motors decide when to build the next generation of fuel efficient and alternative energy cars. And we know how that worked out.

IBM has the deep pockets now to buy strategic advantage during an economic crisis that helps it in coming years. It's during this coming period when the cloud vision begins to stick, when the madness of how enterprise IT has evolved in cost and complexity is shaken off for something much better, faster and cheaper.

And that’s what IT has always been about.

Wednesday, March 18, 2009

Greenplum aims to eliminate massive data load 'choke points' with Scatter/Gather technology

Greenplum has taken massively parallel processing (MPP) of data to the next level with the introduction this week of its "MPP Scatter/Gather Streaming" (SG Streaming) technology, which manages the flow of data into all nodes of the database, eliminating the traditional bottlenecks with massive data loading.

The San Mateo, Calif. company, which provides large-scale analytics and data warehousing, says SG Streaming has allowed customers to achieve production-loading speeds of over four terabytes per hour with negligible impacts on concurrent database operations. [Disclosure: Greenplum is a sponsor of BriefingsDirect podcasts.]

Under the "parallel everywhere" approach to loading, data flows from one or more source systems to every node of the database without any sequential choke points. This differs from traditional “bulk loading” technologies, used by most mainstream database and parallel-processing appliance vendors, which push data from a single source, often over a single or small number of parallel channels, resulting in fundamental bottlenecks and ever-increasing load times.

The new technology "scatters" data from all source systems across hundreds or thousands of parallel streams that simultaneously flow to all nodes of the database. Performance scales with the number of nodes, and the technology supports both large batch and continuous near-real-time loading patterns with negligible impact on concurrent database operations.

Data can be transformed and processed in-flight, utilizing all nodes of the database in parallel, for extremely high-performance extract-load-transform (ELT) and extract-transform-load-transform (ETLT) loading pipelines. Final 'gathering' and storage of data to disk takes place on all nodes simultaneously, with data automatically partitioned across nodes and optionally compressed.
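The scatter/gather idea described above can be illustrated with a deliberately tiny sketch. To be clear, this is not Greenplum's implementation (which is proprietary and network-distributed); it is a toy model of the concept, where rows are routed to parallel streams by a hypothetical hash partitioner so every node receives and stores its share simultaneously:

```python
from concurrent.futures import ThreadPoolExecutor
from collections import defaultdict

NODES = 4  # stand-in for database nodes

def partition(row):
    # Hypothetical hash partitioning: decides which node's stream gets the row
    return hash(row["key"]) % NODES

def load(rows):
    # Scatter: route each source row into one of NODES parallel streams,
    # rather than funneling everything through a single bulk-load channel
    streams = defaultdict(list)
    for row in rows:
        streams[partition(row)].append(row)

    stored = {}
    def gather(node):
        # Gather: each node persists its own partition, all nodes in parallel
        stored[node] = list(streams[node])
        return len(stored[node])

    with ThreadPoolExecutor(max_workers=NODES) as pool:
        counts = list(pool.map(gather, range(NODES)))
    return stored, counts

rows = [{"key": i, "val": i * i} for i in range(100)]
stored, counts = load(rows)
```

The property that matters is that no single channel carries all the data: each row lands on exactly one node, and storage work scales with the number of nodes.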

It was just six months ago that Greenplum publicly unveiled how it wrapped MapReduce approaches into the newest version of its data solution. That advance allowed users to combine SQL queries and MapReduce programs into unified tasks executed in parallel across thousands of cores.

Active Endpoints aims at greater process design and implementation productivity with ActiveVOS enhancements

Active Endpoints, maker of the ActiveVOS visual orchestration system, has kicked things up a notch with the recent release of ActiveVOS 6.1, which incorporates new features and functions designed to make developers more productive.

The latest offering from the Waltham, Mass. company provides what amounts to shrink-wrapped service-oriented architecture (SOA) and provides business process management (BPM) automation, while adhering to business process execution language (BPEL) standards. [Disclosure: Active Endpoints is a sponsor of BriefingsDirect podcasts.]

There's an Active Endpoints podcast on the solution, and a new white paper on SOA implications of the process efficiencies from Dave Linthicum. We also recently did an Analyst Insights podcast on recent BPEL4People work.

Following close on the heels of version 6.0, which debuted in September, and 6.0.2, which made its appearance in December, the newest ActiveVOS offering brings features aimed at smoothing the way for developers. For example, a new tool, the "participant's view," eliminates the need for developers to manually code complex programming constructs like BPEL partner links and BPEL partner link types that are needed to define how services are to be used in a BPM application.

Another major enhancement is "process rewind." At design time, no BPM application can anticipate all of the operational issues and error handling that will be required. Process rewind gives developers the ability to rewind a process to a specific activity and redo the work without having to invoke any of the built-in compensation logic. This allows certain steps of the process to be “redone” without impacting work already performed.

Among the other improvements:
  • Any-order development, which presents services details as graphical tables into which details can be entered at any time. This is in contrast to earlier systems in which developers needed to know the details in advance.

  • Automatic development, which eases the tasks for developers new to SOA-based BPM. Version 6.1 automatically understands “private” versus “public” web services description language (WSDL) files and creates the required WSDLs in both a standards-compliant mode and a human-understandable format.

  • Improved data handling, which allows developers to visually specify what data is needed in each activity and guides the developer through XPath and XQuery statement generation. The BPEL standard separates assignment of data to activities from the invocation of those activities. While the technical reasons for this are clear to experienced developers, for new developers this can be an impediment.
More information on new features and functions is available on the Active Endpoints "What's New" page on their Web site.

ActiveVOS is available as a perpetual license. In an internal development environment, the price is $5,000 per CPU socket. In a deployment environment, the price is $12,000 per CPU socket when the deployment environment licenses are ordered with a first-time purchase of internal development environment licenses. Annual support and maintenance is 20 percent of total license fees.

Panda Security strengthens SaaS-based PC virus protection solution for SMBs

As the whirlwind of economic pressures and heightened concerns for security push small and medium-sized businesses (SMBs) toward software-as-a-service (SaaS) solutions, Panda Security has delivered added functionality to the cause with Managed Office Protection (MOP) 5.03.

Panda, with North American operations in Glendale, CA, allows individual companies as well as value added resellers (VARs) to deploy and extend its hosted security services, which originally launched in May 2008. Panda says its solution can be more than 50 percent more efficient than traditional endpoint security software.

I expect that SMBs will be more likely to seek a full package of PC support services via third parties. Those third parties will want to deliver help desk, software management, patch management and -- now -- security as a full service, cloud-based offering.

By adding the Web-based Panda SaaS security benefits, branded under the third parties, the hassle and cost of managing each desktop on premises drops significantly. And it allows the SMBs to get closer to their goal of no IT department, or at least a majority of IT support gained as a service.

Enhancements to Panda's MOP include:
  • Optimized management of end devices through a new Web-based management console that allows administrators to resolve deployment challenges from one centralized dashboard on any computer with an Internet connection.

  • Increased reporting flexibility that allows administrators to select from an expanded set of security reports, including executive, activity and detection reports.

  • Easier software deployment, which allows IT managers to leverage automatic uninstallers along with unique MAC addresses, facilitating personalized security settings for each end-device.

  • Simplified computer management that allows offline handling of exported files.

  • Improved client network status control, which allows VARs providing security services to SMB clients to have remote access via the service provider administration console, where they can centrally manage any update on every device in the client network.
The appeal of using a SaaS solution for security is that for cash-strapped companies it eliminates high startup and capital equipment costs, as well as the necessity to hire and train personnel to run the application. Also, it facilitates updates and patches, allowing them to be deployed quickly and easily.

The channel and PC support third parties gain a more complete package of services, while letting their partner, in this case Panda, pick up the security and on-going threats response requirements.

Another benefit comes from today's highly mobile workforce. Administrators are increasingly concerned with managing laptops belonging to traveling employees. A SaaS-based device support solution allows administrators to monitor and configure anti-malware software no matter what the employee's location.

In a recent study, Panda Security compared its SaaS product to three different traditional security products. The study found that using a SaaS product could be more than 50 percent less expensive over a two-year period than using the traditional products, when you consider staffing costs, capital expenditures, and deployment costs.
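The study's underlying comparison is straightforward TCO arithmetic. The figures below are entirely invented for illustration -- they are not Panda's numbers -- but they show how subscription pricing can undercut on-premises costs once staffing, capital, and deployment are counted:

```python
# Hypothetical two-year TCO comparison: traditional endpoint security
# vs. a SaaS subscription. All dollar figures are made up for illustration.
YEARS = 2

# Traditional on-premises product
trad_capex = 20000            # servers and management consoles
trad_deploy = 8000            # rollout labor
trad_staff_per_year = 15000   # admin time, signature updates, patching
trad_total = trad_capex + trad_deploy + YEARS * trad_staff_per_year

# SaaS product: per-seat subscription, minimal deployment, little admin
seats = 200
saas_seat_per_year = 30
saas_deploy = 1000
saas_total = saas_deploy + YEARS * seats * saas_seat_per_year

savings = 1 - saas_total / trad_total
print(f"traditional: ${trad_total}, SaaS: ${saas_total}, savings: {savings:.0%}")
```

With these assumed inputs the SaaS option comes out well over 50 percent cheaper; the real answer obviously depends on seat counts and local staffing costs.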

Panda MOP is available immediately in licenses sold by the seat in one- to three-year subscription packages. More information is available from www.pandasecurity.com.

IBM buying Sun Microsystems makes no sense, it's a red herring

Someone has floated a trial balloon, through a leak to the Wall Street Journal, that IBM is in "talks" to buy Sun Microsystems for $6.5 billion. The only party that would leak this information is Sun itself, and it smacks of desperation in trying to thwart an unwanted acquisition, or to positively impact another deal that Sun is weak in.

If IBM wanted to buy Sun it would have done so years ago, at least on the merits of synergy and technology. If IBM wanted to buy Sun simply to trash the company, plunder the spoils and do it on the cheap -- the time for that was last fall.

So more likely, given that Sun has reportedly been shopping itself around (nice severance packages for the top brass, no doubt), is that Sun has been too successful at selling itself -- just to the wrong party, at too low a price. This may even be in the form of a chop-shop takeover. The only thing holding up a hostile takeover of Sun to sell for spare parts over the past six months was the credit crunch, and the fact that private equity firms have had some distractions.

By buying Sun IBM gains little other than some intellectual property and MySQL. IBM could have bought MySQL, or open-sourced DB2 or a subset of DB2, any time it wanted to go that route. IBM has basically already played its open source hand, which it did masterfully at just the right time. Sun, on the other hand, played (or forced) its open source hand poorly, and at the wrong time. What's the value to Sun for having "gone open source"? Zip. Owning Java is not a business model, or not enough of one to help Sun meaningfully.

So, does IBM need chip architectures from Sun? Nope, has their own. Access to markets from Sun's long-underperforming sales force? Nope. Unix? IBM has one. Linux? IBM was there first. Engineering skills? Nope. Storage technology? Nope. Head-start on cloud implementations? Nope. Java license access or synergy? Nope, too late. Sun's deep and wide professional services presence worldwide? Nope. Ha!

Let's see ... hardware, software, technology, sales, cloud, labor, market reach ... none makes sense for IBM to buy Sun -- at any price. IBM does just fine by continuing to watch the sun set on Sun. Same for Oracle, SAP, Microsoft, HP.

With due respect to Larry Dignan on ZDNet, none of his reasons add up in dollars and cents. No way. Sun has fallen too far over the years for these rationales to stand up.

Only in playing some offense via data center product consolidation against HP and Dell would buying Sun help IBM. And the math doesn't add up there, either. The cost of getting Sun is more than the benefit of taking enterprise accounts from others. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

The cost of Sun is not cheap, or at least not cheap like a free puppy. Taking over Sun for technology and market spoils ignores the long-term losses to be absorbed, the decimated workforce, the fact that Cisco will now eat Sun's lunch as have the other server makers for more than five years.

So who might buy Sun on the cheap, before Sun's next financial report to Wall Street? Cisco, Dell, EMC, Red Hat. That's about it for vendors. And it would be a big risk for them, unless the price tag were cheap, cheap, cheap. Anything under $4 billion might make sense. Might.

Other buyers could come in the form of carriers, cloud providers or other infrastructure service provider types. This is a stretch, because even cheap Sun would come with a lot of baggage for their needs. Another scenario is a multi-party deal, of breaking up Sun among several different kinds of firms. This also is hugely risky.

So my theory -- and it's just a guess -- is that today's trial balloon on an IBM deal is a last-ditch effort by Sun to find, solidify, or up the price on some other acquisition or exit strategy. The risk of such market shenanigans only underscores the depths of Sun's malaise. The management at Sun probably sees its valuation sinking yet again to below tangible assets and cash value when it releases its next quarterly performance results. ... Soon.

The economic crisis has come at a worse time for Sun than for just about any other large IT vendor. Sun, no matter what happens, will go for a fire-sale deal -- not a deal of strength among healthy, synergistic partners. No way.

Monday, March 16, 2009

Cisco seeks for data center what Apple created with iPhone -- a new market that stops the madness

Apple with the iPhone changed the game in mobile devices by pulling together previously disparate elements of architecture, convenience, and technology. Software and services were the keys to new levels of integration, better interfaces and a comprehensive user experience.

The result has led to a tectonic market shift that combines stunning customer adoption, whole new types of user productivity, a thriving third-party developer community -- and mobile and PC market boundaries that are swiftly blurring. Doing the advance work of pulling together elements of the full solution -- so that the users or channel players or consultants do not have to -- has worked well for Apple. It was bold, risky, and it worked.

Carriers could never pull off the iPhone integration value for users. Indeed, the way carriers go to market practically forbids it. It took an outsider and new entrant to the field to change the game, to remove the complexity and cost of integration -- and pass along both the savings and seductive leap in functionality to the buyers.

With today's announcement of the Cisco Unified Computing System -- along with a deep partnership with VMware on software and management -- Cisco Systems is attempting a similar solution-level value play as Apple with the iPhone. The solution may be at the other end of the IT spectrum -- but the potential leap in value, and therefore the disruption, may be as impactful.

We're seeing a whole new packaging of the modern data center in a way that may very well change the market. It's bold, and it's risky. Cisco -- as an entrant to the full data center solution field, but with a firm command of certain key elements (like the network) -- may be able to do what the incumbent data center providers -- along with the ecology of support armies -- have not. One-stop shopping for data centers has been only a goal, never fully realized. In fact, many enterprises probably don't want any one vendor to have such control, especially when standards are in short supply. But they need lower costs and lower complexity.

Cisco, therefore, is using the latest software and standards (to SOME degree at least) to integrate the major elements of "compute, network, storage access and virtualization into a cohesive system," according to Cisco. They go on to claim this leads to "IT as a service" when combined with VMware's upcoming vSphere generation of data center virtualization and management products. I'd like to see more open source software choices in the mix, too. Perhaps the market will demand this?

The concept remains appealing, though. Rather than have a systems integrator, or outsourcer, or major vendor, or your own IT department (or all of the above) cobble these complex data center elements together -- at monstrous initial and ongoing cost, ad infinitum -- the "integration is the data center" idea (as distinct from "the network is the computer") has a nice ring to it.

Cisco is proposing that the next-generation data center, then, is actually an appliance -- or a series of like appliances. Drop in, turn on, tune in and run your applications and services faster, better, cheaper. Works if it works. This may be too much for most seasoned IT professionals to stomach, but it's worth a try, I suppose.

And this will, of course, greatly appeal during a prolonged period of economic stress and uncertainty. Say hello to 2010. And the approach could be appealing to enterprises, carriers, hosting companies, and a variety of what are loosely called cloud providers. Indeed, the more common the data center architecture approaches across all of these players, the more likely for higher-order efficiencies and process-level integrations. Federating, sharing, tiering, cost-sharing -- all of these become more possible to the heightened productivity of the community of participants.

The cloud of clouds needs a common architecture to reach its potential. Remember Metcalfe's Law, which pegs a network's value to the number of participants on it? Well, replace "node" and "participant" with "data center," and the Law and the network gain entirely new levels of value if the interoperability is broad and deep.
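Metcalfe's Law says a network's value grows with the number of possible connections among its n participants -- n(n-1)/2, which grows roughly as n squared. Swapping "data center" for "participant," a tiny sketch shows why interoperability compounds:

```python
# Metcalfe's Law: value scales with the number of possible pairwise
# connections among n participants -- here, n interoperable data centers.
def metcalfe_value(n):
    return n * (n - 1) // 2

# Doubling the participants roughly quadruples the connection count
for n in (2, 10, 100):
    print(n, metcalfe_value(n))
```

Two interoperable data centers yield one useful connection; a hundred yield 4,950 -- which is the argument for a common architecture rather than islands.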

Make no mistake, the next generation data center business is a very large, multi-tens-of-billions of dollars market, and the competition is global, well-positioned, cash-secure and tough. Selling these data center appliances and "IT as a service" into individual accounts will be a huge challenge, especially if they are perceived as replacements alone. The Cisco solution needs to work well inside, alongside and inclusive of the other stuff, and the integrators have deep claws into the very accounts Cisco must enter.

We'll need to see the Cisco Unified Computing System act as a data center of data centers first. Its appeal, then, must be breathtaking to supplant the frisky incumbents, all of which also understand the importance of virtualization and low-cost hardware.

IBM, HP, Oracle, EMC, Microsoft, Sun, and the global SIs -- all will see any market game changing by Cisco as disruptive in perhaps the wrong way. But the enterprise IT market is ripe for major better ways of doing things, just like the buyers of iPhone have been for the last two years. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

UPDATE: HP has a response.

At the very least, Cisco's salvo will accelerate the shifts already under way in the next generation data center market toward highly-efficient on-premises clouds, complete and integrated applications support solutions, a deep adoption of virtualization -- and probably to a lot less total cost, real estate use, and energy demand as a result. The move by Cisco could also spur the embrace of open source software, along with standards, standards, standards. It's hard to see the economics working without them.

Already, Red Hat and Cisco announced a global OEM partnership. Cisco will sell and support Red Hat Enterprise Linux as part of its Unified Computing System, and will also support the newly announced Red Hat Enterprise Virtualization portfolio when it ships.

"Combined, Red Hat and Cisco will offer customers next-generation computing beyond RISC, beyond UNIX, beyond yesterday's legacy solutions for both virtualized and non-virtualized systems," says the statement.

Cisco and VMware are leaders in their areas, for sure, but they will need a community of global partners like Red Hat to pull this off. How about the larger open source universe? Unlike with Apple, it's a lot harder to create a data center support ecology than an app store. So the risks here are pretty huge. The enemy of my enemy is my friend effect may well kick in ... or not.

Or even more weirdness may ensue. What if Microsoft wanted in in a big way, given where it needs to go? What if Windows became the default virtualized container in Cisco's shiny new data center appliance? Disruption can be, well, disruptive.

Cisco has been seeking a way for many years now to extend its networking successes into new businesses. It has bought, built, and partnered -- but not to great effect in the past. Could this be the big one? The one that works? Is this the new $20 billion business that Cisco so desperately needs?

Sunday, March 15, 2009

Forrester Research: SaaS gains enterprise adoption, expands beyond 'vanilla' offerings

Software as a service (SaaS) is coming into its own, as interest and adoption continue to grow among enterprises and SaaS itself expands to meet the challenge.

This is the conclusion of a Forrester Research report, TechRadar For Sourcing & Vendor Management Professionals: Software as a Service. After talking to customers, vendors, and researchers, Forrester discovered that about 21 percent of enterprises were piloting or already using SaaS and another 26 percent are interested in it or considering it.

I expect this growth of SaaS use to accelerate under the dour economy as companies look to increase application productivity without up-front capital spending, and as they shut off expensive standalone applications running on older hardware. SaaS has an economic appeal well suited to the challenges facing IT managers.

At the same time, says Forrester, companies are taking a more strategic approach to SaaS, which until now often flew in under the radar. That means IT didn't bring SaaS apps in, workers and managers did. Part of the strategic interest now comes from IT too -- to rein in system redundancies and costs.

Any responsible IT department should now conduct the audits and due diligence to determine which old and new applications would be best delivered as SaaS from third parties. The ability to absorb these apps well also puts the IT department in a better position to leverage cloud-based services and infrastructure fabrics.

SaaS's march into enterprises is tempered, however, by real or perceived security risks that come from using off-premises systems. This may account for the fact that the number of people not interested in using SaaS has increased over the past year. Do we have a culture gap on SaaS use? I advise enterprises to think like start-ups these days -- and that means use SaaS aggressively.

Another key finding of the March 13 report: SaaS offerings have proliferated and moved beyond their traditional "vanilla" customer relationship management (CRM) and human capital management functions.

Forrester identified 13 areas where SaaS applications are making headway.

One benefit of an increased use of SaaS, according to Forrester, is that it has changed the software game, giving business users ownership over the full application lifecycle. That upside is balanced by the downside of new risks, especially in the areas of contracts, data security, and data access privileges.

The bottom line for enterprises considering getting into the SaaS arena:
Sourcing and vendor management executives must keep ahead of the growing trend to understand where SaaS is most heavily used and where it lurks on the horizon, so that they can enable their business users to be more successful in business led SaaS deployments as well as to consider SaaS as a viable alternative to IT-led vendor evaluations. Regardless of where the SaaS deployment originates, sourcing and vendor management executives have a key role to play in contracts and pricing, due diligence, and vendor governance and risk.
The full Forrester report is available from http://www.forrester.com/go?docid=46747

None of this is surprising news to regular readers of BriefingsDirect or those who listen regularly to the podcasts. Our analysts and guests talk about the growing reliance on SaaS applications, especially in view of the economic decline. In fact, our year-end predictions for 2009 focused quite intensely on the role of SaaS in helping companies weather the storm -- and even chart a new course for the enterprise.

One of our regular analyst-guests and fellow ZDNet blogger Phil Wainewright charted out most of the 2008 developments over a year ago in his 2008 predictions. His predictions were based on what he saw as an awakening among users and vendors as to the potential of SaaS.

Jeff Kaplan in his Think IT Strategies blog made many of the same arguments in his 2009 predictions, in which he predicted that the thinking among IT executives was beginning to shift from whether to do SaaS to how to do it.

These bullish predictions and observations stand in stark contrast to a crepe-hanging piece last July in BusinessWeek, in which Gene Marks of the Marks Group declared SaaS overhyped, overpriced, and in need of debunking. The Marks Group sells customer relationship, service, and financial management tools to small and midsize businesses.

Nothing like a recession to focus the mind on practicality over ideology.

Thursday, March 12, 2009

BriefingsDirect analysts discuss solutions for bringing human interactions into business process workflows

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Charter Sponsor: Active Endpoints. Additional underwriting by TIBCO Software.

Read a full transcript of the discussion.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Welcome to the latest BriefingsDirect Analyst Insights Edition, Vol. 37, a periodic discussion and dissection of software, services, SOA and compute cloud-related news and events with a panel of IT analysts.

In this episode, recorded Feb. 13, 2009, our guests examine the essential topic of bringing human activity into alignment with standards-based IT supported business processes. We revisit the topic of BPEL4People, an OASIS specification.

The need to automate and extend complex processes is obvious. What's less obvious is the need to join the physical world of people, their habits, needs, and perceptions with the artificial world of service-oriented architecture (SOA) and business process management (BPM).

This interaction or junction will become all the more important as cloud-based services become more common.

Our discussion, moderated by me, includes noted IT industry analysts and experts Michael Rowley, director of technology and strategy at Active Endpoints; Jim Kobielus, senior analyst at Forrester Research; and JP Morgenthal, independent analyst and IT consultant.

Here are some excerpts:
Rowley: [With BPEL4People] you can automate the way people work with their computers and interact with other people by pulling tasks off of a worklist and then having a central system, the BPM engine, keep track of who should do the next thing, look at the results of what they have done, and based on the data, send things for approval.

It basically captures the business process, the actual functioning of a business, in software in a way that you can change over time. It's flexible, but you can also track things, and that kind of thing is basic.

... One of the hardest questions is what you standardize and how you divvy up the standards. One thing that has slowed down this whole vision of automating business process is the adoption of standards. ... The reason [BPM] isn't at that level of adoption yet is because the standards are new and just being developed. People have to be quite comfortable that, if they're going to invest in a technology that's running their organization, this is not just some proprietary technology.

The big insight behind BPEL4People is that there's a different standard for WS-Human Task. It's basically keeping track of the worklist aspect of a business process versus the control flow that you get in the BPEL4People side of the standard. So, there's BPEL4People as one standard and the WS-Human Task as another closely related standard.

By having this dichotomy you can have your worklist system completely standards based, but not necessarily tied to your workflow system or BPM engine. We've had customers actually use that. We've had at least one customer that's decided to implement their own human task worklist system, rather than using the one that comes out of the box, and know that what they have created is standards compliant.
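The separation Rowley describes can be sketched in a few lines. This is a hypothetical illustration, not ActiveVOS or WS-Human Task code: a process engine drives any worklist that honors a minimal task contract, so the worklist implementation can be swapped without touching the control flow. All class and method names are invented.

```python
class WorklistSystem:
    """Minimal task contract, standing in for the WS-Human Task role."""
    def __init__(self):
        self.tasks = {}
        self.next_id = 0

    def create_task(self, owner, description):
        # Register a new human task in the READY state.
        self.next_id += 1
        self.tasks[self.next_id] = {"owner": owner,
                                    "description": description,
                                    "state": "READY"}
        return self.next_id

    def complete_task(self, task_id, outcome):
        # A human finishes the task; the outcome flows back to the process.
        self.tasks[task_id]["state"] = "COMPLETED"
        self.tasks[task_id]["outcome"] = outcome
        return outcome


class ProcessEngine:
    """Stands in for the BPEL4People control-flow side. It talks to the
    worklist only through the contract above, so a compliant in-house,
    vendor, or SharePoint-style worklist could be substituted."""
    def __init__(self, worklist):
        self.worklist = worklist

    def run_approval_step(self, owner, description):
        task_id = self.worklist.create_task(owner, description)
        # A real engine would suspend here and wait for the human;
        # we complete the task inline to keep the sketch runnable.
        return self.worklist.complete_task(task_id, "approved")
```

The point of the dichotomy survives even in this toy: the engine never inspects the worklist's internals, only the agreed task interface.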

All of the companies involved -- Oracle, IBM, SAP, Microsoft, and TIBCO, as well as Active Endpoints -- seem to be very interested in this. One interesting one is Microsoft. They are also putting in some special effort here.

One value of a BPM engine is that you should be able to have a software system, where the overall control flow, what's happening, how the business is being run can be at the very least read by a nontechnical user. They can see that and say, "You know, we're going through too many steps here. We really can skip this step. When the amount of money being dealt with is less than $500, we should take this shortcut."

That's something that at least can be described by a lay person, and it should be conveyed with very little effort to a technical person who will get it or who will make the change to get it so that the shortcut happens.
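As a rough illustration of the kind of rule Rowley describes (not actual BPEL or product code, and every name here is invented), the under-$500 shortcut might be expressed plainly enough for a nontechnical reviewer to follow:

```python
APPROVAL_THRESHOLD = 500  # dollars; the business-set cutoff

def route_purchase(amount):
    """Return, in order, the steps this purchase request passes through."""
    steps = ["submit_request"]
    if amount >= APPROVAL_THRESHOLD:
        # Only larger purchases take the extra approval step;
        # smaller ones take the shortcut the lay reviewer asked for.
        steps.append("manager_approval")
    steps.append("issue_payment")
    return steps
```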

Kobielus: It's critically important that the leading BPM and workflow vendors get on board with this standard. ... This is critically important for SOA, where SOA applications for human workflows are at the very core of the application.

... BPEL4People, by providing an interoperability framework for worklisting capabilities of human workflow systems, offers the promise of allowing organizations to help users have a single view of all of their tasks and all the workflows in which they are participating. That will be a huge productivity gain for the average information worker, if that ever comes to pass.

... One thing that users are challenged with all the time in business is the fact that they are participating in so many workflows, so many business processes. They have to multi-task, and they have to have multiple worklists and to-do lists that they are checking all the time. It's just a bear to keep up with.

Morgenthal: Humans interact with humans, humans interact with machines, and data is changing everywhere. How do we keep everything on track, how do we keep everything coordinated, when you have a whole bunch of ad-hoc processes hitting this standardized process? That requires some unique features. It requires the ability to aggregate different content types together into a single place.

One key term that applies here industry-wide I have found only in the government. They call this "suspense tracking." That's a way of saying that something leaves the process and goes into "ad hoc land." We don't know what happens in there, but we control when it leaves and we control when it comes back.

I've actually extended this concept quite a bit and I am working on getting some papers and reports written around something I am terming "business activity coordination," which is a way to control what's in the black hole.

So, you have these ongoing ad hoc processes that occur in business every day and are difficult to automate. I've been analyzing solutions to this, and business activity coordination is that overlap, the Venn diagram, if you will, of process-centric and collaborative actions. For a human to contribute back, and for a machine to recognize that the dataset has changed, move forward, and take the appropriate actions from a process-centric standpoint after a collaborative activity has taken place, is possible today, but it is very difficult.
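Purely as a sketch of the suspense-tracking idea Morgenthal describes (nothing here comes from a real product or specification), a tracker that records only when work leaves the structured process and when it returns, without modeling the ad-hoc activity in between, might look like this:

```python
import datetime

class SuspenseTracker:
    """Tracks work items that have left the controlled process for
    'ad hoc land.' We record departure and return times only; what
    happens in between is deliberately out of scope."""
    def __init__(self):
        self.open_items = {}    # item_id -> departure timestamp
        self.closed_items = {}  # item_id -> (departed, returned)

    def depart(self, item_id):
        # Work item leaves the structured process.
        self.open_items[item_id] = datetime.datetime.now()

    def come_back(self, item_id):
        # Work item re-enters the controlled process.
        departed = self.open_items.pop(item_id)
        self.closed_items[item_id] = (departed, datetime.datetime.now())

    def in_suspense(self):
        # Everything currently unaccounted for in the black hole.
        return sorted(self.open_items)
```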

One thing I'm looking at is how SharePoint, more specifically Windows SharePoint Services, acts as a solid foundation that allows humans and machines to interact nicely. It comes with a core portal that allows humans to visualize and change the data, but the behavioral connections to actually notify workflows that it's time to go to the next step, based on those human activities, are really critical functions. I don't see them widely available through today's workflow and BPM tools. In fact, those tools fall short, because of their inability to recognize these datasets.

... I don't necessarily agree with the statement earlier that we need to have tight control of this. A lot of this can be managed by the users themselves, using common tools. ... Neither WS-Human Task nor BPEL4People addresses how I control what's happening inside the black hole.

Rowley: Actually, it does. WS-Human Task does address how you control what's in the black hole -- what happens to a task, and what kinds of things can happen to a task, while it's being handled by a user. One of the things about Microsoft's involvement in the standards committee is that they have been sharing a lot with us about SharePoint, and we have been discussing it. This is all public. The nice thing about OASIS is that everything we do is in public, along with the meeting notes.

The Microsoft people are giving us demonstrations of SharePoint, and we can envision as an industry, as a bunch of vendors, the possibility of interoperability with a BPEL4People business process engine like the ActiveVOS server. Maybe somebody doesn't want to use our worklist system and wants to use SharePoint, and some future version of SharePoint will have an implementation of WS-Human Task, or possibly somebody else will do an implementation of WS-Human Task.

Until you get the standard, that vision that JP mentioned about having somebody use SharePoint and having some BPM engine be able to coordinate it, isn't possible. We need these standards to accomplish that.

A workflow system or a business process is essentially an event-based system. Complex Event Processing (CEP) is real-time business intelligence. You put those two together and you discover that the events that are in your business process are inherently valuable events.

You need to be able to look across a wide variety of business processes, documents, and sources, and be able to compute averages, aggregations, and sums, and joins over these various things, to discover a situation where you need to automatically kick off new work. New work is a task or a business process.

What you don't want to have is for somebody to have to go in and monitor or discover by hand that something needs to be reacted to. If you have something like what we have with ActiveVOS, which is a CEP engine embedded with your BPM, then the events that are naturally business relevant, that are in your BPM, can be fed into your CEP, and then you can have intelligent reaction to everyday business.
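A toy version of that CEP-over-BPM loop, with invented names and no relation to the actual ActiveVOS internals, feeds business-process events into a rolling-window aggregator that starts new work automatically when an average crosses a limit, so nobody has to monitor or discover the condition by hand:

```python
from collections import deque

class EventAggregator:
    """Watches a rolling window of process-event values and invokes
    a callback (standing in for 'kick off new work in the BPM engine')
    when the window average exceeds a limit."""
    def __init__(self, window, limit, start_task):
        self.values = deque(maxlen=window)  # only the last N events
        self.limit = limit
        self.start_task = start_task        # callback into the engine

    def on_event(self, value):
        self.values.append(value)
        avg = sum(self.values) / len(self.values)
        if avg > self.limit:
            # Threshold crossed: automatically start new work.
            self.start_task(avg)

# Example: order amounts flowing off a business process.
started = []
agg = EventAggregator(window=3, limit=100, start_task=started.append)
for amount in [50, 80, 200]:
    agg.on_event(amount)
# The last window averages (50 + 80 + 200) / 3 = 110, which exceeds
# the limit, so one new task is kicked off.
```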

... Tying event processing to social networks makes sense, because what you need to have when you're in a social network is visibility, visibility into what's going on in the business and what's going on with other people. BPM is all about providing visibility. ... If humans are involved in discovering something, looking something up, or watching something, I think of it more as either monitoring or reporting, but that's just a terminology. Either way, events and visibility are really critical.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Charter Sponsor: Active Endpoints. Additional underwriting by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.