Wednesday, December 2, 2009

BriefingsDirect analysts unpack the psychology of project management via 'Pragmatic Enterprise 2.0' and SOA

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or  download a copy. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.

Welcome to the latest BriefingsDirect Analyst Insights Edition, Vol. 47. Our topic this week centers on how to define, track, and influence how people actually adapt to and adopt technology.

Any new information technology might be the best thing since sliced bread, but if people don’t understand the value or how to access it properly -- or if adoption is spotty, or held up by sub-groups, agendas, or politics -- then the value proposition is left in the dust. Perceptions count ... a lot.

A crucial element for avoiding and overcoming social and user dissonance with technology adoption is to know what you are up against, in detail. Yet data and inferences on how people really feel about technology are often missing, incomplete, or inaccurate.

In this discussion, we hear from two partners who are working to solve this issue pragmatically. First, with regard to Enterprise 2.0 technologies and approaches. And, if my hunch is right, it could very well apply to service-oriented architecture (SOA) adoption as well.

I suppose you could think of this as a pragmatic approach to developing business intelligence (BI) values for people’s perceptions and their ongoing habits as they adopt technology in a business context.

So please join Michael Krigsman, president and CEO of Asuret, as well as Dion Hinchcliffe, founder and chief technology officer at Hinchcliffe & Co., to explain how Pragmatic Enterprise 2.0 works. Our panel also includes Joe McKendrick, prolific blogger and analyst; Miko Matsumura, vice president and chief strategist at Software AG; Ron Schmelzer, managing partner at ZapThink; Tony Baer, senior analyst at Ovum; Sandy Rogers, independent industry analyst; and Jim Kobielus, senior analyst at Forrester Research.

This periodic discussion and dissection of IT infrastructure-related news and events, with a panel of industry analysts and guests, comes to you with the help of our charter sponsor, Active Endpoints, maker of the ActiveVOS visual orchestration system, and through the support of TIBCO Software.

And the discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts: 
Hinchcliffe: ... As many of you know, we've been spending a lot of time over the last few years talking about how things like Web 2.0 and social software are moving beyond just what’s happening in the consumer space, and are beginning to really impact the way that we run our businesses.

More and more organizations are using social software, whether this is consumer tools or specific enterprise-class tools, to change the way they work. At my organization, we've been working with large companies for a number of years trying to help them get there.

This is the classic technology problem. Technology improves exponentially, but we, as organizations and as people, improve incrementally. So, there is a growing gap between what the technology makes possible and what we are ready to do as organizations.

I've been helping organizations improve their businesses with things like Enterprise 2.0, which is social collaboration, using these tools, but with an enterprise twist. There are things like security, and other important business issues that are being addressed.

But, I never had a way of dealing with the whole picture. We find that folks need a deep introduction to what the implications are when you have globally visible, persistent collaboration using these very social models, and what those implications mean for the business.

Michael Krigsman, of course, is famous for his work in IT project risk -- what it takes for projects to succeed and what causes them not to succeed. I saw this as the last leg of the stool for a complete way of delivering these very new, very foreign models, yet highly relevant models, to the way that organizations run their business.

Businesses are about collaboration, team work, and people working together, but we have used things like email, and models that people trust a lot more than these new tools.

There is usually a lot of confusion and uncertainty about what’s really taking place and what the expectations are. Michael, with Asuret, brings something to the table: when we package it as a service that delivers these new capabilities, technologies, and approaches, it manages the uncertainty about what the expectations are and what people are doing.

Krigsman: Think about business transformation projects -- any type. This can be any major IT project, or any other type of business project as well. What goes wrong? If we are talking about IT, it's very tempting to say that the technology somehow screws up. If you have a major IT failure, a project is late, the project is over budget, or the project doesn’t meet expectations or plan, it's extremely easy to point the finger at the software vendor and say, "Well, the software screwed up."

If we look a little deeper, we often find the underlying drivers of why the project is not achieving its results. Those drivers tend to be things like mismatched expectations between different groups or organizations.

For example, the IT organization has a particular set of goals, objectives, restrictions, and so forth, through which they view the project. The line of business, on the other hand, has its own set of business objectives. Very often, even the language between these two groups is simply not the same.

As another example, we might say that the customer has a particular set of objectives and the systems integrator has its own objectives for the particular project. The customer wants to get it done as fast and as inexpensively as possible. The systems integrator is often -- and I shouldn’t make generalizations, but -- interested in maximizing their own revenue.

If we look inside each of these groups, we find that inside the groups you have divisions as well, and these are the expectation mismatches that Dion was referring to.

If we look at IT projects or any type of business transformation project, what’s the common denominator? It's the human element. The difficulty is how you measure, examine, and pull out of a project these expectations around the table. Different groups have different key performance indicators (KPIs), different measures of success, and so forth, which create these various expectations.

Amplifying weak signals

How do you pull that out, detect it inside the project, and then amplify what we might call these weak signals? The information is there. The information exists among the participants in the project. How do you then amplify that information, package it, and present it in a way that it can be shared among the various decision-makers, so that they have a more systematic set of inputs for making decisions consistently based on data, rather than anecdote? That’s the common thread.
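
To make the idea concrete, here is a minimal sketch of the kind of analysis such a service might run behind the scenes: collect per-group perception ratings, then flag the questions where the groups' averages diverge most. Everything here -- the group names, questions, and ratings -- is invented for illustration; Asuret's actual methods are not described in this piece.

```python
from statistics import mean, pstdev

# Hypothetical survey data: each group rates agreement (1-5) with
# statements about the project. All names and numbers are illustrative.
responses = {
    "IT":          {"on_schedule": [4, 4, 5], "scope_clear": [2, 3, 2]},
    "line_of_biz": {"on_schedule": [2, 1, 2], "scope_clear": [4, 4, 5]},
    "integrator":  {"on_schedule": [4, 5, 4], "scope_clear": [3, 3, 4]},
}

def expectation_gaps(responses):
    """Score each question by how far the group averages diverge --
    a large spread is exactly the kind of 'weak signal' to amplify."""
    questions = next(iter(responses.values())).keys()
    gaps = {}
    for q in questions:
        group_means = [mean(group[q]) for group in responses.values()]
        gaps[q] = pstdev(group_means)  # spread across groups, not people
    return gaps

gaps = expectation_gaps(responses)
# Surface the widest expectation mismatches first for decision-makers.
for question, spread in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{question}: divergence {spread:.2f}")
```

With this toy data, "on_schedule" surfaces first: IT and the integrator believe the project is on track while the line of business does not -- data, rather than anecdote, for the next steering meeting.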

... We're not selling software. We offer a service, and the service provides certain results. However, we've developed software, tools, methods, techniques, and processes that enable us to go through this process behind the scenes very efficiently and very rapidly.

Rogers: What we discovered in our studies is that one of the fundamental needs in running any type of business project -- an SOA project or an Enterprise 2.0 IT project -- is the ability to share information and expose that visibility to all parties at levels that will resonate with what matters to them the most, but also bring them outside of their own domain to understand where dependencies exist and how one individual or one system can impact another.

One of the keys, however, is understanding that the measurements and the information need to get past system-level elements. If you design your services around the business elements that are there and what matters to the business, then you can get past that IT-oriented view, bringing business stakeholders in and aligning management and business goals with what transpires in the project.

Any way that you can get that out -- a web-based, easy-access dashboard of information, measured regularly -- you can allow it to proliferate through the organization. Having that awareness can help build trust, and that’s critical for these projects.

Baer: What Dion and Michael are talking about is an excellent idea, in that, in any type of environment where there is a lack of communication and trust, data is essential to really steer things -- data, and also assurances around risk management and protection of IT. But the fact is that there are some real, clear hurdles.

An example is this project that my wife is working on at the moment. She was brought in as a consultant to a consulting firm that's working for the client, and each of them has very different interests. This is actually in a healthcare-related situation. They're trying to do some sort of compliance effort, and whoever was the fount of wisdom there postponed the most complex part of the problem to the very end. At the very end, they basically did a Hail Mary pass, bringing in a few more bodies.

They didn't look for domain expertise or anything. Essentially, it's like having eight women pregnant and expecting them to give birth to a baby in a month. That's the kind of push they're attempting.

On top of that, there is also a fear among each tribe of another coming up with a solution that makes the other tribes look bad. So, I can't share the exact feedback from this, but I do know that my wife came in as a process expert. She had a pretty clear view of how to untie the bottlenecks.

Krigsman: We gather a lot of data. The essential elements have been identified during this conversation. ... It's absolutely accurate to look at this tribally. Tony spoke about tribal divisions and the social tribal challenges.

The fundamental trick is how to convert this kind of trust information. Jim was talking about collaborative project governance. All of this relates to the fact that you've got various groups of people. They have their own issues, their own KPIs, and so forth. How do you surface issues that could impact trust and then convert them to a form that can be examined dispassionately? I'd love to use the word "objectively," but we all know that being objective is a goal and never an outcome that you can ultimately reach.

At least you have a way to systematically and consistently gather metrics that you can compare. And then ... when you want to have a fight, at least you are fighting about KPIs, and you don't have people sitting in a conference room saying, "Well, my group thinks this about the project," while somebody else says, "Well, my group thinks that." Let's have some common data, collected across the various information silos and groups, that we can then share and look at dispassionately.

Schmelzer: ... We think that the whole idea of project management is increasingly a fallacy in IT anyway. There is no such thing now as a truly discrete project.

Can you really say that some enterprise software that you may be buying, building yourself, or even sourcing as a service is completely disconnected from all the other projects or technology you have going on? The answer is that it's not.

So, it's very hard to do something like discrete project management, where you have a defined set of requirements, a defined timeline, and a defined budget, on the false premise that you're not going to impact any of the other concurrently running projects.

We think of this like a game of pick-up sticks. The enterprise is a collection of many different IT projects, some of which are ongoing, some of which may have been perceived to be dead or no longer in development, or maybe some are in the future. The idea that you could take any one of those little projects, and manipulate them without impacting the rest of the pile is clearly becoming false.

McKendrick: Michael and Dion, I think you're on the right track. In fact, it's all about organization. It's all about the way IT is organized within the company and, vice-versa, the way the company organizes the IT department. I’ll quote Mike Hammer, the consultant, not the detective, "Automate a mess and you get an automated mess." That's what's been happening with SOA.

Upper management either doesn't understand SOA or, if they do, it's bits and pieces -- do this, do that. They read Enterprise Magazine. Governance is haphazard: islands across the organization, tribal. Miko talks a lot about this tribal aspect. They have these silos and different interest groups in conflict.

There's a real issue with the way the whole process is managed. One thing I always say is that the organizations that seem to be getting SOA right, as Michael and Dion probably see with the Enterprise 2.0 world, are usually the companies that are pretty progressive. They have a pretty good management structure and they're able to push a lot of innovations through the organization.

Matsumura: ... This type of approach really reflects the evolution of the best practice of adoption. Some of the themes we've been talking about today -- sharing of information, communication, and collaboration -- really are essential for success.

I do want to caution just a little bit. People talk about complexity, and they create a linkage between complexity and failure. It's more important to look first at the source of the problem. Complexity itself is not necessarily indicative of a problem. Sure, it's correlated, but ice-cream consumption is correlated with the murder rate simply because, when temperatures get hot, both happen to increase. So complexity is also a measure of success and scale.
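
The confounder point can be shown in a few lines: a hidden variable can make two unrelated quantities strongly correlated. The data below is synthetic, chosen only to illustrate the statistics, not to model any real project.

```python
import random

random.seed(0)

# A hidden confounder (temperature) drives both variables; neither
# causes the other, yet they end up strongly correlated.
temps = [random.uniform(0, 35) for _ in range(500)]
ice_cream = [t * 2.0 + random.gauss(0, 5) for t in temps]
other = [t * 0.5 + random.gauss(0, 3) for t in temps]

def corr(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(corr(ice_cream, other))  # strongly positive, with no causal link
```

The same caution applies to complexity and project failure: the correlation may run through scale, which drives both.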

... The issue it comes down to for me is what Sandy said: the word "trust," thrown in at the very end, turns out to be extremely expensive. That alignment of organization and trust is actually a really important notion.

What happens with trust is that you can put things behind a service interface. Everything that's behind a service interface has suddenly gotten a lot less complex, because you're not looking at all that stuff. So, the reduction of complexity into manageability is completely dependent on this concept of trust and building it.

Kobielus: ... A dashboard is so important when you are driving a vehicle, and that's what a consolidated view of KPIs and metrics provides -- a dashboard in the BI sense. That's what this is: a project intelligence dashboard for very complex projects, or mega-programs of linked projects. In other words, SOA in all of its manifestations.

In an organization, you have to steer your enterprise in a different direction. You obviously need to bring together many projects and many teams across many business domains. They all need a common view of the company as a whole -- its operations, its various stakeholders, their needs, and the responsibilities of various people on various projects. That's highly complex. So, it's critical to have a dashboard that's not just a one-way conduit of metrics from the various projects and systems.

In the BI world, which I cover, most of the vendors are now racing to implement more collaboration, workflow, and social, community-style computing capabilities in their environments. It's not just critical to have everybody on the same page in terms of KPIs, but to have a sideband of communication and coordination to make sure the organization continues to manage collectively according to KPIs and objectives that they all ostensibly agree upon.

Hinchcliffe: ... The way the process works is that we come into a client with an end-to-end service. Most organizations -- and this is going to be true of Enterprise 2.0 or SOA -- are looking at solving a problem. There's some reason why they think that this is going to help, but they're often not sure.

We start with a strategy piece that looks at the opportunity, tries to identify it for them, and helps them build the business case to understand what the return on investment (ROI) is going to be. To do that, you really have to understand the needs of the organization. So, one of the first things we do is bring Michael's process in, and we try to get ground truth.

There are often a lot of unstated assumptions about how to apply technology to a business problem and what the outcome is going to be. Particularly with SOA, you have so many borders that are typically involved. It's the whole concept of Conway's Law: the architecture tends to mirror the structure of the organization, because those are the boundaries in which everything runs.

One of the ways that we can assure that we have ground truth is by applying this dispassionate measurement process upfront to understand what people's expectations are, what their needs are, and what their concerns are. It's much more than just a risk-management approach. It's a way to get strategic project intelligence in a way that hasn't been possible before. We're really excited about it.

A lot of uncertainty

My specialty has always been emerging technology. There is always a lot of uncertainty, because people don't necessarily know what it is. They don't know what to expect. They have to have a way of understanding it, and you face an array of issues, including the fact that people aren't normally willing to admit that they don't know things.

But, here is a way to safely and succinctly, on a regular basis, surface those issues and deal with them before they begin to cause problems in the project. We then continue through implementation, with regular assessments of the KPIs that can flag potential issues down the road. I think it's a valuable service. It's low impact compared to a traditional interview process, and it's something most organizations can afford to do on a regular basis.

Krigsman: ... I am so hesitant to use the term psychological, because it has so many connotations associated with it. But, the fact is that we spoke about perception earlier, and there has been a lot of discussion of trust and community and collaboration. All of these issues fundamentally relate to how people work together. These are the drivers of success, and especially the drivers of lack of success on projects of every kind.

It therefore follows that, if we want our projects to be governed well and to succeed, one way or another we have to touch and look at these issues. That’s precisely what we're doing with Asuret and it’s precisely the application that we have taken with Dion into Pragmatic Enterprise 2.0. You have to deal with these issues.

Upside case study report shows connections between BPM and security best practices

This guest post comes courtesy of David A. Kelly, principal analyst at Upside Research.

By David A. Kelly

Not only are today’s IT environments more complex than ever before, but the current economic climate is making it more difficult for IT organizations to easily and cost-effectively meet changing business requirements. What’s needed is a way for organizations to streamline business processes, increase efficiency, and empower business users -- rather than IT -- to be at the forefront of business-process change. In many cases, this is where a good business-process management (BPM) solution comes in.

As part of a project with Active Endpoints, Upside Research, Inc. recently interviewed a national government security organization that had a critical need to manage the security of files exchanged among users, screening out malware, malicious code, and viruses. [Disclosure: Active Endpoints is a sponsor of BriefingsDirect podcasts.]

While the organization had identified appropriate anti-virus and security software, it needed a solution that could automate and manage the actual process of shepherding unknown files through a battery of security screenings, reporting on results, managing the state, and raising exceptions when a file needed to be investigated further.

Specifically, the organization needed to find a way to automate file and information sharing securely across a wide range of mobile users and to streamline security compliance efforts and ensure consistency. After considering multiple commercial and open-source solutions, the organization selected ActiveVOS from Active Endpoints.

Both the prototype and final solution took only a month to complete. The production version was completed in December 2008 and rolled out in 2009. Now, when files are being transferred in and out of the organization's network, the file-inspection process fires off in the background and the ActiveVOS process management solution takes over.

Multiple business rules

The ActiveVOS BPM solution passes each file, as determined by multiple business rules, through the appropriate filters and, if required, sends it to people. Once the filtering is complete, the results are reported back to ActiveVOS, which then takes the appropriate action: sending an error message if the file fails, or an approval if it passes. When a file passes through all the necessary filters, it is authorized for transfer and stored permanently on the file-sharing system.

ActiveVOS uses Business Process Execution Language (BPEL) and web services interfaces to integrate seamlessly with multiple commercial antivirus, security, and anti-malware programs. Because the solution is standards-based, everything can be wrapped in a web service. The program then uses BPEL to route files to the necessary web services, as determined by business rules, and manages the security filtering process.
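
The report describes the orchestration pattern rather than the code, so here is a rough sketch of the idea in Python. The real solution uses BPEL processes calling web services; every filter name and rule below is invented purely for illustration.

```python
# Rules-driven screening sketch: each file passes through the filters
# its business rules select; a failure blocks it for review, and a
# clean pass authorizes it for storage. All names here are invented.

def antivirus_scan(name):
    return not name.endswith(".exe")   # stand-in for a real AV web service

def malware_check(name):
    return "macro" not in name         # stand-in for a real filter service

RULES = [
    # (predicate selecting the file, filters that rule requires)
    (lambda f: True,               [antivirus_scan]),
    (lambda f: f.endswith(".doc"), [malware_check]),
]

def screen(filename):
    """Route a file through every filter its rules require."""
    for applies, filters in RULES:
        if applies(filename):
            for check in filters:
                if not check(filename):
                    return ("blocked", check.__name__)  # raise for review
    return ("authorized", None)        # safe to store on the file share

print(screen("report.doc"))      # ('authorized', None)
print(screen("macro_notes.doc")) # ('blocked', 'malware_check')
```

The value of the pattern is that a policy change becomes a change to the rules table rather than to hand-written scripts, which matches the report's point about eliminating costly script writing.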

The resulting business benefits have already been significant, and the organization expects them to increase, as it expands the deployment footprint and use of the solution for automated news and information feeds.

Based on its interviews, Upside Research calculated the organization saw an 80 percent time reduction for changing business processing for each security policy update. The solution has also increased visibility to operators and security auditors, enabling them to track documents being transferred in and out of the agency networks in real time. The solution also reduced resolution time for blocked files by up to 60 percent and eliminated costly script writing, which has been replaced by automatically generated BPEL code.

Many companies considering process-automation solutions can learn from this government agency’s experience. Instead of going with an expensive, coding-heavy solution that would have taken more time to implement, and despite having in-house experts, the agency opted to try a new vendor and implement a solution that delivered flexibility and speed of implementation.

Too often, a company will continue to use a solution that may be comfortable, but is not optimal for a particular project. This is a good example of a company successfully breaking that habit.

The full report can be downloaded from the Active Endpoints web site.

This guest post comes courtesy of David A. Kelly, principal analyst at Upside Research.


Monday, November 30, 2009

The more Oracle says MySQL not worth much, the more its actions say otherwise

As the purgatory of Oracle's under-review bid to buy Sun Microsystems for $7.4 billion drags on, it's worth basking in the darn-near sublime predicament Oracle has woven for itself.

Oracle has uncharacteristically found itself maneuvered (by its own actions) into a rare hubristic place where it's:
  • Footing the bill for the publicity advancement of its quarry ... MySQL is more famous than ever, along with its low-cost and open attributes.
  • Watching the value of its larger quarry, Sun Microsystems, dwindle by the day as users flee the SPARC universe in search of greener (and leaner) binary pastures.
  • Giving open source middleware a boost in general, too, as Oracle seems to be saying that MySQL is worth hundreds of millions of dollars (dead or alive) -- the equivalent of what it's losing by not spinning MySQL out of the total Sun package.
  • Both denigrating and revering the fine attributes of the awesome MySQL code and community, leaving the other database makers happy to let Oracle pay for and do their dirty work of keeping MySQL under control.
This last point takes the cake. IBM, Microsoft and Sybase really don't want MySQL to take over the world, err ... Web, any time soon, either. But they also want to coddle the developers who may begin with MySQL and then hand off to the IT operators who may be inclined, err ... seduced, to specify a commercial RDB ... theirs ... for the life of the app.

So it's a delicate dance to profess love for MySQL while setting the snare to eventually tie those new apps to the costly RDBs and associated Java middleware (and hardware, if you can). Let's not also forget the budding lust for all things appliance by certain larger vendors (Oracle included).

If Oracle, by its admission to the EU antitrust mandarins, thinks MySQL has little market value and is not a direct competitor to its heavy-duty Oracle RDB arsenal, then why doesn't it just drop MySQL by vowing to spin it out or sell it? Then the Sun deal would get the big rubber stamp.

It's not because of what MySQL is worth now, but what it may become. Oracle wants to prune the potential of MySQL while not seeming to do anything of the sort.

The irony is that Oracle has advanced MySQL, lost money in the process, and helped its competitors -- all at the same time. When Oracle buys Sun and controls MySQL, the gift (other than to Microsoft SQL Server) keeps on giving, as the existential threat to RDBs is managed by Redwood Shores.

And we thought Larry Ellison wasn't overly charitable.

Wednesday, November 18, 2009

IBM feels cozy on sidelines as Oracle-Sun deal languishes in anti-trust purgatory

You have to know when to hold them, and when to fold them. That's the more than slightly smug assessment by IBM executives as they reflect -- with twinkles in their eyes -- on the months-stalled Oracle acquisition of Sun Microsystems, a deal that IBM initially sought but then declined earlier this year.

Chatting over drinks at the end of day one of the Software Analyst Connect 2009 conference in Stamford, Conn., IBM Senior Vice President and IBM Software Group Executive Steve Mills told me last night he thinks the Oracle-Sun deal will go through, but it won't necessarily be worth $9.50 a share to Oracle when it does.

"He (Oracle Chairman Larry Ellison) didn't understand the hardware business. It's a very different business from software," said Mills.

Mills seemed very much at ease with IBM's late-date jilt of Sun (Sun was apparently playing hard to get, holding out for more than $9.40 a share from Big Blue's coffers). IBM's stock price these days is homing in on $130, quite a nice turn of events given the global economy.

Sun is trading at $8.70, a significant discount to Oracle's $9.50 bid, reflecting investor worries about the fate of the deal now under scrutiny by European regulators, Mills's views notwithstanding.

IBM Software Group Vice President of Emerging Technology Rod Smith noted the irony -- perhaps ancient Greek tragedy-caliber irony -- that a low-market-share open source product is holding up the biggest commercial transaction in Sun's history. "That open source stuff is tricky on who actually makes money and how much," Smith chorused.

Should Mills's prediction that Oracle successfully maintains its bid for Sun prove incorrect, it could mean bankruptcy for Sun. And that may mean many of Sun's considerable intellectual property assets would go at fire-sale prices to ... perhaps a few piecemeal bidders, including IBM. Smith just smiled, easily shrugging off the chill (socks intact) from the towering "IBM" logo ice sculpture a few steps away.

And wouldn't this holdup go away if Sun and/or Oracle jettisoned MySQL? Is it pride or hubris that makes a deal sour over one mere grape? Was the deal (and $7.4 billion) all about MySQL? Hardly.

Many observers think that Sun's Java technology -- and not its MySQL open source database franchise -- should be of primary concern to European (and U.S.) anti-trust mandarins. I have to agree. But Mills isn't too concerned with Oracle's probable iron grip on Java ... err, licensing. IBM has a long-term license on the technology, the renewal of which is many years out. "We have plenty of time," said Mills.

Yes, plenty of time to make Apache Harmony a Java doppelganger -- not to mention the Java market-soothing effects of OSGi and Eclipse RCP. [Hey, IBM invented Java for the server for Sun, it can re-invent it for something else ... SAP?]

Unlike some software titans, Mills is clearly not living in a "reality distortion field" when it comes to Oracle's situation.

"We're in this for the long haul," said Mills, noting that he and IBM have been competing with Oracle since August 1993, when IBM launched its distributed DB2 product. "All of our market share comes at the expense of Oracle's," said Mills. "And we love to do benchmarks against Oracle."

Even as the Fates seem to be on IBM's side nowadays, the stakes remain high for the users of these high-end database technologies and products. It's my contention that we're only now entering the true data-driven decade. And all that data needs to run somewhere. And it's not going to be in MySQL, no matter who ends up owning it.

HP offers slew of products and services to bring cost savings and better performance to virtual desktops

Hewlett-Packard (HP) this week unleashed a barrage of products aimed at delivering affordable and simple computing experiences to the desktop.

These include thin-client and desktop virtualization solutions, as well as a multi-seat offering that can double computing seats. At the same time, the company targeted the need for data security with a backup and recovery system for road warriors. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

The thin-client offerings from the Palo Alto, Calif. company include the HP t5740 and HP t5745 Flexible Series, which feature Intel Atom N280 processors and an Intel GL40 chipset. They also provide eight USB 2.0 ports and an optional PCI expansion module for easy upgrades.

The Flexible Series thin clients support rich multimedia for visual display solutions, including the new HP LD4700 47-inch Widescreen LCD Digital Signage Display, which can run in both bright and dim lighting while maintaining longevity, and can be set in either a horizontal or vertical position. With the new HP Digital Signage Display (DSD) Wall Mount, users can hang the display on a wall to showcase videos, graphics or text in a variety of commercial settings where an extra-large screen is desired.

The HP t5325 Essential Series Thin Client is a power-efficient thin client with a new interface that simplifies setup and deployment. All new HP thin clients include intuitive setup tools to streamline configuration and management. These include the ThinPro Setup Wizard for Linux and HP Easy Config for Microsoft Windows.

In addition, HP thin clients also include on-board utilities that automate deployment of new connections, properties, low-bandwidth add-ons, and image updates from one centralized repository to thousands of thin clients.

Client virtualization

Three new client virtualization architectures combine Citrix XenDesktop 4, Citrix XenApp or VMware View with HP ProLiant servers, storage and thin clients to provide midsize to large businesses with a range of scalable offerings.

HP ProLiant WS460c G6 Workstation Blade brings centralized, mission-critical security to workstation computing and allows individuals or teams to work and collaborate remotely and securely. This solution meets the performance and scalability needs for high-end visualization and handling of large model sizes demanded by enterprise segments such as engineering and oil and gas.

HP Client Automation 7.8, part of the HP Business Service Automation software portfolio, allows customers to deploy and migrate to a virtual desktop infrastructure environment and manage it through the entire life cycle with a common methodology that reduces management costs and complexity. Customers can also capture inventory and usage information to help size their initial virtual client deployment and reoptimize as end-user needs change over time.

The HP MultiSeat Solution stretches the computing budgets of small businesses and other resource-constrained organizations by delivering up to twice the computing seats as traditional PCs for the same IT spend.

HP MultiSeat uses the excess computing capacity of a single PC to give up to 10 simultaneous users an individualized computing experience. This is designed to help organizations affordably increase computing seats and provide a simple setup, as well as reduce energy consumption by as much as 80 percent per user over traditional PCs.

Data protection and backup

To address the problem of mobile workers -- now estimated at 25 percent of the workforce -- potentially losing company data, HP is offering HP Data Protector Notebook Extension, which can back up and recover data outside the corporate network, even while the worker is working remotely and offline.

With Data Protector Notebook Extension, data is captured instantly and backed up automatically each time a user changes, creates, or receives a file. The data is then stored temporarily in a local repository pending transfer to the network data vault for full backup and restore capabilities. With single-click recovery, users can recover their own files without initiating help desk calls.

Deduplication, data encryption, and compression techniques help to maximize bandwidth efficiency and ensure security. The user’s storage footprint is reduced by deduplication of multiple copies of data. All of the user’s data is then stored encrypted and compressed, and expired versions are cleaned up.
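As a rough sketch of the staging pipeline described above (capture a changed file, deduplicate, compress, stage locally for later transfer), here is a hypothetical Python model. The class and chunking scheme are invented for illustration; encryption and the network transfer to the data vault are omitted.

```python
import hashlib
import zlib

class LocalBackupStage:
    """Illustrative local repository: dedup by content hash, then compress."""

    def __init__(self):
        self.chunks = {}   # content hash -> compressed bytes (the local repository)
        self.files = {}    # file path -> ordered list of chunk hashes

    def capture(self, path, data, chunk_size=4096):
        """Capture a changed file: split, dedup, compress, stage."""
        hashes = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.chunks:          # dedup: store each chunk once
                self.chunks[digest] = zlib.compress(chunk)
            hashes.append(digest)
        self.files[path] = hashes

    def restore(self, path):
        """Single-click style recovery from the local stage."""
        return b"".join(zlib.decompress(self.chunks[h]) for h in self.files[path])

stage = LocalBackupStage()
stage.capture("report.doc", b"quarterly numbers " * 500)
stage.capture("copy.doc", b"quarterly numbers " * 500)   # duplicate content
print(len(stage.chunks))  # 3 chunks staged; the duplicate file added none
```

Content-hash deduplication is why a second copy of the same file adds nothing to the local repository's footprint.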

HP introduced HP Backup and Recovery Fast Track Services, a suite of scalable service engagements that help ensure a successful implementation of HP Data Protector and HP Data Protector Notebook Extension.

Workshops and services

To help companies chart their way to client virtualization, HP is also offering a series of workshops and services:
  • The Transformation Experience Workshop is a one-day intensive session to help customers build their strategy for virtualized solutions, identify a high-level roadmap, and get executive consensus.

  • The Business Benefit Workshop allows customers to identify, quantify and analyze the business benefits of client virtualization, as well as set return-on-investment targets prior to entering the planning stage.

  • An Enhanced HP Solution Architecture and Pilot Service ensures the successful integration of the client virtualization solution into the customer’s infrastructure through a clear roadmap, architectural blueprint, and phased implementation strategy.
Products that are currently available include the t5740 Flexible Series Thin Client, at $429; the t5745 Flexible Series Thin Client, at $399; the LD4700 47-inch Widescreen LCD Digital Signage Display, starting at $1,799; and the ProLiant WS460c G6 Blade Workstation, starting at $3,044.

The t5325 Essential Series Thin Client starts at $199 and is expected to be available Dec. 1.

Elastra beefs up automation offering for enterprise cloud computing

Elastra Corp., which provides application infrastructure automation, has upped the ante with the announcement this week of Elastra Cloud Server (ECS) 2.0 Enterprise Edition. The new addition from the San Francisco company will help IT organizations leverage the economics of cloud computing, while preserving existing architectural practices and corporate policies.

Relying on an increased level of automation, the enterprise edition:
  • Automatically generates deployment plans and provisions sophisticated systems that are optimized to minimize operational and capital expenses. At the same time, applications are deployed to be compliant with the customers’ own sets of policies, procedures, and service level agreements (SLAs).

  • Cuts the lead times IT needs to create complex development, testing, and production environments by automating the processes traditionally managed by hand or via hand-crafted scripts.

  • Lets IT organizations maintain control of their operations using familiar tools and technologies while delivering on-demand, self-service system provisioning to their users.
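A minimal sketch of the kind of policy-checked plan generation described in the first bullet: pick the cheapest provisioning option that still satisfies every corporate policy. The instance sizes, policy rules, and plan format here are invented for illustration, not Elastra's actual model.

```python
def generate_plan(app, policies):
    """Pick the cheapest instance size per tier that satisfies every policy."""
    sizes = [("small", 0.10), ("medium", 0.20), ("large", 0.40)]  # $/hour (illustrative)
    plan = []
    for tier, needs in app.items():
        for size, cost in sizes:                       # cheapest first
            candidate = {"tier": tier, "size": size, "cost": cost, **needs}
            if all(check(candidate) for check in policies):
                plan.append(candidate)
                break
        else:
            raise ValueError(f"no compliant option for tier {tier}")
    return plan

# Example policies: an SLA floor and a per-tier budget cap.
policies = [
    lambda c: not (c.get("sla") == "gold" and c["size"] == "small"),
    lambda c: c["cost"] <= 0.40,
]
app = {"web": {"sla": "silver"}, "db": {"sla": "gold"}}
plan = generate_plan(app, policies)
print([(p["tier"], p["size"]) for p in plan])  # [('web', 'small'), ('db', 'medium')]
```

The point of automating this step is exactly what the bullets claim: the plan minimizes cost while remaining compliant by construction, with no hand-crafted scripts.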
The beta program for the enterprise edition of Elastra Cloud Server involved customers from a variety of industries including: a large European telecommunications company, a leading US federal government systems integrator, and a major IT services and outsourcing company.

Elastra offers a free edition of ECS running on Amazon Web Services and an enterprise edition for private data centers.

I was impressed with Elastra when I was initially briefed in 2007. They have many of the right features for what the cloud market will demand. More data centers will be deploying "private cloud" attributes, and those will become yet larger portions of modern data centers.

Monday, November 16, 2009

BriefingsDirect analysts discuss business commerce clouds: Wave of the future or old wine in a new bottle?

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript, or download a copy. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Welcome to the latest BriefingsDirect Analyst Insights Edition, Vol. 46. Our topic for this episode of BriefingsDirect Analyst Insights Edition centers on "business commerce clouds." As the general notion of cloud computing continues to permeate the collective IT imagination, an offshoot vision holds that multiple business-to-business (B2B) players could use the cloud approach to build extended business process ecosystems.

It's sort of like a marketplace in the cloud on steroids, on someone else's servers, perhaps to engage on someone's business objectives, and maybe even satisfy some customers along the way. It's really a way to make fluid markets adapt at Internet speed, at low cost, to business requirements, as they come and go.

I, for one, can imagine a dynamic, elastic, self-defining, and self-directing business-services environment that wells up around the needs of a business group or niche, and then subsides when lack of demand dictates. Here's an early example of how it works, in this case for food recall.

The concept of this business commerce cloud was solidified for me just a few weeks ago, when I spoke to Tim Minahan, chief marketing officer at Ariba. I've invited Tim to join us to delve into the concept, and the possible attractions, of business commerce clouds. We're also joined by this episode's IT industry analyst guests: Tony Baer, senior analyst at Ovum; Brad Shimmin, principal analyst at Current Analysis; Jason Bloomberg, managing partner at ZapThink; JP Morgenthal, independent analyst and IT consultant, and Sandy Kemsley, independent IT analyst and architect. The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

This periodic discussion and dissection of IT infrastructure related news and events, with a panel of industry analysts and guests, comes to you with the help of our charter sponsor, Active Endpoints, maker of the ActiveVOS, visual orchestration system, and through the support of TIBCO Software.

Here are some excerpts:
Minahan: When we talk about business commerce clouds, what we're talking about is leveraging the cloud architecture to go to the next level. When folks traditionally think of the cloud or technology, they think of managing their own business processes. But, as we know, if we are going to buy, sell, or manage cash, you need to do that with at least one, if not more, third parties.

The business commerce cloud leverages cloud computing to deliver three things. It delivers the business process application itself as a cloud-based or software-as-a-service (SaaS)-based service. It delivers a community of enabled trading partners that can quickly be discovered, connected to, and collaborated with.

And, the third part is around capabilities -- the ability to dial up or dial down, whether it be expertise, resources, or other predefined best practice business processes -- all through the cloud.

... Along the way, what we [at Ariba] found was that we were connecting all these parties through a shared network that we call the Ariba Supplier Network. We realized we weren't just creating value for the buyers, but we were creating value for the sellers.

They were pushing us to develop new ways for them to create new business processes on the shared infrastructure -- things like supply chain financing, working capital management, and a simple way to discover each other and assess who their next trading partners may be.

... In the past year, companies have processed $120 billion worth of purchased transactions and invoices over this network. Now, they're looking at new ways to find new trading partners -- particularly as the incidence of business bankruptcies are up -- as well as extend to new collaborations, whether it be sharing inventory or helping to manage their cash flow.

Baer: I think there are some very interesting possibilities, and in certain ways this is very much an evolutionary development that began with the introduction of EDI 40 or 45 years ago.

Actually, if you take a look at supply-chain practices among some of the more innovative sectors, especially consumer electronics, where you deal with an industry that's very volatile in both technology and consumer taste, this whole idea of virtualizing the supply chain, where different partners take on greater and greater roles in enabling each other, is very much a direct follow-on to all that.

Roughly 10 years ago, when we were going through the Internet 1.0 or the dot-com revolution, we started getting into these B2B online trading hubs with the idea that we could use the Internet to dynamically connect with business partners and discover them. Part of this really seemed to go against the trend of supply-chain practice over the previous 20 years, which was really more to consolidate on a known group of partners as opposed to spontaneously connecting with them.

Shimmin: ... I look at this as an enabler, in a positive way. What the cloud does is allow what Tim was hinting at -- with more spontaneity, self-assembly, and visibility into supply chains in particular -- that you didn't really get before with the kind of locked down approach we had with EDI.

That's why I think you see so many of those pure-play EDI vendors like GXS, Sterling, SEEBURGER, Inovis, etc. not just opening up to the Internet, but opening up to some of the more cloudy standards like cXML and the like, and really doing a better job of behaving like we in the 2009-2010 realm expect a supply chain to behave, which is something that is much more open and much more visible.

Kemsley: ... I think it has huge potential, but one of the issues that I see is that so many companies are afraid to start to open up, to use external services as part of their mission-critical businesses, even though there is no evidence that a cloud-based service is any less reliable than their internal services. It's just that the failures that happen in the cloud are so much more publicized than their internal failures that there is this illusion that things in the cloud are not as stable.

There are also security concerns as well. I have been at a number of business process management (BPM) conferences in the last month, since this is conference season, and that is a recurring theme. Some of the BPM vendors are putting their products in the cloud so that you can run your external business processes purely in the cloud, and obviously connect to cloud-based services from those.

A lot of companies still have many, many problems with that from a security standpoint, even though there is no evidence that that's any less secure than what they have internally. So, although I think there is a lot of potential there, there are still some significant cultural barriers to adopting this.

Minahan: ... The cloud provider, because of the economies of scale they have, oftentimes provides better security and can invest more in security -- partitioning, and the like -- than many enterprises can deliver themselves. It's not just security. It's the other aspects of your architectural performance.

Bloomberg: ... I am coming at it from a skeptic's perspective. It doesn’t sound like there's anything new here. ... We're using the word "cloud" now, and we were talking about "business webs." I remember business webs were all the rage back when Ariba had their first generation of offerings, as well as Commerce One and some of the other players in that space.

Age-old challenges

The challenges then are still the challenges now. Companies don't necessarily like doing business with other organizations that they don't have established relationships with. The value proposition of the central marketplaces has been hammered out now. If you want to use one, they're already out there and they're already matured. If you don't want to use one, putting the word "cloud" on it is not going to make it any more appealing.

Morgenthal: ... Putting additional information in the cloud and making value out of it adds some overall value relative to the cost of the information or the cost of running the system, so you can derive a few things. But, ultimately, the same problems in driving a community to work together, do business together, and exchange product through an exchange are still there.

... What's being done through these environments is the exchange of money and goods. And, it's the overhead related to doing that, that makes this complex. RollStream is another startup in the area that's trying to make waves by simplifying the complexities around exchanging the partner agreements and doing the trading partner management using collaborative capabilities. Again, the real complexity is the business itself. It's not even the business processes. The data is there.

... Technology is a means to an end. The end that's got to get fixed here isn't an app fix. It's a community fix. It's a "how business gets done" fix. Those processes are not automated. Those are human tasks.

Minahan: ... As it applies to the cloud and the commerce cloud, what's interesting here is the new services that can be available. It's different. It's not just about discovering new trading partners. It's about creating efficiencies and more effective commerce processes with those trading partners.

I'll give you a good example. I mentioned before about the Ariba Network with $111 billion worth of transactions and invoices being transferred over this every year for the past 10 years. That gives us a lot of intelligence that new companies are coming on board.

An example would be The Receivables Exchange. Traditionally sellers, if they wanted to get their cash fast, could factor the receivables at $0.25 on the dollar. This organization recognized the value of the information that was being transacted over this network and was able to create an entirely new service.

They were able to mitigate the risk, and provide supply chain financing at a much lower basis -- somewhere between two to four percent by using the historical information on those trading relationships, as well as understanding the stability of the buyer.
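The arithmetic behind that comparison, using the article's figures applied to a hypothetical $100,000 invoice:

```python
# Back-of-the-envelope comparison of the two financing options mentioned
# above. The percentages are the article's; the invoice amount is invented.
invoice = 100_000.0

factored = invoice * (1 - 0.25)       # traditional factoring at $0.25 on the dollar
financed_low = invoice * (1 - 0.02)   # network-based financing at 2 percent ...
financed_high = invoice * (1 - 0.04)  # ... or at 4 percent

print(f"factoring yields  ${factored:,.0f}")                              # $75,000
print(f"financing yields  ${financed_high:,.0f} to ${financed_low:,.0f}") # $96,000 to $98,000
```

On those numbers, the network-informed financing leaves the seller with roughly $21,000 to $23,000 more per $100,000 invoice than traditional factoring.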

Because folks are in a shared infrastructure here, new services can be continually introduced, dialed up, and dialed down. It's a lot different than a rigid EDI environment or just a discovery marketplace. ... What we're seeing with our customers is that the real benefits of the cloud come in three areas: productivity, agility, and innovation.

... When folks talk about cloud, they really think about the infrastructure, and what we are talking about here is a business service cloud.

Gartner calls it the business process utility, which ultimately is a form of technology-enabled business process outsourcing. It's not just the technology. The technology or the workflow is delivered in the cloud or as a web-based service, so there is no software, hardware, etc. for the trading partners to integrate, to deploy or maintain. That was the bane of EDI private VANs.

The second component is the community. Already having an established community of trading partners who are actually conducting business and transactions is key. I agree with the statement that it comes down to the humans and the companies having established agreements. But the point is that it can be built upon a large trading network that already exists.

The last part, which I think is missing here and is so interesting about the business commerce cloud, is the capabilities. It's the ability for either the solution provider or other third parties to deliver skills, expertise, and resources into the cloud as a web-based service.

It's also the information that can be garnered off the community to create new web-based services and capabilities that folks either don't have within their organization or don't have the ability or wherewithal to go out and develop and hire on their own. There is a big difference between cloud computing and these business service clouds that are growing.

Shimmin: ... The fuller picture is to look at this as a combination of [Apple App Store] and the Amazon marketplace. That's where I think you will see the most success with these commerce clouds -- a very specific community of like-minded suppliers and purchasers that want to get together and open their businesses up to one another.

... A community of companies wants to be able to come together affordably, so that the SMB can on-board an exchange at an affordable rate. That's really been the problem with most of these large-scale EDI solutions in the past. It's so expensive to bring on the smaller players that they can't play.

... When you have that sort of like-mindedness, you have the wherewithal to collaborate. But, the problem has always been finding the right people, getting to that knowledge that people have, and getting them to open it up. That's where the social networking side of this comes in. That's where I see the big EDI guns I was talking about and the more modernized renditions opening up to this whole Google Wave notion of what collaboration means in a social networking context.

That's one key area -- being able to have the collaboration and social networking during the modeling of the processes.



Minahan: ... We're seeing that already through the exchange that we have amongst our customers or around our solutions. We're also seeing that in a lot of the social networking communities that we participate in around the exchange of best practices. The ability to instantiate that into reusable workflows is something that's certainly coming.

Folks are always asking these days, "We hear a lot about this cloud. What business processes or technologies should we put in the cloud?" When you talk about that, the most likely ones are inter-enterprise, whether around commerce, talent management, or customer management, because it's what happens between enterprises where a shared infrastructure makes the most sense.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript, or download a copy. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

ZapThink explores the four stages of SOA governance that lead to business agility

This guest post comes courtesy of Jason Bloomberg, managing partner at ZapThink.

By Jason Bloomberg

For several years now, ZapThink has spoken about SOA governance "in the narrow" vs. SOA governance "in the broad." SOA governance in the narrow refers to governance of the SOA initiative, and focuses primarily on the service lifecycle.

When vendors try to sell you SOA governance gear, they're typically talking about SOA governance in the narrow. SOA governance in the broad, in contrast, refers to IT governance in the SOA context. In other words, how will SOA help with IT governance (and by extension, corporate governance) once your SOA initiative is up and running?

In both our Licensed ZapThink Architect Boot Camp and our newer SOA and Cloud Governance Course, we also point out how governance typically involves human, communication-centric activities like architecture reviews, human management, and people deciding to comply with policies. We point out this human context for governance to contrast it with the technology context that inevitably becomes the focus of SOA governance in the narrow. There is an important technology-centric SOA governance story to be told, of course, as long as it's placed into the greater governance context.

One question we haven't yet addressed in depth, however, is how these two contrasts -- narrow vs. broad, human vs. technology -- fit together. Taking a closer look, there's an important trend taking shape, as organizations mature their approach to SOA governance, and with it, the overall SOA effort. Following this trend to its natural conclusion highlights some important facts about SOA, and can help organizations understand where they want to end up as their SOA initiative reaches its highest levels of maturity.

Introducing the SOA governance grid

Whenever faced with two orthogonal contrasts, the obvious thing to do is put them in a grid. Let's see what we can learn from such a diagram:



The ZapThink SOA governance grid

First, let's take a look at what each square contains, starting with the lower left corner and moving clockwise, because as we'll see, that's the sequence that corresponds best to increasing levels of SOA maturity.


1. Human-centric SOA governance in the narrow

As organizations first look at SOA and the governance challenge it presents, they must decide how they want to handle various governance issues. They must set up a SOA governance board or other committee to make broad SOA policy decisions. We also recommend setting up a SOA Center of Excellence to coordinate such policies across the whole enterprise.

These policy decisions initially focus on how to address business requirements, how to assemble and coordinate the SOA team, and what the team will need to do as they ramp up the SOA effort. The output of such SOA governance activities tends to be written documents and plenty of conversations and meetings.

The tools architects use for this stage are primarily communication-centric, namely word processors and portals and the like. But this stage is also when the repository comes into play as a place to put many such design time artifacts, and also where architects configure design time workflows for the SOA team. Technology, however, plays only a supporting role in this stage.

2. Technology-centric SOA governance in the narrow

As the SOA effort ramps up, the focus naturally shifts to technology. Governance activities center on the registry/repository and the rest of the SOA governance gear. Architects roll up their sleeves and hammer out technology-centric policies, preferably in an XML format that the gear can understand. Representing certain policies as metadata enables automated communication and enforcement of those policies, and also makes it more straightforward to change those policies over time.
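As an illustration of representing a policy as metadata that governance gear can understand and enforce automatically, here is a hypothetical XML policy and a check against a service description. The policy vocabulary is invented for this sketch; it is not WS-Policy or any vendor's actual format.

```python
import xml.etree.ElementTree as ET

# Hypothetical technology-centric policy expressed as XML metadata.
POLICY = """
<policy name="transport-security">
  <assertion>require-https</assertion>
  <assertion>max-message-kb=256</assertion>
</policy>
"""

def enforce(policy_xml, service):
    """Return the list of policy assertions the service violates."""
    root = ET.fromstring(policy_xml)
    violations = []
    for assertion in root.iter("assertion"):
        rule = assertion.text.strip()
        if rule == "require-https" and not service["endpoint"].startswith("https://"):
            violations.append(rule)
        elif rule.startswith("max-message-kb="):
            limit = int(rule.split("=")[1])
            if service["max_message_kb"] > limit:
                violations.append(rule)
    return violations

svc = {"endpoint": "http://orders.example.com/v1", "max_message_kb": 512}
print(enforce(POLICY, svc))  # ['require-https', 'max-message-kb=256']
```

Because the policy lives as data rather than in code or documents, changing it over time means editing the metadata, not redeploying the enforcement point.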

This stage is also when run time SOA governance begins. Certain policies must be enforced at run time, either within the underlying runtime environment, in the management tool, or in the security infrastructure. At this point the SOA registry becomes a central governance tool, because it provides a single discovery point for run time policies. Tool-based interoperability also rises to the fore, as WS-I compliance and compliance with the Governance Interoperability Framework or the CentraSite Community become essential governance policies.

3. Technology-centric SOA governance in the broad

The SOA implementation is up and running. There are a number of services in production, and their lifecycle is fully governed through hard work and proper architectural planning. Taking the SOA approach to responding to new business requirements is becoming the norm. So, when new requirements mean new policies, it's possible to represent some of them as metadata as well, even though the policies aren't specific to SOA.

Such policies are still technology-centric, for example, security policies or data governance policies or the like. Fortunately, the SOA governance infrastructure is up to the task of managing, communicating, and coordinating the enforcement of such policies. By leveraging SOA, it's possible to centralize policy creation and communication, even for policies that aren't SOA-specific.

Sometimes, in fact, new governance requirements can best be met with new services. For example, a new regulatory requirement might lead to a new message auditing policy. Why not build a service to take care of that? This example highlights what we mean by SOA governance in the broad. SOA is in place, so when a new governance requirement comes over the wall, we naturally leverage SOA to meet that requirement.
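A toy sketch of "build a service to take care of that": a hypothetical auditing wrapper that records each message before handing it to the real service, so the new regulatory requirement is met by composition rather than by modifying every existing service. All names here are illustrative.

```python
import datetime

class AuditingService:
    """Wraps any message handler and records an audit trail of every message."""

    def __init__(self, inner):
        self.inner = inner       # the real service being wrapped
        self.audit_log = []      # (timestamp, message) records

    def handle(self, message):
        stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
        self.audit_log.append((stamp, message))   # audit first ...
        return self.inner(message)                # ... then delegate

orders = AuditingService(lambda msg: f"processed {msg}")
orders.handle("PO-1001")
orders.handle("PO-1002")
print(len(orders.audit_log))  # 2
```

The wrapped service is unaware it is being audited, which is the point: the governance requirement arrives "over the wall" and SOA absorbs it as one more service in the composition.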

4. Human-centric SOA governance in the broad

This final stage is the most thought-provoking of all, because it represents the highest maturity level. How can SOA help with the human activities that form the larger picture of governance in the organization? Clearly, XML representations of technical policies aren't the answer here. Rather, it's how implementing SOA helps expand the governance role architecture plays in the organization. It's a core best practice that architecture should drive IT governance. When the organization has adopted SOA, then SOA helps to inform best practices for IT governance overall.

The impact of SOA on enterprise architecture (EA) is also quite significant. Now that EAs increasingly realize that SOA is a style of EA, EA governance is becoming increasingly service-oriented in form as well. It is at this stage that part of the SOA governance value proposition benefits the business directly, by formalizing how the enterprise represents capabilities consistent with the priorities of the organization.

The ZapThink take

The big win in moving to the fourth stage is in how leveraging SOA approaches to formalize EA governance impacts the organization's business agility requirement. In some ways business agility is like any other business requirement, in that proper business analysis can delineate the requirement to the point that the technology team can deliver it, the quality team can test for it, and the infrastructure can enforce it. But as we've written before, as an emergent property of the implementation, business agility is a different sort of requirement from more traditional business requirements in a fundamental way.

A critical part of achieving this business agility over time is to break down the business agility requirement into a set of policies, and then establish, communicate, and enforce those policies -- in other words, provide business agility governance. Only now, we're not talking about technology at all. We're talking about transforming how the organization leverages resources in a more agile manner by formalizing its approach to governance by following SOA best practices at the EA level. Organizations must understand the role SOA governance plays in achieving this long-term strategic vision for the enterprise.

This guest post comes courtesy of Jason Bloomberg, managing partner at ZapThink.


SPECIAL PARTNER OFFER

SOA and EA Training, Certification,
and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.