Thursday, December 17, 2009

Executive interview: HP's Robin Purohit on how CIOs can contain IT costs while spurring innovation payoffs

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.

The latest BriefingDirect podcast delivers an executive interview with Robin Purohit, Vice President and General Manager for HP Software and Solutions.

I had the pleasure to recently sit down with Purohit to examine how CIOs are managing their IT budgets for 2010. During the economic recovery, the cost-containment conundrum of "do more for less" -- that is, while still supporting all of your business requirements -- is likely to remain the norm.

So this discussion centers on how CIOs are grappling with implementing the best methods for higher cost optimization in IT spending, while also seeking the means to improve innovation and business results. The interview coincides with HP's announcements this week at Software Universe in Germany on fast-tracks to safer cloud computing.

"Every CIO needs to be extremely prepared to defend their spend on what they are doing and to make sure they have a great operational cost structure that compares to the best in their industry," says Purohit.

The 25-minute interview is conducted by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Purohit: Well, just about every CIO I've talked to right now is in the middle of planning their next year’s budget. Actually, it's probably better to say preparing for the negotiation for next year’s budget. There are a couple of things.

The good news is that this budget cycle doesn’t look like last year’s. Last year’s was very tough, because the financial collapse really was a surprise to many companies, and it required people to very quickly constrain their capital spend, their OPEX spend, and just turn the taps off pretty quickly.

... [Now] they need to be able to prepare to make a few big bets, because the reality is that the smartest companies out there are using this downturn as an advantage to make some forward-looking strategic bets. If you don't do that now, the chances are that, two years from now, your company could be in a pretty bad position.

... There are a couple of pretty important things to get done. The first is to have an extremely good view of the capital you have, and where it is in the capital cycle. Getting all of that information -- timely, accurate, and at your fingertips -- so you can enter the planning cycle is extraordinarily important and fundamental.

When you are going to deploy new capital, always make sure that it's going to be able to be maintained and sustained in the lowest-cost way. The way we phrase this is, "Today's innovation is tomorrow’s operating cost."

When you do refresh, there are some great new ways of using capital on servers, storage, and networking that have a much lower cost structure, and are much easier to operate, than the systems we had three or four years ago.

In the past, we’ve seen mistakes made, where people deployed new capital without really thinking how they were going to drive the long-term cost structure down in operating that new capital.

This is where we really see an opportunity: To help customers put in place IT financial management solutions, which are not just planning tools -- not just understanding what you have -- but essentially a real-time financial analytic application that is as timely and accurate as an enterprise resource planning (ERP) system, or a business intelligence (BI) system that's supporting the company’s business process.

New business agenda

Companies want to see CIOs use capital to support the most important business initiatives they have, and usually those are associated with revenue growth -- expanding the sales force, new business units, some competitive program, or eventually a new ecommerce presence.

It's imperative that the CIO shows as much as possible that they're applying capital to things that clearly align with driving one of those new business agendas that's going to help the company over the next three years.

Now, in terms of how you do that, it's making sure that the capital spend you have -- everything in the data center -- is supporting a top business priority. That's the most important thing you can do.

One thing that won't change is that demand from the business will all of a sudden outstrip your supply of capital and labor. What you can do is make sure that every person you have, every piece of equipment you have, every decision you are making, is in the context of something that is supporting an immediate business need or a key element of business operation.

There are lots of opportunities to be disciplined in assessing your organization -- in how you spend capital, how you use that capital, and what your people are working on. I wouldn't call it waste; I would call it a better discipline around whether what you're doing truly is business critical or not.

If you don't get the people and process right, then new technologies, like virtualization or blade systems, are just going to cause more headaches downstream, because those things are fantastic ways of saving capital today. Those are the latest and greatest technologies. Four or five years ago, it was Linux and Windows Server.

It also means there are more things and more new things to manage. If you don't have extremely disciplined processes that are automated, if your whole team isn't working from one playbook on what those processes are, and if there isn't a collaborative -- and as automated as possible -- way for them to work on those processes, your operating costs are just going to increase as you embrace the new technologies that lower your capital. You've got to do both at the same time.

Say that you're a new CIO coming into an organization and you see a lack of standardization, a lack of centers of excellence, and a lot of growth through merger and acquisition. There is a ton of opportunity to take out operating cost.

The right governance


We've seen customers generally take out 5 to 10 percent when a new CIO comes on board, rationalizes everything that's being done, and introduces rigorous standardization. That's a quick win, but it's really there for companies that are probably a little earlier in the maturity cycle of how they run IT.

One of the new things that's possible now with the outsourcing model and the cloud model -- whether you want to call it cloud or software as a service (SaaS) -- is that there's an incredibly rich marketplace of boutique service shops and boutique technology providers that can provide you either knowledge or technology services on demand for a particular part of your IT organization.

The cost structures associated with running infrastructure as a service (IaaS) are dramatically lower and very compelling. So, if you can find a trusted provider, cloud computing allows you to move at least the lower-risk workloads there to experiment with those kinds of new techniques.

The other nice thing we like about cloud computing is that there is at least a perception that it is going to be pretty nimble, which means that you'll be able to move services in and out of your firewall, depending on where the need is, or how much demand you have.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.


Wednesday, December 16, 2009

Early thoughts on IBM buying Lombardi: Keep it simple

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

This has been quite a busy day, having seen IBM’s announcement come over the wire barely after the alarm went off. Lombardi has always been the little business process management (BPM) company that could.

In contrast to rivals like Pegasystems, which has a very complex, rule-driven approach, Lombardi’s approach has always been characterized by simplicity. In that sense, its approach mimicked that of Fuego before it was acquired by BEA, which of course was eventually swallowed up by Oracle.

We echo Sandy Kemsley’s thoughts of letdown about hopes for a Lombardi IPO. But even had the IPO been done, that would have postponed the inevitable. We agree with her that if IBM is doing this acquisition anyway, it makes sense to make Lombardi a first-class citizen within the IBM WebSphere unit.

Not surprisingly, IBM is viewing Lombardi for its simplicity. At first glance, it appears that Lombardi Teamworks, its flagship product, overlaps WebSphere BPM. Look under the hood, and WebSphere BPM is not a single engine, but the product of several acquisitions and internal development, including the document-oriented processes of FileNet and the application integration processes from Crossworlds.

So in fact Lombardi is another leg of the stool, and one that is considerably simpler than what IBM already has. In fact, this is very similar to how Oracle has positioned the old Fuego product alongside its enterprise BPM offering, which is built around IDS Scheer’s ARIS modeling language and tooling.

IBM’s strategy is that Lombardi provides a good way to open the BPM discussion at the department level. But significantly, on today's announcement call, IBM stated that once the customer wants to scale up, it would move the discussion to its existing enterprise-scale BPM technology. It cited a joint engagement at Ford -- where Lombardi works with the engineering department, while IBM works at the B2B trading-partner integration level -- as an example of how the two pieces would be positioned going forward.

James Governor of RedMonk had a very interesting suggestion that IBM could leverage the Lombardi technologies atop some of its Lotus collaboration tools. We also see good potential synergies with the vertical industry frameworks.

The challenge for IBM is preserving the simplicity of Lombardi products, which tend to be more department-oriented and bottom-up, vs. the IBM offerings that are enterprise-scale and top-down. Craig Hayman, general manager of the application and integration middleware (WebSphere) division, admitted on the announcement call that IBM has “struggled” in departmental, human-centric applications. In part that is due to IBM’s top-down enterprise focus, and in part to the fact that, all too often, IBM’s software is known more for richness than for ease of use.

A good barometer of how IBM handles the Lombardi integration will be how it handles Lombardi Blueprint and IBM WebSphere BlueWorks BPM. Blueprint is a wonderfully simple hosted process-definition service, while BlueWorks is also hosted but far more complex, with heavy strains of social computing.

We have tried Blueprint and found it to be a very straightforward offering that simply codifies your processes, generating Word or PowerPoint documentation, and BPMN models. The cool thing is that if you use it only for documentation, you have gotten good value out of it – and in fact roughly 80 percent of Blueprint customers simply use it for that.

On today's call, Hayman said that IBM plans to converge both products. That's a logical move. But please, please, please, don’t screw up the simplicity of Blueprint. If necessary, make it a stripped down face of BlueWorks.

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

New HP offerings enable telcos to deliver safer cloud services faster

Hewlett-Packard (HP) has significantly elevated its efforts to become an indispensable full-service supplier to cloud computing aspirants, especially telecommunications, mobile and Internet service providers.

At Software Universe in Hamburg, Germany, HP today announced three new offerings designed to enable cloud providers and enterprises to securely lower barriers to adoption and accelerate the time-to-benefit of cloud-delivered services. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Timing here is critical. As the end users of cloud services seek flexible infrastructure, IP voice, unified communications and call center automation, cloud providers need a fast-track to such low-risk cloud capabilities. HP is also wasting no time as it competes yet more broadly against Cisco Systems in the race to become the mainstream means to cloud services.

Among the new offerings:
  • HP Operations Orchestration, which will automate the provisioning of services within the existing infrastructure, allowing businesses to seamlessly increase capacity through integration with such things as Amazon Elastic Compute Cloud. Look for other public cloud providers to offer this as well.

  • HP Communication as a service (CaaS), a cloud program that will enable service providers to offer small and mid-size businesses services delivered on an outsourced basis with utility pricing. CaaS includes an aggregation platform, four integrated communications services from HP and third parties, as well as the flexibility to offer other on-demand services.

  • HP Cloud Assure for Cost Control, designed to help companies optimize cloud costs and gain predictability in budgeting by ensuring that they right-size their compute footprints.
Cloud Assure was introduced by HP last Spring, and today's announcement moves it to the next level. Neil Ashizawa, manager of HP's Software-as-a-Service (SaaS) Products and Cloud Solutions, recently spoke with me about Cloud Assure for cost control. He told me:
"When we first launched Cloud Assure earlier this year, we focused on the top three inhibitors, which were security of applications in the cloud, performance of applications in the cloud, and availability of applications in the cloud. We wanted to provide assurance to enterprises that their applications will be secure, they will perform, and they will be available when they are running in the cloud.

"The new enhancement that we're announcing now is assurance for cost control in the cloud. Oftentimes enterprises do make that step to the cloud, and a big reason is that they want to reap the benefits of the cost promise of the cloud, which is to lower cost."
He then explained how Cloud Assure for cost control works:
"Cloud Assure for cost control solution comprises both HP Software and HP Services provided by HP SaaS. The software itself is three products that make up the overall solution.
  • "The first one is our industry-leading Performance Center software, which allows you to drive load in an elastic manner. You can scale up the load to very high demands and scale back load to very low demand, and this is where you get your elasticity planning framework.

  • "The second solution from a software’s perspective is HP SiteScope, which allows you to monitor the resource consumption of your application in the cloud. Therefore, you understand when compute resources are spiking or when you have more capacity to drive even more load.

  • "The third software portion is HP Diagnostics, which allows you to measure the performance of your code. You can measure how your methods are performing, how your SQL statements are performing, and if you have memory leakage."
These HP-driven means to attain cloud benefits sooner rather than later come in response to recent surveys in which industry executives clearly stated a need for more flexible computing options in the face of uncertain economic times. They want to be able to dial up and down their delivery of services, but without the time and cost of building out the capital-intensive traditional delivery models.

The market is also looking to cloud services -- be they on-premises, from third-parties or both -- to provide:
  • Elasticity – the ability to rapidly respond to changing business needs with automated provisioning of cloud and physical services
  • Cost control – by optimizing efficiency and gaining predictability of costs by ensuring cloud compute resources are “right sized” to support fluctuating business demands
  • Risk reduction – through automated service provisioning that reduces manual errors, non-compliance snafus, and downtime of business services and processes.
I think HP has correctly identified a weakness in the SaaS and cloud markets. In many cases, applications and productivity services came to market first, but lacked the enterprise-caliber infrastructure, management, and auditing and fiscal control mechanisms. Now, HP is bringing these traditional IT requirements to the cloud domains, and making them available to the large market of existing providers.

The cloud horse is now in front of the cart, which means the providers can do their jobs better, and more end users can adopt secure cloud services in ways that reassure their managers and adhere to their governance policies.

BriefingsDirect contributor Carlton Vogt provided editorial assistance and research on this post.



Tuesday, December 8, 2009

Fujitsu ascends to new cloud offerings, expands data center to cover enterprises and ISVs

Many companies are intrigued by the potential cost savings and agility promised by cloud computing, but a lot of them are unsure about how and when to get in. Fujitsu is rising to the occasion with end-to-end cloud services designed to help both enterprises and independent software vendors (ISVs).

Fujitsu says its new solution will allow companies to migrate existing multi-platform and multi-vendor mission-critical systems to enterprise clouds. The benefit is that it will remove capital-intensive investments in technology and replace them with a pay-as-you-go strategy.

Scheduled for launch in the first quarter of 2010, the Fujitsu services have already attracted several ISVs, who plan to offer their own services to clients, using a software-as-a-service (SaaS) model. To accommodate the move, Fujitsu has upgraded its Sunnyvale, Calif. data center to the Tier III level and will support the cloud application programming interface (API).

Designed for enterprises in manufacturing, finance, healthcare, retail and other compute- and data-intensive industries, Fujitsu's cloud solutions include system construction, operations, maintenance services and full-featured vertical applications. In order to comply with vertical industry standards and regulations, retail transactional applications will be hosted in a Payment Card Industry (PCI) compliant data center, and health care applications will be hosted in a Health Insurance Portability and Accountability Act (HIPAA) compliant environment.

Going green

In addition, the multi-million dollar data-center upgrade and expansion will more than double available raised floor space, reduce carbon emissions by 21 percent, and increase available power and cooling capabilities, dramatically expanding the data center’s effective capacity by over 800 percent.

The redesign leverages technology from Fujitsu, including its PalmSecure palm vein recognition technology for physical access control, Fujitsu 10-gigabit switch technology for core backbone fabric, and Fujitsu PRIMERGY server and ETERNUS storage technologies. Sunnyvale will join other premier Fujitsu Tier-III+ and Tier IV facilities in the Americas, including Dallas, Montreal and Trinidad, in delivering high-availability IT solutions.

Fujitsu recently announced enhancements to its Interstage Cloud business process management (BPM) service, which will be migrated to the new secure cloud platform as soon as it is available.

The goal of the cloud API submitted by Fujitsu to the Open Cloud Standards Incubator of the Distributed Management Task Force (DMTF) is to maintain interoperability among various cloud computing environments, so clients don't need to worry about vendor lock-in when adopting a particular cloud computing platform. Fujitsu plans to actively participate in the standardization process of the DMTF and aims to implement the API as part of its next-generation infrastructure-as-a-service (IaaS) platform.

Among the first ISVs to take advantage of the new cloud services offerings are CoolRock Software, an ISV specializing in email management software for archiving, ediscovery and collaboration, and Intershop Communications, a leading ecommerce solutions ISV.


Monday, December 7, 2009

TIBCO borrows a Twitter page to bring better information to enterprise workers

TIBCO Software will release in 2010 software that lets people search for and then track corporate information by subject matter in a similar way to how they might follow people on Twitter.

This is a clear sign that the enterprise software and social software worlds are merging. Get ready to see a lot more.

The idea behind tibbr -- the name an obvious play on “Twitter” -- is to help people find information related to their particular tasks and jobs quickly and easily by searching for information based on its subject matter, and then subscribing to relevant feeds on those topics, the company said. [Disclosure: TIBCO is a sponsor of BriefingsDirect podcasts.]

Lack of information isn’t the main problem for enterprise systems these days. What's really needed is a useful interface and method for getting to the precise information needed quickly and easily, to help business workers do their jobs more efficiently. By taking a page out of the social networking playbook, TIBCO aims to let people access corporate information via a Twitter-like "update." The result: workers can find the information they need faster, so, in theory, they perform with far higher productivity.

In an interview with All Things D’s Ben Worthen, TIBCO CEO Vivek Ranadive said he got the idea for tibbr when reading -- what else? -- Twitter. More specifically, he said the inspiration came while he read updates to the micro-blogging service made by NBA basketball player Shaquille O’Neal.

With people spending – or arguably wasting -- so much time on social-networking applications outside of their everyday work tasks, companies have been looking for ways to apply social-networking technologies like real-time collaboration, status updates and Web presence information inside the firewall. TIBCO obviously sees tibbr as one way to do it.

I expect we'll see more ways that the social wall interface makes its way into the business IT domain. This interface could easily replace the email in-box as the place workers tend to "live" during their jobs. Google Wave clearly also sees this as a good fit.

And, of course, no one "wall" will do. We should also expect an aggregation of walls that will follow us, and also adapt in terms of what takes priority on the personalized wall -- automated via policies -- based on what we are doing. Or where we are doing it. Or both.

As TIBCO describes tibbr, it will let people set “subjects” that represent a user, an application or a process relevant to what tasks or functions someone performs in an organization. Through tibbr, they can subscribe to feeds by category – for example, Finance or Accounts Payable -- for specific information they think will be relevant to their jobs.
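To make the subject model concrete, here is a minimal, hypothetical sketch of how a subject-subscription feed could be structured. The class and field names are assumptions made for illustration only, not TIBCO's actual tibbr API or data model.

```python
# Minimal sketch of a tibbr-like subject feed: updates are tagged with a
# subject (a user, an application, or a process), and each worker's
# timeline is assembled from the subjects they choose to follow.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Set

@dataclass
class Update:
    subject: str          # e.g. "Finance", "Accounts Payable", "order-app"
    author: str           # a person or a system posting the update
    text: str
    posted_at: datetime = field(default_factory=datetime.utcnow)

@dataclass
class Subscriber:
    name: str
    subjects: Set[str] = field(default_factory=set)

    def follow(self, subject: str) -> None:
        self.subjects.add(subject)

class Feed:
    """Stores updates and serves each subscriber only the subjects they follow."""

    def __init__(self) -> None:
        self._updates: List[Update] = []

    def post(self, update: Update) -> None:
        self._updates.append(update)

    def timeline(self, who: Subscriber) -> List[Update]:
        return sorted(
            (u for u in self._updates if u.subject in who.subjects),
            key=lambda u: u.posted_at,
            reverse=True,
        )

# Usage: a worker follows "Accounts Payable" and sees only those updates.
feed = Feed()
feed.post(Update("Accounts Payable", "erp-system", "Invoice batch 42 approved"))
feed.post(Update("Finance", "bi-system", "Q4 forecast refreshed"))
worker = Subscriber("pat")
worker.follow("Accounts Payable")
print([u.text for u in feed.timeline(worker)])
```

The design point mirrors what TIBCO describes: the filter is the subject on the update, not the person who posted it, so an application or a business process can publish into someone's feed just as a colleague would.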

Tibbr is based on Silver, TIBCO’s own cloud-computing infrastructure platform. TIBCO unveiled Silver earlier this year as a rapid-application development and delivery system for companies that want to deploy cloud computing but are unsure how to get started.

The company also is pushing tibbr’s foundation on open standards as an advantage for companies that want to integrate it with other applications so it can become a part of someone’s daily workflow.

TIBCO plans to test tibbr out on its own employees beginning on Dec. 14 before rolling it out to customers in early 2010.

BriefingsDirect contributor Elizabeth Montalbano provided editorial assistance and research on this post.

Wednesday, December 2, 2009

BriefingsDirect analysts unpack the psychology of project management via 'Pragmatic Enterprise 2.0' and SOA

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or  download a copy. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Welcome to the latest BriefingsDirect Analyst Insights Edition, Vol. 47. Our topic this week centers on how to define, track and influence how people actually adapt to and adopt technology.

Any new information technology might be the best thing since sliced bread, but if people don’t understand the value or how to access it properly -- or if adoption is spotty, or held up by sub-groups, agendas, or politics -- then the value proposition is left in the dust. Perceptions count ... a lot.

A crucial element for avoiding and overcoming social and user dissonance with technology adoption is to know what you are up against, in detail. Yet, data and inferences on how people really feel about technology are often missing, incomplete, or inaccurate.

In this discussion, we hear from two partners who are working to solve this issue pragmatically. First, with regard to Enterprise 2.0 technologies and approaches. And, if my hunch is right, it could very well apply to service-oriented architecture (SOA) adoption as well.

I suppose you could think of this as a pragmatic approach to developing business intelligence (BI) values for people’s perceptions and their ongoing habits as they adopt technology in a business context.

So please join Michael Krigsman, president and CEO of Asuret, as well as Dion Hinchcliffe, founder and chief technology officer at Hinchcliffe & Co. to explain how Pragmatic Enterprise 2.0 works. Our panel also includes Joe McKendrick, prolific blogger and analyst;  Miko Matsumura, vice president and chief strategist at Software AG; Ron Schmelzer, managing partner at ZapThink;  Tony Baer, senior analyst at Ovum; Sandy Rogers, independent industry analyst, and Jim Kobielus, senior analyst at Forrester Research.

This periodic discussion and dissection of IT infrastructure related news and events, with a panel of industry analysts and guests, comes to you with the help of our charter sponsor, Active Endpoints, maker of ActiveVOS, a visual orchestration system, and through the support of TIBCO Software.

And the discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts: 
Hinchcliffe: ... As many of you know, we've been spending a lot of time over the last few years talking about how things like Web 2.0 and social software are moving beyond just what’s happening in the consumer space, and are beginning to really impact the way that we run our businesses.

More and more organizations are using social software, whether this is consumer tools or specific enterprise-class tools, to change the way they work. At my organization, we've been working with large companies for a number of years trying to help them get there.

This is the classic technology problem. Technology improves and gets better exponentially, but we, as organizations and as people, improve incrementally. So, there is a growing gap between what’s possible and what the technology can do, and what we are ready to do as organizations.

I've been helping organizations improve their businesses with things like Enterprise 2.0, which is social collaboration, using these tools, but with an enterprise twist. There are things like security, and other important business issues that are being addressed.

But, I never had a way of dealing with the whole picture. We find that folks need a deep introduction to what the implications are when you have globally visible, persistent collaboration using these very social models, and what those implications mean for the business.

Michael Krigsman, of course, is famous for his work in IT project risk -- what it takes for projects to succeed and what causes them not to succeed. I saw this as the last leg of the stool for a complete way of delivering these very new, very foreign models, yet highly relevant models, to the way that organizations run their business.

Businesses are about collaboration, teamwork, and people working together, but we have used things like email, and models that people trust a lot more than these new tools.

There is usually a lot of confusion and uncertainty about what’s really taking place and what the expectations are. And Michael, with Asuret, brings something to the table. When we package it as a service that essentially brings these new capabilities, these new technologies and approaches, it manages the uncertainty about what the expectations are and what people are doing.

Krigsman: Think about business transformation projects -- any type. This can be any major IT project, or any other type of business project as well. What goes wrong? If we are talking about IT, it's very tempting to say that the technology somehow screws up. If you have a major IT failure, a project is late, the project is over budget, or the project doesn’t meet expectations or plan, it's extremely easy to point the finger at the software vendor and say, "Well, the software screwed up."

If we look a little bit deeper, we often find the underlying drivers of why the project is not achieving its results. The underlying drivers tend to be things like mismatched expectations between different groups or organizations.

For example, the IT organization has a particular set of goals, objectives, restrictions, and so forth, through which they view the project. The line of business, on the other hand, has its own set of business objectives. Very often, even the language between these two groups is simply not the same.

As another example, we might say that the customer has a particular set of objectives and the systems integrator has its own objectives for the particular project. The customer wants to get it done as fast and as inexpensively as possible. The systems integrator is often -- and I shouldn’t make generalizations, but -- interested in maximizing their own revenue.

If we look inside each of these groups, we find that inside the groups you have divisions as well, and these are the expectation mismatches that Dion was referring to.

If we look at IT projects or any type of business transformation project, what’s the common denominator? It's the human element. The difficulty is how you measure, examine, and pull out of a project these expectations around the table. Different groups have different key performance indicators (KPIs), different measures of success, and so forth, which create these various expectations.

Amplifying weak signals

How do you pull that out, detect it inside the project, and then amplify what we might call these weak signals? The information is there. The information exists among the participants in the project. How do you then amplify that information, package it, and present it in a way that it can be shared among the various decision-makers, so that they have a more systematic set of inputs for making decisions consistently based on data, rather than anecdote? That’s the common thread.

... We're not selling software. We offer a service, and the service provides certain results. However, we've developed software, tools, methods, techniques, and processes that enable us to go through this process behind the scenes very efficiently and very rapidly.

Rogers: What we discovered in our studies is that one of the fundamental needs in running any type of business project -- an SOA project or an Enterprise 2.0 IT project -- is the ability to share information and expose that visibility to all parties at levels that will resonate with what matters to them the most, but also bring them outside of their own domain to understand where dependencies exist and how one individual or one system can impact another.

One of the keys, however, is understanding that the measurements and the information need to get past system-level elements. If you design your services around what business elements are there and what matters to the business, then you can get past that IT-oriented view and bring business stakeholders in, aligning management and business goals with what transpires in the project.

Any way that you can get that out -- a web-based, easy-access dashboard with the information -- and measure it regularly, you can allow that awareness to proliferate through the organization. Having that awareness can help build trust, and that’s critical for these projects.

Baer: What Dion and Michael are talking about is an excellent idea, in that, in any type of environment where there is a lack of communication and trust, data is essential to really steer things. Data, and also assurances with risk management and protection of IT and all that. But, the fact is that there are some real, clear hurdles.

An example is this project that my wife is working on at the moment. She was brought in as a consultant to a consulting firm that's working for the client, and each of them has very different interests. This is actually in a healthcare-related situation. They're trying to do some sort of compliance effort, and whoever was the fount of wisdom there postponed the most complex part of this problem to the very end. At the very end, they basically did a Hail Mary pass, bringing in a few more bodies.

They didn't look for domain expertise or anything. Essentially it's like having eight women be pregnant and having them give birth to a baby in a month. That's essentially the push they are doing.

On top of that, there is also a fear among each tribe of the other coming up with a solution that makes the other tribes look bad. So, I can't tell exactly the feedback from this, but I do know that my wife came in as a process expert. She had a pretty clear view on how to untie the bottlenecks.

Krigsman: We gather a lot of data. The essential elements have been identified during this conversation. ... It's absolutely accurate to look at this tribally. Tony spoke about tribal divisions and the social tribal challenges.

The fundamental trick is how to convert this kind of trust information. Jim was talking about collaborative project governance. All of this relates to the fact that you've got various groups of people. They have their own issues, their own KPIs, and so forth. How do you surface issues that could impact trust and then convert that to a form that can be examined dispassionately? I'd love to use the word "objectively," but we all know that being objective is a goal and it's never an outcome that you can ultimately reach.

At least you have a way to systematically and consistently have metrics that you can compare. And then ... when you want to have a fight, at least you are fighting about KPIs, and you don't have people sitting in a conference room saying, "Well, my group thinks this. We believe the project is 'blank,'" while somebody else says, "Well, my group thinks that." Well, let's have some common data that's collected across the various information silos and groups that we can then share and look at dispassionately.

Schmelzer: ... We think that the whole idea of project management is just an increasing fallacy in IT anyway. There is no such thing now as a truly discrete project.

Can you really say that some enterprise software that you may be buying, or building yourself, or maybe even sourcing as a service is really completely disconnected from all the other projects that you have going on or the other technology? The answer is, it is not.

So, it's very hard to do something like discrete project management, where you have a defined set of requirements, a defined timeline, and a defined budget, and make the assumption or the premise, which is false, that you're not going to be impacting any of the other concurrently running projects.

We think of this like a game of pick-up sticks. The enterprise is a collection of many different IT projects, some of which are ongoing, some of which may have been perceived to be dead or no longer in development, or maybe some are in the future. The idea that you could take any one of those little projects, and manipulate them without impacting the rest of the pile is clearly becoming false.

McKendrick: Michael and Dion, I think you're on the right track. In fact, it's all about organization. It's all about the way IT is organized within the company and, vice-versa, the way the company organizes the IT department. I’ll quote Mike Hammer, the consultant, not the detective, "Automate a mess and you get an automated mess." That's what's been happening with SOA.

Upper management either doesn't understand SOA or, if they do, it's bits and pieces -- do this, do that. They read Enterprise Magazine. The governance is haphazard, islands across the organization, tribal. Miko talks about this a lot in his talks about the tribal aspect. They have these silos and different interest groups conflicting.

There's a real issue with the way the whole process is managed. One thing I always say is that the organizations that seem to be getting SOA right, as Michael and Dion probably see with the Enterprise 2.0 world, are usually the companies that are pretty progressive. They have a pretty good management structure and they're able to push a lot of innovations through the organization.

Matsumura: ... This type of an approach really reflects the evolution of the best practice of adoption. Some of the themes that we've been talking about today around this sharing of information, communication, and collaboration really are essential for success.

I do want to caution just a little bit. People talk about complexity and they create a linkage between complexity and failure. It's more important to try to look at, first of all, the source of the problem. Complexity itself is not necessarily indicative of a problem. Sure, it's correlated, but ice-cream consumption is correlated with the murder rate just as a function of temperature -- when it gets hot, both things happen to increase. So complexity is also a measure of success and scale.

... The issue it comes down to for me is what Sandy said, which is that the word "trust," which is thrown in at the very end, turns out to be extremely expensive. That alignment of organization and trust is actually a really important notion.

What happens with trust is that you can put things behind a service interface. Everything that's behind a service interface has suddenly gotten a lot less complex, because you're not looking at all that stuff. So, the reduction of complexity into manageability is completely dependent on this concept of trust and building it.

Kobielus: ... A dashboard is so important when you are driving a vehicle, and that's what a consolidated view of KPIs and metrics provides. It's a dashboard in the BI sense, and that's what this is: a project intelligence dashboard for very complex projects, or mega-programs that are linked projects. In other words, SOA in all of its manifestations.

In an organization, you have to steer your enterprise in a different direction. You obviously need to bring together many projects and many teams across many business domains. They all need to have a common view of the company as a whole -- its operations, various stakeholders, their needs, and the responsibilities internally on various projects of various people. That's highly complex. So, it’s critical to have a dashboard that's not just a one-way conduit of metrics from the various projects and systems.

In the BI world, which I cover, most of the vendors now are going like crazy to implement more collaboration and work-flow and more social community-style computing capabilities into their environments. It's not just critical to have everybody on the same page in terms of KPIs, but to have a sideband of communication and coordination to make sure that the organization is continuing to manage collectively according to KPIs and objectives that they all ostensibly agree upon.

Hinchcliffe: ... The way the process works is that we come into a client with an end-to-end service. Most organizations -- and this is going to be true of Enterprise 2.0 or SOA -- are looking at solving a problem. There's some reason why they think that this is going to help, but they're often not sure.

We start with this strategy piece that looks at the opportunity and tries to identify that for them and helps them correct the business case to understand what the return on investment (ROI) is going to be. To do that, you really have to understand what the needs of the organization are. So, one of the first things we do is bring Michael's process in, and we try and get ground truths.

There are often a lot of unstated assumptions about how to apply technology to a business problem and what the outcome is going to be. Particularly with SOA, you have so many borders that are typically involved. It's the whole concept around Conway's Law that the architecture tends to reflect the structure of the organization, because those are the boundaries in which everything runs.

One of the ways that we can assure that we have ground truth is by applying this dispassionate measurement process upfront to understand what people's expectations are, what their needs are, and what their concerns are. It's much more than just a risk-management approach. It's a way to get strategic project intelligence in a way that hasn't been possible before. We're really excited about it.

A lot of uncertainty

My specialty has always been focusing on emerging technology. There is always a lot of uncertainty, because people don't know necessarily what it is. They don't know what to expect. They have to have a way of understanding what that is, and you have an array of issues, including the fact that there are people who aren't normally willing to admit that they don't know things.

But, here is a way to safely and succinctly, on a regular basis, surface those issues and deal with them before they begin to cause issues in the project. We then continue on through implementation, with regular assessments of the KPIs that can flag potential issues down the road. I think it's a valuable service. It's low impact, compared to a traditional interview process. This is something most organizations can afford to do on a regular basis.

Krigsman: ... I am so hesitant to use the term psychological, because it has so many connotations associated with it. But, the fact is that we spoke about perception earlier, and there has been a lot of discussion of trust and community and collaboration. All of these issues fundamentally relate to how people work together. These are the drivers of success, and especially the drivers of lack of success on projects of every kind.

It therefore follows that, if we want our projects to be governed well and to succeed, one way or another we have to touch and look at these issues. That’s precisely what we're doing with Asuret and it’s precisely the application that we have taken with Dion into Pragmatic Enterprise 2.0. You have to deal with these issues.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or  download a copy. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Upside case study report shows connections between BPM and security best practices

This guest post comes courtesy of David A. Kelly, principal analyst at Upside Research.

By David A. Kelly

Not only are today’s IT environments more complex than ever before, but the current economic climate is making it more difficult for IT organizations to easily and cost-effectively meet changing business requirements. What’s needed is a way for organizations to streamline business processes, increase efficiency, and empower business users -- rather than IT -- to be at the forefront of business-process change. In many cases, this is where a good business-process management (BPM) solution comes in.

As part of a project with Active Endpoints, Upside Research, Inc. recently interviewed a national government security organization that had a critical need to manage the security of files exchanged among users, screening out malware, malicious code, and viruses. [Disclosure: Active Endpoints is a sponsor of BriefingsDirect podcasts.]

While the organization had identified appropriate anti-virus and security software, it needed a solution that could automate and manage the actual process of shepherding unknown files through a battery of security screenings, reporting on results, managing the state, and raising exceptions when a file needed to be investigated further.

Specifically, the organization needed to find a way to automate file and information sharing securely across a wide range of mobile users and to streamline security compliance efforts and ensure consistency. After considering multiple commercial and open-source solutions, the organization selected ActiveVOS from Active Endpoints.

Both the prototype and final solution took only a month to complete. The production version was completed in December 2008 and rolled out in 2009. Now, when files are being transferred in and out of the organization's network, the file-inspection process fires off in the background and the ActiveVOS process management solution takes over.

Multiple business rules

The ActiveVOS BPM solution passes each file, as determined by multiple business rules, through the appropriate filters and, if required, sends it to people. Once the filtering is complete, the results are reported back to ActiveVOS, which then takes the appropriate action: sending an error message if the file fails, or an approval if it passes. When a file passes through all the necessary filters, it is authorized for transfer and stored permanently on the file-sharing system.

ActiveVOS uses business process execution language (BPEL) and web services interfaces to integrate seamlessly with multiple commercial antivirus, security, and anti-malware programs. Because of the standards-based aspect of the solution, everything can be wrapped in a web service. The program then uses BPEL to route files to the necessary web services, as determined by business rules, and manages the security filtering process.
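As a rough illustration of the orchestration pattern Upside describes -- not the agency's actual, ActiveVOS-generated BPEL, and with the scanner names and business rules invented for the example -- here is a short Python sketch of routing a file through rule-selected filters, approving it when every filter passes, and escalating anything that fails:

```python
# Hypothetical sketch of the file-screening orchestration described above.
# In the real deployment this logic lives in generated BPEL, and the
# filters are commercial antivirus/anti-malware tools wrapped as web
# services; the rules and scanners here are stand-ins for illustration.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ScanResult:
    scanner: str
    clean: bool

# Each "scanner" stands in for a call to a wrapped web service.
def antivirus_scan(path: str) -> ScanResult:
    return ScanResult("antivirus", clean=True)

def macro_scan(path: str) -> ScanResult:
    return ScanResult("macro-inspector", clean=not path.endswith(".docm"))

SCANNERS: Dict[str, Callable[[str], ScanResult]] = {
    "antivirus": antivirus_scan,
    "macro": macro_scan,
}

def select_filters(path: str) -> List[str]:
    """Business rules: decide which filters apply to which file types."""
    filters = ["antivirus"]
    if path.endswith((".doc", ".docm", ".xls")):
        filters.append("macro")
    return filters

def screen_file(path: str) -> bool:
    """Pass the file through each required filter; escalate on failure."""
    for name in select_filters(path):
        result = SCANNERS[name](path)
        if not result.clean:
            # In the described process, a blocked file raises an
            # exception path for an operator to investigate.
            print(f"BLOCKED {path}: failed {result.scanner}")
            return False
    print(f"APPROVED {path}: authorized for transfer")
    return True

screen_file("report.pdf")
screen_file("budget.docm")
```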

The resulting business benefits have already been significant, and the organization expects them to increase, as it expands the deployment footprint and use of the solution for automated news and information feeds.

Based on its interviews, Upside Research calculated that the organization saw an 80 percent time reduction for changing business processes for each security policy update. The solution has also increased visibility for operators and security auditors, enabling them to track documents being transferred in and out of the agency networks in real time. The solution also reduced resolution time for blocked files by up to 60 percent and eliminated costly script writing, which has been replaced by automatically generated BPEL code.

Many companies considering process automation solutions can learn from this government agency’s experience. Instead of going with an expensive, coding-heavy solution that would have taken more time to implement -- and despite having in-house experts -- the agency opted to try a new vendor and implement a solution that delivered flexibility and speed of implementation.

Too often, a company will continue to use a solution that may be comfortable, but is not optimal for a particular project. This is a good example of a company successfully breaking that habit.

The full report can be downloaded from the Active Endpoints web site.

This guest post comes courtesy of David A. Kelly, principal analyst at Upside Research.


Monday, November 30, 2009

The more Oracle says MySQL not worth much, the more its actions say otherwise

As the purgatory of Oracle's under-review bid to buy Sun Microsystems for $7.4 billion drags on, it's worth basking in the darn-near sublime predicament Oracle has woven for itself.

Oracle has uncharacteristically found itself maneuvered (by its own actions) into a rare hubristic place where it's:
  • Footing the bill for the publicity advancement of its quarry ... MySQL is more famous than ever, along with its low-cost and open attributes.
  • Watching the value of its larger quarry, Sun Microsystems, dwindle by the day as users flee the SPARC universe in search of greener (and leaner) binary pastures.
  • Giving open source middleware a boost in general too, as Oracle seems to be saying that MySQL is worth hundreds of millions of dollars (dead or alive); the equivalent of what it's losing by not spinning MySQL out of the total Sun package.
  • Both denigrating and revering the fine attributes of the awesome MySQL code and community, leaving the other database makers happy to let Oracle pay for and do their dirty work of keeping MySQL under control.
This last point takes the cake. IBM, Microsoft and Sybase really don't want MySQL to take over the world, err ... Web, any time soon, either. But they also want to coddle the developers who may begin with MySQL and then hand off to the IT operators who may be inclined, err ... seduced, to specify a commercial RDB ... theirs ... for the life of the app.

So it's a delicate dance to profess love for MySQL while setting the snare to eventually tie those new apps to the costly RDBs and associated Java middleware (and hardware, if you can). Let's not also forget the budding lust for all things appliance by certain larger vendors (Oracle included).

If Oracle, by its admission to the EU antitrust mandarins, thinks MySQL has little market value and is not a direct competitor to its heavy-duty Oracle RDB arsenal, then why doesn't it just drop MySQL, by vowing to spin it out or sell it? Then the Sun deal would get the big rubber stamp.

It's not because of what MySQL is worth now, but because of what it may become. Oracle wants to prune the potential of MySQL while not seeming to do anything of the sort.

The irony is that Oracle has advanced MySQL, lost money in the process, and helped its competitors -- all at the same time. When Oracle buys Sun and controls MySQL, the gift (other than to Microsoft SQL Server) keeps on giving, as the existential threat to RDBs is managed by Redwood Shores.

And we thought Larry Ellison wasn't overly charitable.