Wednesday, October 21, 2009

Global study: Hybrid model rules as cloud heats up, SaaS adoption blazing

“Cloud” is the game and “hybrid” is the name. A recent global study has encouraging news for cloud-computing enthusiasts, revealing a sharp uptick in the adoption, as well as consideration, of cloud computing. The same study also indicates that those who are adopting cloud aren’t going whole hog, but are taking a hybrid approach -- mixing external and internal clouds.

The study, commissioned by global IT consultancy Avanade, showed a surprising increase in interest in cloud computing, even compared with a similar study conducted in January of this year. In January, 54 percent of respondents said they had no plans to adopt cloud computing. By September, that percentage had shrunk to 37 percent.

At the same time, the percentage of companies planning or testing cloud computing more than tripled, from 3 percent of respondents to 10 percent.

What’s significant in the report is that less than 5 percent of companies are using an all-cloud model. The rest are relying on a hybrid approach, and report security concerns as the chief factor for being cautious.

Nine months ago, 61 percent of respondents indicated that they were using only internal IT systems and today, that number has dropped to 41 percent. At the same time, those using a combined approach on a global level have increased to 54 percent from 33 percent nine months earlier.

The report says it is not clear whether the hybrid model will lead to a pure-play adoption at some point.

SaaS is taking off

One aspect of cloud computing that’s finding wide adoption is software as a service (SaaS), with more than half of the respondents worldwide -- and 68 percent in the US -- reporting that they have adopted SaaS at some level. Despite extremely high satisfaction -- more than 90 percent -- reliability is still an issue. About 30 percent of respondents said they had lost more than a day of business due to a service outage.

Still, the reliability concerns haven’t dampened users’ enthusiasm for SaaS, and 62 percent of respondents reported plans to move into more SaaS within the next year. However, similar to their experience with cloud, users tend to deliver SaaS applications internally, rather than from a third-party provider.

On a global basis, those who deliver SaaS applications internally outnumber those who use a third party by a ratio of 2 to 1. In the US, that increases to 4 to 1. Also, those who do use SaaS often rely on multiple providers, with one-third using three or more providers. This leads the report to conclude that there is opportunity in the SaaS market.

Other conclusions from the report:
  • Cloud will continue to make significant inroads for the next year, although there won’t be a migration to a full cloud environment.

  • The gap is closing between companies with plans to adopt and those without. Avanade sees those curves intersecting in 2011 or 2012.

  • Despite the widespread adoption of cloud, there will be some applications that should remain on-premises.

  • SaaS adoption will continue to spread and is spreading faster than other technologies have in the past.

The study was conducted by Kelton Research and surveyed 500 C-level and IT executives worldwide.

BriefingsDirect contributor Carlton Vogt provided editorial assistance and research on this post.

Here's why Apple is doing so well -- it's the top half, stupid

I've been ruminating the past few days on why Apple is doing so well with its pricey high-end products and services during a recession. The answer came as I was reading today's New York Times column by Thomas Friedman, whom I deeply admire; I read anything and everything he puts out.

Friedman points out that the winners in today's fast-shifting U.S. job market are the ones demonstrating "entrepreneurship, innovation and creativity." He says, "They are the new untouchables," in contrast to other still highly educated but less creative types.

Friedman cites Harvard University labor expert Lawrence Katz, who explains in the column that the now disadvantaged are "those engineers and programmers working on more routine tasks and not actively engaged in developing new ideas or recombining existing technologies or thinking about what new customers want. ... They’ve been much more exposed to global competitors that make them easily substitutable.”

They are also more likely to be using personal computers with nine-year-old operating systems, with little choice but to take what their companies provide in terms of personal productivity IT. They are the 90 percent whom "good enough" IT has made merely as good as anyone anywhere.

In contrast, it's the "top half" of the labor pool -- and more specifically the apparent 10 percent among them focused on "entrepreneurship, innovation and creativity" -- who know that to succeed and win they need the very best computer and associated services, even if it costs $500 more. Nowadays there's no better way to gain an advantage in business and life than to have the best technology.

The people who are succeeding are buying Macs, iPhones, iPod Touches and Apple's services and applications. A flight to quality is usually spurred by disruption and uncertainty. It's not about brand religion or pretty graphics. It's about survival and success when the going gets tough. It works for me, it has to.

A chef doesn't buy the cheapest knives. A painter doesn't buy the cheapest brushes. A carpenter doesn't buy the cheapest hammer. And all the winners in the economy today -- those that have a say in what they use to do all the digital things so critical now to almost any knowledge- and services-based job -- need the best tools. And they will upgrade those tools just as fast as they can (hence the rapid adoption of Apple's Snow Leopard OS X upgrade in recent months).

So for all those millions of newly laid off workers who know that "entrepreneurship, innovation and creativity" is their only ticket to a new, fresh start -- those that no longer have an IT department to tell them what to do (at lowest cost) -- they seem to be making a new move to a Mac. I expect they won't soon go back, once they taste the fruits of heightened knowledge productivity.

Because when failure is not an option, you have to have the best tools, especially when the going gets tough. The sad part is that Apple does so well when so many are not.

Tuesday, October 20, 2009

SOA user survey defines latest ESB trends, middleware use patterns

Take the BriefingsDirect middleware/ESB survey now.

Forgive my harping on this, but I keep hearing about how powerful social media is for gathering insights from the IT communities and users. Yet I rarely see actual market research conducted via the social media milieu.

So now's the time to fully test the process. I'm hoping that you users and specifiers of enterprise software middleware, SOA infrastructure, integration middleware, and enterprise service buses (ESBs) will take 5 minutes and fill out my BriefingsDirect survey. We'll share the results via this blog in a few weeks.

We're seeking to uncover the latest trends in actual usage and perceptions around these SOA technologies -- both open source and commercial.

How middleware products -- like ESBs -- are used is not supposed to change rapidly. Enterprises typically choose and deploy integration software infrastructure slowly and deliberately, and they don't often change course without good reason.

But the last few years have proven an exception. Middleware products and brands have shifted more rapidly than ever before. Vendors have consolidated, product lines have merged. Users have had to grapple with new and dynamic requirements.

Open source offerings have swiftly matured, and in many cases advanced capabilities beyond the commercial space. Interest in SOA is now shared with anticipation of cloud computing approaches and needs.

So how do enterprise IT leaders and planners view the middleware and SOA landscape after a period of adjustment -- including the roughest global recession in more than 60 years?

This brief survey, distributed by BriefingsDirect for Interarbor Solutions, is designed to gauge the latest perceptions and patterns of use and updated requirements for middleware products and capabilities. Please take a few moments and share your preferences on enterprise middleware software. Thank you.

Take the BriefingsDirect middleware/ESB survey now.

Monday, October 19, 2009

Speaking of SOA: Are services nouns or verbs?

This guest post comes courtesy of Jason Bloomberg, managing partner at ZapThink.

By Jason Bloomberg

ZapThink revels in stirring up controversy almost as much as we enjoy clarifying subtle concepts that give architects that rare "aha!" moment as they finally discern the solution to a particularly knotty design problem. Last month's "process isomorphism" ZapFlash, therefore, gave us a particular thrill: we received kudos from enterprise architects for streamlining the connections between Business Process Management (BPM) and Service-Oriented Architecture (SOA), while at the same time several industry pundits demurred, disagreeing with our premise that services should correspond one-to-one with tasks or subtasks in a process.

Maybe we got it wrong and inadvertently misled our following of architects? Or perhaps the pundits were off base, and somehow ZapThink saw clearly a best practice that remained obscure to other experts in the field?

Upon further consideration, the true answer lies somewhere in between these extremes. Now, we're not reconsidering the conclusions of the process isomorphism ZapFlash. Rather, further explanation and clarification is warranted.

As with any best practice, process isomorphism doesn't apply in every situation, and not every service should correspond to a process task or subtask. That being said, there is also a good chance that some of our esteemed fellow pundits might not be opining from a truly service-oriented perspective, as many of their comments hint at an object-oriented (OO) bias that may be too limiting in the SOA context.

In fact, understanding which services the process isomorphism pattern applies to, and how other services support such services, goes to the heart of how to think about services from a SOA perspective.

The object-oriented context for services

In the early days of web services, as various standards committee members tried to hash out how core standards should support the vision of SOA, SOAP, the standard for message transport, was an acronym for "Simple Object Access Protocol." The reasoning at the time was that services were interfaces to objects, and hence service operations should correspond to object methods, also known as remote procedures.

SOAP was nothing more than a simple, XML-based way of accessing those methods. Over time, however, people realized that taking this Remote Procedure Call (RPC) approach to service interfaces was too limiting: it led to tightly coupled, synchronous interactions that constrained the benefits such services could offer. Instead, the industry settled on document style as being the preferred interface style, which expects requests and responses to conform to schemas that are included in the service contracts by reference, where the underlying service logic is responsible for validating interactions against the relevant schemas.

Document style interfaces provide greater loose coupling than their RPC-style cousins because many changes to a service need not adversely impact existing service consumers, and furthermore, document style interfaces facilitate asynchronous interactions where a request need not correlate immediately with a response. In fact, the W3C eventually dropped the "Simple Object Access Protocol" definition of SOAP altogether, and now SOAP is just SOAP, instead of being an abbreviation of anything.
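The loose-coupling argument can be made concrete in a few lines. The following is an illustrative Python sketch, not from the original article; the service and field names are hypothetical stand-ins. Because a document-style consumer reads only the fields it knows from the response document, the service can evolve its schema with new optional fields without breaking existing consumers:

```python
# Hypothetical sketch of document-style loose coupling: the consumer reads
# only the fields it knows, so the service can add an optional field
# (here, "audit_id") without breaking existing consumers.

def old_consumer(response_doc):
    # Written against version 1 of the response document.
    return response_doc["policy_number"], response_doc["status"]

def service_v1(request_doc):
    return {"policy_number": request_doc["policy_number"], "status": "approved"}

def service_v2(request_doc):
    # Version 2 adds an optional field; with document-style messaging the
    # old consumer simply ignores it, rather than failing on a changed
    # method signature as it would in an RPC-style binding.
    doc = service_v1(request_doc)
    doc["audit_id"] = "A-001"
    return doc

request = {"policy_number": "12345"}
assert old_consumer(service_v1(request)) == ("12345", "approved")
assert old_consumer(service_v2(request)) == ("12345", "approved")  # still works
```

An RPC-style equivalent would have pinned the consumer to a fixed operation signature, so the same schema change would have required regenerating or recompiling every consumer.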

However, document style interfaces still allow for operations, only now they're optional rather than mandatory as is the case with RPC-style interfaces. The fact that operations are optional is a never-ending source of confusion for students in our Licensed ZapThink Architect course, perhaps because of the object-oriented pattern of thinking many of today's techies follow, often without realizing it.

How would you ever know what a service is supposed to do, the reasoning goes, if you don't call an operation on that service? The answer is straightforward: if a service has no operations, then what it's supposed to do is understood from the context of the service itself. For example, an insurance company may want a service that simply approves a pending insurance policy. If we have an approvePolicy Service, the consumer can simply request that service with the policy number of the policy it wants to approve.

Nouns vs. Verbs

The insurance policy example brings up a fundamental question. Which is the service: the insurance policy entity or the approve policy task? In other words, should services be nouns or verbs? It's possible to design services either way: as Entity Services, which predictably represent business entities (nouns), or as Task Services, which represent specific actions that implement some step in a process (verbs). Which approach is better?

If you look at the question of whether services should be nouns or verbs from the OO perspective, then services are little more than interfaces to objects, and hence it's best to think of services as nouns and their operations as the verbs. For example, following the OO approach, we might have an insurance policy object with several operations, including one that approves the policy, as the following pseudocode illustrates:

myPolicy = new Policy();
...
successOrFailure = myPolicy.approve();

The first statement above instantiates a particular policy, while the second one approves it, and returns either success or failure.

Now, it is certainly possible to create a Policy Service as an Entity Service that has an approve operation that works more or less like the example above, with one fundamental difference: because services are fundamentally stateless, you don't instantiate them. Here, then, is pseudocode that represents how an Entity Service would tackle the same functionality:

request to create new policy, specifying create policy operation --> Policy Service --> response with policy number 12345
request to approve policy 12345, specifying approve policy operation --> Policy Service --> response with success or failure

Note that we're representing service interactions as input and output messages that contain documents, where in this case, the input documents specify operations. In this example, there is no object in the OO sense representing policy 12345 and maintaining the state information that indicates whether or not that particular policy is approved or not.

Instead, the underlying service implementation maintains the state information. There is only the one Policy Service, and it accepts requests in the form of XML documents and returns responses, also in the form of XML documents. If a request calls the create policy operation, then the Policy Service knows to create the policy, while a request that specifies the approve policy operation follows the same pattern.

Note that the fact that the Policy Service has a document style interface gives us two advantages: First, we can make certain changes to the service like adding new operations without adversely impacting existing consumers, and second, its stateless nature enables asynchronous interactions, where instead of returning success or failure of the approve request, perhaps, the service returns a simple acknowledgment of the request (or perhaps no response at all), and then notifies the consumer at some point in the future that the policy has been approved, either through a one-way notification event or possibly as a response to a further query.
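The Entity Service pattern above can be sketched in runnable form. This is an illustrative Python sketch under assumed names, not code from the article: a single stateless Policy Service dispatches on the operation named inside each request document, with state held in a backing store behind the service rather than in per-policy object instances:

```python
# Illustrative sketch of a stateless Entity Service: one Policy Service,
# document-style requests that name an operation, and state kept in a
# backing store (standing in for the underlying systems of record).

_policy_store = {}        # policy number -> state; no per-policy objects
_next_number = [12345]    # mutable counter for issuing policy numbers

def policy_service(request_doc):
    op = request_doc["operation"]
    if op == "create":
        number = str(_next_number[0])
        _next_number[0] += 1
        _policy_store[number] = {"approved": False}
        return {"policy_number": number}
    if op == "approve":
        number = request_doc["policy_number"]
        if number in _policy_store:
            _policy_store[number]["approved"] = True
            return {"result": "success"}
        return {"result": "failure"}
    return {"result": "unknown operation"}

created = policy_service({"operation": "create"})
approved = policy_service({"operation": "approve",
                           "policy_number": created["policy_number"]})
assert approved["result"] == "success"
```

Note how the two requests mirror the pseudocode above: the same one service handles both, and nothing is instantiated on the consumer's side.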

Task services as verbs

While there is a significant role for Entity Services in SOA, it is important to break free from OO-centric thinking and consider other types of services as well that serve other purposes. In fact, there is another way of offering the same functionality as the Entity Service above where the Services represent verbs rather than nouns, what we call Task Services. Here is the pseudocode for this situation:

request to create new policy --> createNewPolicy Service --> response with policy number 12345
request to approve policy 12345 --> approvePolicy Service --> response with success or failure

In this example, neither Task Service has any operations; rather, the functionality of each Service is understood from the context of the Service. After all, what would an approvePolicy Service do but approve policies? If you read the process isomorphism ZapFlash, the benefits of delivering capabilities as Task Services are clear. If you design each Task Service to represent tasks or subtasks in business processes, then it's possible to build a service-oriented business application (SOBA) that is isomorphic to the process it implements.
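To make the isomorphism tangible, here is a hypothetical Python sketch (names invented for illustration): each Task Service is a verb with no operations, and the SOBA is simply the ordered composition of the tasks, one call per step of the business process:

```python
# Hypothetical sketch: Task Services as verbs. Each service has no
# operations -- its purpose is understood from its name -- and the SOBA
# composes them in the same order as the business process it implements.

def create_new_policy_service(request_doc):
    # Stand-in for the real createNewPolicy Service.
    return {"policy_number": "12345"}

def approve_policy_service(request_doc):
    # Stand-in for the real approvePolicy Service.
    return {"policy_number": request_doc["policy_number"], "result": "success"}

def issue_policy_soba(request_doc):
    # One call per task: the composition is isomorphic to the process.
    created = create_new_policy_service(request_doc)
    return approve_policy_service({"policy_number": created["policy_number"]})

assert issue_policy_soba({})["result"] == "success"
```

Because each function maps one-to-one to a process task, changing the business process means reordering or swapping calls here, not rewriting service internals.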

Combining entity and task services

A casual reading of the process isomorphism ZapFlash might lead you to think we were suggesting that all services should be Task Services. However, in spite of the fact that architects with OO backgrounds often rely too heavily on Entity Services, such services do play a critical role in most SOA implementations.

Remember that in the enterprise context, services expose existing, legacy capabilities and data that are typically scattered across different applications and data stores, limiting the enterprise's agility and leading to high integration maintenance costs, poor data quality, reduced customer value, and other ills all too familiar to anybody working within a large organization's IT department. SOA provides best practices for addressing such issues by abstracting such legacy capabilities in order to support flexible business processes.

Both Entity and Task Services help architects connect the dots between legacy capabilities on the one hand, and flexible process requirements on the other, as the figure below illustrates:

Process, task, and entity service layers

In the figure above, the bottom row contains Entity Services, which directly abstract underlying legacy capabilities. Above the Entity Services lie the Task Services, which may actually be abstractions of individual operations belonging to underlying Entity Services. The top layer contains Process Services, which are typically compositions of Task Services. In other words, Process Services are interfaces to SOBAs, and when those SOBAs are compositions of properly designed Task Services, they will exhibit process isomorphism.

The essential question for the architect is which capabilities to abstract in which service layer. Take for example the Address Change Task Service. Changing addresses is a common example of a particularly challenging task in many large organizations, because address information is typically maintained by different applications and data stores in a haphazard, inconsistent manner. To make matters worse, there may be addresses associated with customers, policies, or other business entities.

When architecting the Customer Entity Service, the core design principle is to pull together the various instances of customer-related information and functionality across the as-is legacy environment into a single, consolidated representation. Such a Service will likely have an update address operation, and the Customer Entity Service's logic will encapsulate whatever individual queries and API calls are necessary to properly update customers' addresses across all relevant systems.

The Address Change Task Service, then, abstracts the Customer Entity Service's update address operation, as well as whatever other address change operations other Entity Services might have. The Service logic behind this Task Service understands, for example, that insured properties in policies have addresses and customers have addresses, and these addresses are related in a particular way, but are by no means equivalent.
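The layering just described can be sketched as follows. This is an illustrative Python sketch with invented names, not the article's design: the Task Service fans one address change out to the update-address operations of the relevant Entity Services, each of which hides its own legacy queries and API calls:

```python
# Illustrative sketch of the Task/Entity layering: the Address Change Task
# Service delegates to the update-address operations of two hypothetical
# Entity Services, each of which encapsulates its own legacy systems.

def customer_entity_service(request_doc):
    # Would encapsulate the queries and API calls that update the
    # customer's address across all relevant systems of record.
    return {"entity": "customer", "result": "updated"}

def policy_entity_service(request_doc):
    # Would update addresses of insured properties on the customer's
    # policies -- related to the customer address, but not equivalent.
    return {"entity": "policy", "result": "updated"}

def address_change_task_service(request_doc):
    # The Task Service knows how the addresses relate and fans the change
    # out to each relevant Entity Service.
    results = [
        customer_entity_service({"operation": "update address",
                                 "customer_id": request_doc["customer_id"],
                                 "address": request_doc["address"]}),
        policy_entity_service({"operation": "update insured address",
                               "customer_id": request_doc["customer_id"],
                               "address": request_doc["address"]}),
    ]
    return {"updated": [r["entity"] for r in results]}

outcome = address_change_task_service({"customer_id": "C-9",
                                       "address": "1 Main St"})
assert outcome["updated"] == ["customer", "policy"]
```

The point of the sketch is the division of labor: cross-system consolidation lives in the Entity Services, while cross-entity relationship logic lives in the Task Service.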

The ZapThink take

As is usually the case, architects have several options at their disposal, and knowing which option is appropriate often depends on the business problem, an example of the "right tool for the job" principle. If the business problem is process-centric, say, a need to streamline or optimize the policy issuance process, then implementing SOBAs as compositions of Task Services will facilitate process flexibility.

In other cases, the business problem is more information-centric than process-centric, for example, putting consolidated customer information on a call center rep's screen. In such instances the architect's focus may be on an Entity Service, because the rep is dealing with a particular customer and must be able to interact with that customer in a flexible way.

The big picture of the SOA architect's challenge, of course, is delivering agility in the face of heterogeneity. On the one hand, the IT shop contains a patchwork of legacy resources, and on the other hand, the business requires increasingly agile processes. Understanding which capabilities belong in Entity Services and which belong in Task Services is a critical part of the best practice approach to SOA.

This guest post comes courtesy of Jason Bloomberg, managing partner at ZapThink.


SOA and EA Training, Certification,
and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at

Friday, October 16, 2009

What's on your watch list? Forrester identifies 15 key technologies for enterprise architects

Riding the right -- or wrong -- technology wave can help -- or really, really hurt -- your business. Moving at the right time can be the critical factor between the two outcomes.

Yet new technologies come down the pike at alarming speed. Deciding which will fizzle and which will sizzle -- and when -- can be a daunting and ongoing task. What’s an enterprise architect to do?

Forrester Research has tried to sort things out with a new report, “The Top 15 Technology Trends EA Should Watch.” And, if even limiting the selection to 15 sounds like a lot to keep your eye on, Forrester has grouped them into five major “themes,” and has ranked the technologies by their impact, newness and complexity.

Calling “impact” the most important criterion, the report says this considers whether the technology will deliver new business capabilities or allow IT to improve business performance.

“Newness” comes in second because it’s likely that enterprises will have to gear up to learn new processes and the processes themselves are prone to rapid evolution. “Complexity” places other demands on the business, requiring more time to learn operations that are more complex than others.

The five themes identified by Forrester, along with their associated technologies, are:
  • Social computing in and around the enterprise

    • Collaboration platforms become people-centric
    • Customer community platforms integrate with business apps
    • Telepresence gains widespread use

  • Process-centric data and intelligence

  • Restructured IT services platforms

  • Agile and fit-to-purpose applications

    • Business rules processing moves to the mainstream
    • BPM will be Web 2.0-enabled
    • Policy-based SOA becomes predominant
    • Security will be data- and content-based

  • Mobile as the new desktop

    • Apps and business processes go mobile
    • Mobile networks and devices gain more power

The technologies range from real-time business intelligence (BI), with a very high impact, high newness, and high complexity, to data- and content-based security, which scored a medium in all three categories. I guess that'll keep my friend Jim Kobielus busy for some time.

Forrester limited the report to a three-year horizon for two reasons. First, it represents the planning horizon for most firms and, second, any technology that won’t have an effect in less than three years may be interesting, but it’s not actionable.

The report also says that we're entering a new phase of technology innovation. This analysis is based on Forrester’s finding that technology change goes through two waves. The first involves innovation and growth. This features a rapid evolution of the technology and rapid uptake by businesses. The second phase is refinement and redesign, in which technologies are only incrementally improved.

I hear a lot these days about "inflection points" in the IT market. I hear folks point to the hockey-stick growth effect coming for netbooks/thin clients/desktop virtualization/Windows 7. I like to add smartphones and Android phones to that category too.

And even if the cloud is a slow burn, rather than a hockey stick, the importance of business processes supported by services supported by all the old and new suspects is huge. I call the ability to refine and adapt business processes the big productivity maker of the next decade -- supported by IT as services.

Perhaps the new Moore's Law is less about systems, and more about what people do with the services those systems enable. What do you think?

Incidentally, the full report is available for download from Forrester.

BriefingsDirect contributor Carlton Vogt provided editorial assistance and research on this post.

Thursday, October 15, 2009

Making the leap from virtualization to cloud computing: A roadmap and guide

Listen to the podcast. Find it on iTunes/iPod and View a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.

Get a free copy of Cloud for Dummies courtesy of Hewlett-Packard at

This latest BriefingsDirect podcast discussion focuses on enterprise IT architects making the leap from virtualization to cloud computing.

How should IT leaders scale virtualized environments so that they can be managed for elasticity payoffs? What should be taking place in virtualized environments now to get them ready for cloud efficiencies and capabilities later?

And how do service-oriented architecture (SOA), governance, and adaptive infrastructure approaches relate to this progression, or road map, from tactical virtualization to powerful and strategic cloud computing outcomes?

Here to help hammer out a typical road map for how to move from virtualization-enabled server, storage, and network utilization benefits to the larger class of cloud computing agility and efficiency values, we are joined by two thought leaders from HP: Rebecca Lawson, director of Worldwide Cloud Marketing, and Bob Meyer, the worldwide virtualization lead in HP’s Technology Solutions Group.

The discussion is moderated by me, BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Lawson: We're seeing an acceleration of our customers to start to get their infrastructure in order -- to get it virtualized, standardized, and automated -- because they want to make the leap from being a technology provider to a service provider.

Many of our customers who are running an IT shop, whether it’s enterprise or small and mid-size, are starting to realize -- thanks to the cloud -- that they have to be service-centric in their orientation. That means they ultimately have to get to a place, where not only is their infrastructure available as a service, but all of their applications and their offerings are going in that direction as well.

Meyer: A couple of years ago, people were talking about virtualization. The focus was all on the server and hypervisor. The real positive trend now is to focus on the service.

How do I take this infrastructure -- my servers, my storage, and my network -- and make sure that the plumbing is right and the connectivity is right between them to be agile enough to support the business? How do I manage this in a holistic manner, so that I don’t have multiple management tools or disconnected pools of data?

What’s really positive is the top-down service perspective that says virtualization is great, but the end point is the service. On top of that virtualization, what do I need to do to take it to the next level? And, for many people now, that next level they are looking at is the cloud, because that is the services perspective.

Lawson: A lot of people are trying to make a link between virtualization and cloud computing. We think there is a link, but it’s not just a straight-line progression. In cloud computing, everything is delivered as a service.

What's really useful about cloud services like those is that they're not necessarily used inside the enterprise, but what they are doing is they are causing IT to focus on the end-game. Very specifically, what are those business services that we need to have and that business owners need to use in order to move our company forward?

... We're learning lessons from the big cloud service providers on how to standardize, where to standardize, how to automate, and how to virtualize, and we're applying those lessons back into the enterprise IT shop.

Meyer: The cloud discussion is important, because it looks at the way that you consume and deliver services. It really does have broader implications to say that now as a service provider to the business, you have options.

Your option is not just that you buy all the infrastructure components. You plumb them together, monitor them, manage them, make sure they're compliant, and deliver them. It really opens up the conversation to ask, "What’s the most efficient way to deliver the mix of services I have?"

The end result really is that there will be some services that you build, manage, and manage the compliance on your own in the traditional way. Some of them might be outsourced to managed service providers. For some, you might source the infrastructure or the applications from a third-party provider.

... Then you start to understand the implications of shifting workloads, not losing specialty tools, and really getting to a point when you standardize. You could start to get to the point of managing a single infrastructure, understanding the costs better, and really be more effective at servicing and provisioning that. Standardizing has to happen in order to get there.

I'm not just talking about the server and hypervisor itself. You have to really look across your infrastructure, at the network, server, and storage, and get to that level of convergence. How do I get those things to work together when I have to provision a new service or provide a service?

... You're looking to source something for a service or you're looking to pull assets together. Everybody will have some combination of physical and virtual infrastructure. So how do I take action when I need a compute resource, be it physical or virtual?

Automation makes the transition possible

How do I know what’s available? How do I know how to provision it? How do I know how to de-provision it? How do I see whether it’s in compliance? All those things really only come through automation. From a bottom-up perspective, we look at the converged infrastructure, the automation capabilities, and the ability to standardize across that.

... When it’s gone beyond a server and hypervisor approach, and they've looked at the bigger picture, where the costs are actually being saved and pushed -- then the light goes on, and they say, "Okay, there is more to it than just virtualization and the server." You really do have to look, from an infrastructure perspective, at how you manage it, using holistic management, and how you connect them together.

Hopefully, at HP we can help make that progression faster, because we’ve worked with so many companies through this progression. But really it takes moving beyond the hypervisor approach, understanding what it needs to do in the context of the service, and then looking at the bigger picture.

Lawson: ... Most IT organizations want to be aware and help govern what actually gets consumed. That’s hard to do, because it’s easy to have rogue activity going on. It’s easy to have app developers, testers, or even business people go out and just start using cloud services.

... [But] if IT is willing and able to step back and provide a catalog of all services that the business can access, that catalog might include some cloud services. We try to encourage our customers to use the tools, techniques, and the approach that says, "Let’s embrace all these different kinds of services, understand what they are, and help our lines of business and our constituents make the right choice, so that they're using services that are secure, governed, that perform to their expectations, and that don’t get them into trouble."

We encourage our customers to start immediately working on a service catalog. Because when you have a service catalog, you're forced into the right cultural and political behaviors that allow IT and lines of business to kind of sync up, because you sync up around what’s in the catalog.

There's no excuse not to do that these days, because the tools and technologies exist to allow you to do that. At HP, we’ve been doing that for many years. It’s not really brand new stuff. It’s new to a lot of organizations that haven’t used it.

You can start to control, manage, and measure across that hybrid ecosystem with standard IT management tools. ... The organizing principle is the technology-enabled service. Then you can be consistent. You can say, "This external email service that we're using is really performing well. Maybe we should look at some other productivity services from that same vendor." You can start to make good decisions based on quantitative information about performance, availability, and security.
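The service-catalog idea above can be sketched in a few lines: each entry records where a service is sourced (internal or external cloud) along with a measured metric, so IT and the lines of business can compare options quantitatively. The entries, field names, and SLA threshold below are made up for the example.

```python
# Minimal service catalog: every technology-enabled service, internal or
# cloud-sourced, is listed with measured availability, so sourcing
# decisions can be made from data rather than guesswork.

catalog = [
    {"service": "email",   "source": "external-cloud", "availability": 99.95},
    {"service": "payroll", "source": "internal",       "availability": 99.90},
    {"service": "crm",     "source": "external-cloud", "availability": 99.50},
]

def meets_sla(entry, minimum=99.9):
    """Flag services that perform to expectations."""
    return entry["availability"] >= minimum

approved = [e["service"] for e in catalog if meets_sla(e)]
print(approved)   # ['email', 'payroll']
```

Even a catalog this simple forces the sync-up the speaker describes: IT and the business argue about what belongs in the list and what "performing well" means, which is exactly the cultural alignment being recommended.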
Listen to the podcast. Find it on iTunes/iPod and View a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.

Get a free copy of Cloud for Dummies courtesy of Hewlett-Packard at

Oracle's Fusion Apps finally come out from behind the OpenWorld curtain

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

Like almost every attendee at the just-concluded Oracle OpenWorld, we felt the suspense over when Oracle would finally lift the wraps on Fusion Apps. Staying cool and minimizing our carbon footprint, we weren’t physically at Moscone, but instead watched the webcasts and monitored the Twitter stream from our home office.

The anticipation over Fusion Apps was high, but there was hardly any suspense, as a good cross-section of the Twitterati were analysts, reference customers, consultants, or other business partners who had had their NDA sneak peeks (we had ours back in June) but had to keep our lips sealed until last night.

There was also plenty of impatience for Oracle to finally get on with a message that was being drowned out by its sudden obsession with hardware. Ellison spent most of his keynote time pumping up its Exadata cache memory database storage appliance and issuing a $10 million challenge to IBM that it can’t match Oracle’s database benchmarks on Sun.

Yup, if the Sun acquisition goes through, Oracle’s no longer strictly a software company, and although the Twitterati counted its share of big-iron groupies, the predominant mood was that hardware was a distraction.

“This conference has been hardware heavy from the start. Odd for a software conference,” tweeted Forrester analyst Paul Hamerman. “90 minutes into the keynote, nothing yet on Fusion apps.”

“Larry clearly stalling with all this compression mumbo jumbo,” “Larry please hurry up and tell the world about Fusion Apps, fed up of saying YES it does exist to your skeptics,” and so on read the Twitter stream.

There was fear that Oracle would simply tease us in a manner akin to Jon Stewart’s “we’ll have to leave it there” dig at CNN: “I am afraid that Larry soon will tell that as time has run out he will tell about Fusion applications in next OOW.” A rousing 20-minute speech from Calif. Gov. Arnold Schwarzenegger served as a welcome relief from Ellison’s newly found affection for big iron toys.

Ellison came back after the Governator pleaded with the audience to stick around awhile and drop some change around California as the state is broke. The break gave him the chance to drift over to Oracle Enterprise Manager, which at least got the conversation off hardware.

Ellison described some evolutionary enhancements through which Oracle can track your configurations with Enterprise Manager and automatically manage patching. As we’ve noted previously, Oracle has compelling solutions for all-Oracle environments, among them a declarative framework for developing apps and specifying what to monitor and auto-patch.

The main topic

But the spiel on Enterprise Manager provided a useful back door to the main topic, as Ellison showed how it could automate management of the next generation of Oracle apps. Ellison got the audience’s attention with the words, “We are code complete for all of this.”

Well, almost everything. Oracle has completed work on all modules except manufacturing.

Ellison then gave a demo that was quite similar to one we saw under NDA back in the summer. While ERP emerged with, and was designed for, client/server architectures, Fusion has a full Java EE and SOA architecture; it is built around Oracle Fusion Middleware 11g and uses Oracle BPEL Process Manager to run processes as orchestrations of services exposed from the Fusion Apps or other legacy applications. That makes the architecture of Fusion Apps clean and flexible.

But at this point, Oracle is not being any more specific about rollout other than to say it would happen sometime next year.

It uses SOA to loosely couple, rather than tightly integrate with other Fusion processes or processes exposed by existing back end applications, which should make Fusion apps more pliant and less prone to outage.

That allows workflows in Fusion to be dynamic and flexible. If an order in the supply chain is held up, the process can be dynamically changed without bringing down order fulfillment processes for orders that are working correctly. It also allows Oracle to embed business intelligence throughout the suite, so that you don’t have to leave the application to perform analytics.
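The supply-chain example can be made concrete with a toy model: each order runs its own process instance, so a stuck order's remaining flow can be rerouted without touching orders that are proceeding normally. This is a loose sketch of the orchestration idea, not Oracle BPEL Process Manager's actual API; all names here are invented for illustration.

```python
# Each order gets its own copy of the flow, so rerouting one held-up
# order leaves every other in-flight order untouched.

STANDARD_FLOW = ["validate", "reserve-stock", "ship", "invoice"]

class OrderProcess:
    def __init__(self, order_id):
        self.order_id = order_id
        self.steps = list(STANDARD_FLOW)   # private copy per instance
        self.completed = []

    def run_next(self):
        """Execute (here: just record) the next step in the flow."""
        self.completed.append(self.steps.pop(0))

    def reroute(self, after_step, new_steps):
        """Dynamically splice new steps into this order's remaining flow."""
        idx = self.steps.index(after_step)
        self.steps[idx + 1:idx + 1] = new_steps

stuck = OrderProcess("A-17")
healthy = OrderProcess("A-18")
stuck.reroute("validate", ["manual-review"])   # only A-17 is changed
print(stuck.steps)    # ['validate', 'manual-review', 'reserve-stock', 'ship', 'invoice']
print(healthy.steps)  # ['validate', 'reserve-stock', 'ship', 'invoice']
```

The design choice mirrors the loose coupling described above: because the flow is data owned by each process instance rather than logic compiled into the application, changing one order's path never brings down fulfillment for the rest.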

For instance, in an HR process used for locating the right person for a job, you can dig up an employee’s salary history, and instead of switching to a separate dashboard, you can retrieve and display the relevant pieces of information needed to see comparisons and make a decision.

Fusion’s SOA architecture also allows Oracle to abstract security and access control by relying on its separate, Fusion Middleware-based Identity Manager product. The same goes for communications, where instant messaging systems can be pulled in (we didn’t see any integration with wikis or other Web 2.0 social computing mechanisms, but we assume they can be integrated as services). It also applies to user interfaces, where you can use different rich Internet clients by taking advantage of Oracle’s ADF framework in JDeveloper.

Oracle concedes the obvious: Outside of the mid-market, there is no greenfield market for ERP, and therefore, Fusion Apps are intended to supplement what you already have, not necessarily replace it.

That includes Oracle’s existing applications, for which it currently promises at least another decade of support.

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

Wednesday, October 14, 2009

CEO interview: Workday’s Aneel Bhusri on advancing SaaS and cloud models for improved ERP

Listen to the podcast. Find it on iTunes/iPod and View a full transcript or download a copy. Learn more. Sponsor: Workday.

The latest BriefingsDirect podcast is an executive interview with software-as-a-service (SaaS) upstart Workday, a provider of human capital management (HCM), financial management, payroll, worker spend management, and benefits network services.

I had the pleasure to recently sit down with Workday’s co-founder and co-CEO, Aneel Bhusri, who is responsible for the company’s overall strategy and day-to-day operations.

Bhusri, who also helped bring PeopleSoft to huge success, explains how Workday is raising the bar on employee life-cycle productivity by lowering IT costs through the SaaS model for full enterprise resource planning (ERP).

More than that, Workday is also demonstrating what I consider a roadmap to the future advantages in cloud computing. The interview is conducted by me, BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Bhusri: We're very similar to PeopleSoft in some areas, and in other areas, quite different. We have the same culture -- focused on employees first and customers second. We focus on integrity. We focus on innovation. We brought that same culture to Workday, and our customers are very happy.

The pedigree of the team starts with my co-founder, Dave Duffield. He's an icon in the software industry. He's known for high integrity, innovation, and customer service. Many of us, like me, have been with him for 17 years now and we share that vision and that culture with him. We have set out to build the next great software company.

Much like PeopleSoft, we are taking advantage of a technology shift. PeopleSoft benefited from the shift from mainframe to client-server. When Workday started, people weren’t as focused on how big the shift was from client-server or on-premise computing to what is now called cloud computing or, back then, SaaS.

It now seems like it's even bigger than the shift from mainframe to client-server. This is a massive shift and you see it all across. That's the big difference. We are obviously leveraging a very different technology base.

The thing that Dave and I both took away from PeopleSoft is that you have to stay on top of innovation, and that's what Workday is doing. We are innovating where the large ERP vendors have stopped.

One of the reasons why the margins are so high for the [legacy ERP vendors] is that they are at the tail end of the technology life cycle. They are not really innovating.

... One of the reasons why the margins are so high for the [legacy ERP vendors] is that they are at the tail end of the technology life cycle. They are not really innovating. They are collecting maintenance payments. We all know that maintenance is very, very profitable. Well, when you start in a new technology, it's mostly investing. Usually, when the profitability rates get that high, it means that there is a new technology around the corner that will start cutting into those profitability rates.

... ERP is now 15 years old and just needs to be rewritten. The world has changed so dramatically since the original ERPs were written.

Back then, companies were thinking about being global. Now, they are global. People were not even thinking about the Internet, and now the Internet exists. That was before Sarbanes-Oxley and before the emergence of the iPhone and BlackBerry. All these things pile together to say that it's time to go back and rewrite core ERP. It's no longer valid in today’s world.

... These last nine months have been challenging for everyone. We, as a system-of-record vendor, saw fewer projects out there. At the same time, because of our new model and the cost benefits of the SaaS solutions, we were probably more relevant than we might have been without the economic downturn.

... As the Workday system has gotten more robust, we've really focused on the Fortune 1000 companies, our biggest being Flextronics. Those large, complex organizations with global requirements have a great opportunity for cost savings.

When you add it all together . . . it averages out consistently to about a 50 percent cost saving over a five-year period.

We had companies that were planning on implementing the traditional legacy systems, but could not afford it. A great example is Sony Pictures Entertainment. They already owned licenses to the SAP HR system, and yet, after careful consideration, determined they didn't have the budget to implement it.

... They will be live in five months, and they will get the benefit of about a 50 percent cost savings, if not more. They basically quoted it as one-half the time at one-third the cost.

... When you add it all together, really do it on an apples-to-apples basis, and look at what we have taken over for the customers, it averages out consistently to about a 50 percent cost saving over a five-year period.

... The data we have now is not theoretical. It's based on 60 of our 100-plus customers. With them in production, we have been able to go back and monitor it. The good news about our cost is that it's an all-in-one subscription cost, so we know exactly what the costs were for running the Workday system.

... [Many customers] decided that they were not going to take the major upgrade from one of those ERP vendors. A major upgrade is much like a new implementation and it's cost prohibitive.

With our focus on continuing innovation, they are not stuck in time. Every customer gets upgraded every four months to the most current version of the system. So as we are innovating, they are all taking the advantage of that innovation, whether it's in usability, functionality, or a new business model.

I like to think about it as building at web speed, and that's how Google, Amazon, and eBay think about it. New features come out very quickly. There are no old versions of Amazon and eBay that they have to worry about supporting. It's one system for all users. We're able to leverage the same principles that they do and bring out capabilities very quickly when a customer identifies something that's important to them.

If you can get your administrative applications, your non-mission critical applications . . . delivered from a vendor . . . why not focus your resources on the core enterprise apps you have?

... I think we are a lot like Salesforce. Dave and I have a very good relationship with Marc Benioff. They're focused on CRM, and we're focused on ERP. I think the big difference is that they are focused on becoming a platform vendor, and we are really very focused on staying as an application vendor.

... If you can get your administrative applications, your non-mission critical applications -- CRM, HR, payroll, and accounting -- delivered from a vendor, and you can manage them to service-level agreements (SLAs), why not focus your resources on the core enterprise apps you have?

More and more CIOs are getting that. It does free up data-center space. It also frees up human resources and IT to focus in on what's core to their business. HR and accounting don't have to be specialized in running that system. They have to know HR and accounting, but they don't have to be specialized in running those systems.
Listen to the podcast. Find it on iTunes/iPod and View a full transcript or download a copy. Learn more. Sponsor: Workday.