Wednesday, January 28, 2009

Visibility and control over API use is crucial as enterprises ramp to SaaS and cloud models

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: Sonoa Systems.

Read a full transcript of the discussion.

As established enterprise IT expectations meet up with cutting-edge cloud delivery models, there's a clear need for additional trust and maturity before enterprises adopt cloud-based services more broadly. Meeting enterprise IT expectations for visibility, control, and security in software as a service (SaaS) and cloud-based application delivery requires tools that let providers manage how the applications are used and understand the usage patterns behind that use.

This podcast examines how one SaaS provider, Innotas, has developed a more mature view into services operations and application programming interfaces (APIs), and how it can extend the benefits of that visibility to its customers. We'll hear how Innotas, an on-demand project portfolio management (PPM) service, derives more analytics from network activity and thereby builds mounting confidence in how its services are performing.

To better understand how Innotas manages its services through service level agreement (SLA) monitoring, I recently interviewed Tim Madewell, vice president of operations at Innotas, as well as Chet Kapoor, CEO of Sonoa Systems.

Here are some excerpts:
Innotas is an on-demand PPM solution. We focus on IT organizations and provide software access via a standard Web browser for managing projects, as well as non-project work within an IT department. ... One of our differentiators was that being on-demand and multi-tenant from day one enabled us to be one of the early adopters in the SaaS world and in subscription-based software.

We have seen how the attitude around SaaS has matured and evolved. SaaS has become more standard and available, and as the technology has matured, especially around security, the acceptance level for SaaS has improved. One of the things that benefits us is our focus on IT. Typically, this type of change in acceptance for software starts within the IT organization itself.

To be a business application in a SaaS model today means that you have to step up and be enterprise class. We look at ourselves as an extension of all of our customers' internal IT and operations groups and we need to live up to those same standards. ... Once we get past the initial security challenges, folks are very interested and concerned about reliability and performance.

When [applications were] traditionally inside your four walls, there was a greater sense of control. As soon as you step into the cloud or go with any SaaS provider, part of the value proposition is that they control it and manage it for you, but you're giving up some control. Building that confidence and acceptance into the solution is important, and it ties back to being enterprise class.

Sonoa helped me identify problems, or potential problems, earlier. When I turned up the ServiceNet product, it decoupled the traffic of my Web users -- my end users, the traditional users -- from my back end and from my API.

That visibility gave me some insight into when my servers were getting hot or heating up. I was seeing a lot of activity and could start to differentiate whether that activity was generated through the front end or through the back end.

So, my immediate return was to give my operations team a solution and a tool that gives them better visibility and then to control some of that traffic on the back-end. ... With this visibility I'm able to put in some controls that will give me the ability to look at how I make more and better use of the capacity that I have today.
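
As an editorial aside, here is a minimal sketch of the kind of traffic visibility Madewell describes. This is not ServiceNet's code; the request fields and the "/api/" path convention are assumptions made purely for illustration.

```python
# A minimal sketch -- not ServiceNet itself -- of classifying traffic so that
# load spikes can be attributed to the front end (browser users) or the back
# end (API callers). The Request fields and the "/api/" path convention are
# assumptions made for illustration only.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Request:
    path: str        # e.g. "/app/projects" (UI) or "/api/v1/projects" (API)
    client_id: str   # hypothetical caller identifier

def classify(req: Request) -> str:
    """Label a request as back-end API traffic or front-end UI traffic."""
    return "api" if req.path.startswith("/api/") else "ui"

def tally(requests: list) -> Counter:
    """Count requests by source, so ops can see what is heating up the servers."""
    return Counter(classify(r) for r in requests)

sample = [
    Request("/api/v1/projects", "partner-42"),
    Request("/app/dashboard", "browser-user"),
    Request("/api/v1/tasks", "partner-42"),
]
print(tally(sample))  # Counter({'api': 2, 'ui': 1})
```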

You always start by wanting to see the needle, because you can't move the needle if you don't see it. ... I want to know who is using my service, what they are using it for, how long they are using it, things like that. You have to have visibility into the services you provide.

The next thing you say is, "Okay, now that I have visibility, I want to start putting in some security access control." ... And you want to start by saying, "I want to give priority access to priority customers." ... And, they want it to be available at a scale where all their customers are getting it.

We've been working with companies like Innotas to get them through this evolution. Some customers choose to get our technology in the form of appliances. Some of them do it in the form of software, as Tim has. And some of our customers are choosing to get our technology right in the cloud itself, where they do not have any data center whatsoever.

The easier we can make it for enterprises to access the information for their composite applications through APIs, the more successful companies like Innotas are, and there is more adoption. IT and enterprises end up saving money.

We're very familiar with the different user types in an application. You may have view-only users, standard users, or power users. We can take the same view on the back end with Web services. There are certainly different levels of users, or different levels of service you could provide for users, depending on their needs. ... Now, I've got the ability to look at offering some tiered services, or tailoring my back-end user types and then tying that to my revenue model.
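
That idea of tiering back-end access by user type could be sketched as follows, purely as an illustration with invented tier names and quota numbers, not anything Innotas or Sonoa actually ships.

```python
# A sketch of tying back-end service levels to user type, with invented tier
# names and quota numbers -- not anything Innotas or Sonoa actually ships.
TIER_QUOTAS = {       # allowed API requests per hour, per client
    "view_only": 100,
    "standard": 1_000,
    "power": 10_000,
}

usage = {}            # running request counts, keyed by client id

def allow(client_id: str, tier: str) -> bool:
    """Return True if the caller is still within its tier's hourly quota."""
    used = usage.get(client_id, 0)
    if used >= TIER_QUOTAS[tier]:
        return False  # over quota: throttle, or offer a higher (paid) tier
    usage[client_id] = used + 1
    return True

print(allow("partner-42", "standard"))  # True until 1,000 calls this hour
```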

[Enterprise] customers will write applications or custom applications, where they probably want to use Oracle or SAP inside the firewall and maybe have another custom application of some sort, Innotas or Salesforce.com or whatever -- outside. They want to write a composite application, a mashup, or whatever you decide to call it, and they want all these different services.

A critical need that we find is that customers start to get nervous. It's not so much with the Innotases of the world, because they are fairly secure. They run like an enterprise application, but it’s available in the cloud. It happens when you start using things like Amazon Elastic Compute Cloud (EC2), and people are starting to put custom applications there. ... They probably do it in a very hybrid model because I don’t think on-premise computing is going away.

What we're finding is that there is a need for a way to govern what goes on outside the enterprise. Govern could be a fairly heavy word, so let me be more specific. You want to have visibility into how many accounts you have at EC2, for example. ... They want to have some visibility into what is happening with the cloud. Then, as they get more visibility, they want to see whether they are paying extra for SLAs and how those SLAs are being mapped.

The second aspect of this is that it's probably a new revenue stream for Web 2.0 and SaaS companies, as well as enterprises. They've maximized or have worked very hard on their channels, whether user access or a browser-based channel. Now, they have an opportunity to go after a different set of folks who are trying to not just go off and use Innotas through a browser or Salesforce.com through a browser.

If you really think about the person who is doing a mashup, every consumer is probably going to be a provider at some point, and every provider is going to be a consumer at some point. ... [We] have been working on taking what Sonoa provides with the ServiceNet product and making it available as a service. We have some customers that are already going into production. It's something that we will start talking about in the very near future.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: Sonoa Systems.

Monday, January 26, 2009

BriefingsDirect analysts discuss Service Oriented Communications, debate how dead SOA really is

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Charter Sponsor: Active Endpoints. Additional underwriting by TIBCO Software.

Read a full transcript of the discussion.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Welcome to the latest BriefingsDirect Analyst Insights Edition, Vol. 36, a periodic discussion and dissection of software, services, Service Oriented Architecture (SOA) and compute cloud-related news and events with a panel of IT analysts.

In this episode, recorded Jan. 12, 2009, our guests examine what might keep SOA alive and vibrant -- the ability for the architectural approach to grow inclusive of service types like service-oriented communications (SOC).

We also visit the purported demise of large-scale SOA to calibrate the life span of SOA -- is it dead or alive?

Please join noted IT industry analysts and experts Todd Landry, vice president of NEC Sphere; Jim Kobielus, senior analyst at Forrester Research; Tony Baer, senior analyst at Ovum; Joe McKendrick, independent analyst and prolific blogger; Dave Linthicum, founder of Blue Mountain Labs; JP Morgenthal, senior analyst at Burton Group, and Anne Thomas Manes, vice president and research director at Burton Group.

Our discussion is hosted and moderated by BriefingsDirect's Dana Gardner.

Here are some excerpts:
Taking pulse of SOA ...

Manes: Certainly, lots of people have refuted my claim [that large-scale SOA is dead]. At the same time, I've had at least as many people, and probably more, tell me I am dead-on right. My goal with the blog post was to at least get the conversation going, and I think I managed to do that effectively.

I still believe that if you go before a funding board this year -- if you are an IT group and you are trying to get funding for some projects -- and you go forward with a proposal that says we need to do SOA, because SOA is good, it's going to get shot down. Instead, what you have to go forward with is very specific, value-add projects that say we need to do this, we need to do that, and we need to do the other.

You need to talk about what services you're going to provide. In the example of communications services, there's a really strong value proposition associated with creating communications services. Likewise, go forward with a request that says, "We need to build a billing service that replaces the 27 different billing capabilities we have in each of our product applications out there."

That’s a very strong, financially rich, good ROI type of proposal that’s going to win. But, it's not going to work, if you go forward and just say, "Oh, we need to go get an ESB. We need to go get some registry and repository technologies. We need to invest in all the SOA infrastructure. We need to do SOA just because SOA is what everybody is telling me we need to do."

Just talk about the services and talk about the practices that are going to help improve the architecture of your systems. Talk about doing application rationalization and talk about reducing the redundancy within your environment.

Talk about dismantling the 47 data warehouses you have that contain customer information, and create a set of data services instead that actually give you richer, cleaner, and more complete information about your customers. Those are the things that are going to win.

... One of my favorite things that came back from the blog post was the number of people who said, "Basically, we just really suck at doing architecture."

One of the primary reasons that a lot of SOA initiatives are failing is because people don’t actually do the architecture. Instead, what they do is service-oriented integration, as opposed to SOA. If you're truly doing architecture, then you're doing an analysis of your applications architecture, figuring out why you have so much extra garbage in your environment, and figuring out what you should actually start to get rid of.

... The folks who have a little more architectural maturity recognize the value of taking this opportunity, when lots and lots of projects are no longer going forward. They can say, "Well, now is a great time for us to start focusing on architecture and figure out how we can position ourselves to take advantage of the economy, when it does finally turn around."

Baer: I think what Anne is saying right now is that organizations that did get ahead of the curve with SOA, that thoughtfully began the architecture process and rationalized it, will go ahead, because there will be real economies at some point compared to traditional application development.

McKendrick: I've always said that the companies that have gravitated toward SOA are the companies that will probably do well anyway. Those are the companies with more visionary management and more tightly integrated approaches to business. Those are the companies we've seen in all the case studies over recent years that have gravitated toward SOA. Let's face it, if they didn't have SOA, they probably would have been doing okay anyway, because they're well-managed companies.

The companies that really could have used SOA -- the companies not likely to be adopting SOA, or not likely to be looking at SOA, as Anne and Tony discussed -- are the hunker-down companies, the companies that have fairly unsatisfactory architectures or no architectural approach at all.

Linthicum: There are companies out there that have some very good IT talent, and they can take SOA, WOA, or cloud computing, look at the business problems, make some very nice systems, and automate the business nicely.

However, the majority of people out there who are wrestling with architecture are ill-equipped to solve some of the issues. They have a tendency to focus on the wrong areas. Anne hit this in her blog as well. It was brilliant.

When they look at this as a big, systemic issue they're trying to solve, it just becomes too big, too complex. They then try to solve it with quick, tactical things that just don't have enough value. There are no free lunches with SOA, or with any kind of architectural approach, or with anything else we use to improve the business.

You're going to have to break things down to their functional primitives and build them up again. You're going to have to think long and hard about how your architecture relates and links back to the business and how that's going to work.

I wish there were something you could buy in a box or something you could download or some cloud you can connect to, but at the end of the day it’s the talent of the people who are doing the job. That’s where people have been falling down. Over and over again, in the last three years, we have identified this. I don’t think anybody has taken steps to improve it. In fact, I think it’s gotten worse.

Kobielus: We all know the real-world implementation problems with SOA, the way it’s been developed and presented and discussed in the industry. The core of it is services. As Anne indicated, services are the unit of governance that SOA begot.

We all now focus on services. Now, we’re moving into the world of cloud computing and you know what? A nebulous environment has gotten even more nebulous. The issues with governance, services, and the cloud -- everything is a service in the cloud. So, how do you govern everything? How do you federate public and private clouds? How do you control mashups and so forth? How do you deal with the issues like virtual machine sprawl?

The range of issues now being thrown into the big SOA hopper under the cloud paradigm is just growing, and the fatigue is going to grow, and the disillusionment is going to grow with the very concept of SOA.

Manes: My core recommendation is to think big and take small steps.

You need to do the planning, and your architecture team should be able to do that, without having to go get permission from your funding organization to do planning, because that’s what they’re supposed to be doing. But then, they have to identify quick, short, tactical projects that will actually deliver value.

That’s what they should do and are designed to do to improve the architecture as a whole. It can't be just, "Oh I have to integrate this system with that system." They really should be focusing on identifying projects that will, in fact, improve the architecture. In that way, you’ll be in a better position when things are over.

How service oriented communications has evolved ...


Landry: ... On any given day in a business, do people care about doing the mashup or do they care about having their business be more effective, especially in these times? We believe that people will continue to look for more efficiency in their IT infrastructure. They'll continue to look for how people can be more connected, not only internally but with their customers. At the end of the day, you're right. It’s really about how people get more interconnected with the business process.

... If you look at any implementation and then what happens in the business, the real connective tissue between all of these includes people. The decisions and actions that take place in a business on a day-to-day basis are highly dependent on these people being effective.

Therefore, the manner in which we can help them with their communications and help them collaborate becomes a critical factor in how the workflows can be more effective and more efficient. We've looked at that and said the more you can make communications into business applications, the more you can make communications a more natural part of an SOA.

... [We] had to communicate to the industry the concept of how communications integrates into frameworks in the IT infrastructure. SOA is one term still used out there to define an approach. When we built our communications platform, we opened up all its services in a manner that we believe fits very naturally into the concept of a SOA. Therefore, our communications platform is really more service oriented than a closed, proprietary, traditional PBX-oriented system.

... The idea of being able to click-to-call has been around for quite some time. With more recent technologies, mashing up directory listings or mashing up a call function inside a business application is much more achievable and can be done in a much easier manner than in the past.

Baer: The idea of being able to manage and integrate spoken communications may actually be a critical gap in compliance strategy. I could see that as being an incredible justification for trying to integrate voice communications. Another instance would be with any type of real-time supply chain or with trading.

Kobielus: I see SOC as very much an important extension of SOA or an application of SOA, where the service that you're trying to maximize, share, and use is the intelligence that’s in people’s heads -- the people in the organization, in your team. You have various ways in which you can get access to that knowledge and intelligence, one of which is by tapping into a common social networking environment.

In the consumer sphere, the intelligence you want to gain access to resides in mobile assets -- human beings on the run. Human beings have various devices and applications through which they can get access to all manner of content and through which they can get access to each other.

So, in a consumer world, a lot of the SOC value proposition is in how it supports social networking. Facebook-like environments provide an ever more service-oriented setting within which people can mash up not only their presence and profiles, but all of the content that human beings generate on the fly. Possibly, they can tag it on the fly as well, and that might be relevant to other people.

There is strong potential in marrying SOC with that consumer, Facebook-style paradigm of sharing everybody's user-generated content developed on the fly.

Linthicum: ... The fact of the matter is that people are just getting their arms around exactly what a service is and how you take multiple services and turn them into solutions. ... If you're going to take services like this, expose them as services, and make easier use of them ... then you have to create the integration yourself through very disparate mechanisms and things like that. People are always struggling, trying to figure out how to aggregate this [SOC] stuff into solutions.

Morgenthal: I'd been working with a number of companies who had warehouse issues, and we were basically normalizing those issues by instituting a new services architecture and layering that on top of that legacy system, so they could build their business processes.

One of the biggest issues was that they were still communicating exceptions that happened in the warehouse over limited devices -- scanners and text -- in a very noisy environment. Everyone agreed that the best communications tool in that environment was their cell phone, because it vibrated. Well, the BlackBerry now has vibration too. So, that's also a valid form of communication.

If you tie this, as a unified communications strategy, to the business process, it's very effective. And not only is it very effective ... We expect things in microseconds, so it raises the expectations of people in general. But still, I think overall productivity goes up tremendously, and we move much more effectively toward a real-time event architecture across communications, systems, and people. It's really fascinating to watch, and it's very effective.

Manes: When we're talking about communications services, you want to make sure that those services are very easy to access. With communications services, when you start looking inside PBXs, voice over IP, and those kinds of things, that’s arcane and completely out of the realm of normal development skills that you would get in a Web developer.

Now, we do have some nice capabilities like click to call, and those are set up as drop-in components that people can now use inside their Web applications. Wouldn’t it be nice, if we actually had a much more powerful communications service that a developer can use to communicate with a customer, communicate with a shop manager, or communicate with whatever at this point in the application?

They can call out to a communications service and specify, "Here is who I want to talk to. Here is the information I want to send. And, here is the method through which I want to send it." And then they can have the communications service completely take care of the whole process associated with making that work.

I can guarantee that a developer is going to choose that over, "Oh, I have to write all kinds of arcane code in order to figure out how to send an email or how to launch a phone call." So, building these services that simplify a very complex process is extremely valuable from a productivity perspective.
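
To make the idea concrete, a toy version of such a communications service might expose a single call like the sketch below. The channel names and handlers are placeholders I've invented, not any vendor's actual API.

```python
# A toy version of such a communications service: the developer names the
# recipient, the message, and the channel, and the service hides the plumbing.
# The channel names and handlers are invented placeholders, not any vendor's API.
def send(recipient: str, message: str, channel: str = "email") -> None:
    """Dispatch a message through the requested channel."""
    handlers = {
        "email": lambda: print(f"[email] to {recipient}: {message}"),
        "sms":   lambda: print(f"[sms] to {recipient}: {message}"),
        "call":  lambda: print(f"[call] dialing {recipient}: {message}"),
    }
    if channel not in handlers:
        raise ValueError(f"unsupported channel: {channel}")
    handlers[channel]()  # a real service would hand off to a mail/telephony gateway

# Called at the right point in a business application's flow:
send("shop-manager@example.com", "Order 1182 is ready for pickup", channel="sms")
```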

Landry: ... There's another piece of this that says these platforms are bringing together multiple forms of media, so that you can utilize text messaging, audio, or video communications. You can do screen-sharing data collaboration in a simpler and more consistent fashion, and you can utilize one set of services to do that.

Whether they're deployed as a cloud and the enterprise is using those services from within a cloud or whether they've made the decision to do them on premises, both are very viable and, in many cases, both are being done today.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Charter Sponsor: Active Endpoints. Additional underwriting by TIBCO Software.

Read a full transcript of the discussion.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Thursday, January 22, 2009

Case study: IT repositories help Wachovia manage change amid complex bank consolidation

Disclaimer: The views expressed in the following are not necessarily those of Wells Fargo & Co. or any of its subsidiaries or affiliates.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Read a full transcript of the case study discussion.

When large businesses need to change fast, their IT systems need to do more than keep up with change -- they need to manage, define, and secure it. IT repositories are now effectively orchestrating multiple enterprise systems of record that must quickly operate together as a result of massive mergers and acquisitions.

Using such repositories, or groups of repositories, IT and business assets can be quickly federated and integrated for business process alignment and consolidation. Furthermore, these processes can be managed centrally via policies and governance definitions, even across far-flung global operations.

To better understand the value and opportunity in using IT repositories to manage change in complex business environments, I recently discussed a case study at Wachovia, which is now merging with Wells Fargo. To help understand the role of repositories amid this merger, I spoke with Harry Karr and Hemesh Yadav, both IT architects at Wachovia.

Here are some excerpts:
A repository solution has more than one physical repository, and each one has certain specific information or a slice of the data. All together, it gives us a good enterprise solution for a repository and gives us a picture of what we have.

We have more distributed systems now. We have services being offered by a half dozen or a dozen different service containers. We have many different clients hitting those services. We have many more pieces to the puzzle than we had before, and they're all owned by different people, different groups, and different teams.

Keeping up with that is much harder than it used to be with a single monolithic type of application, as in knowing where the touch points are, what the integration needs are, and where the security mechanisms are applied. There are a lot of things you have to know between the applications.

If something isn't written down, you've lost it. It's not going to be there. What we need to do is make sure that we have a record of what's there, so that anybody in the bank can go back and look and say, "We have this at this point, and these are the touch points involved, this is the security, and these are the access requirements." Anything they need to know about those touch points can be known from that repository solution.

The hardest part is keeping track of what we have, especially in times of mergers and acquisitions, but also at any other time. When we are trying to add new functionality, the first thing you have to know is what you have in place. So, keeping that up to date, knowing what we have is probably the biggest challenge.

There's no value at all in simply putting information into a repository. The value comes when we get the information out, and in order to get it out, you have to be able to query it. Having it in with a consistent taxonomy and consistent metadata is the only way you can get the information back out again.
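
Here is a tiny illustration of why consistent metadata pays off at query time. The field names and entries are invented for the example, not Wachovia's actual schema.

```python
# A tiny illustration of consistent metadata enabling queries. The field names
# and entries are invented for the example, not Wachovia's actual schema.
assets = [
    {"name": "CustomerLookup", "type": "service", "owner": "payments",
     "touch_points": ["CRM", "CoreBanking"], "security": "SAML"},
    {"name": "LoanOrigination", "type": "application", "owner": "lending",
     "touch_points": ["CustomerLookup"], "security": "mutual TLS"},
]

def find(field: str, value: str) -> list:
    """Return every asset whose metadata field contains the given value."""
    matches = []
    for asset in assets:
        entry = asset.get(field)
        values = entry if isinstance(entry, list) else [entry]
        if value in values:
            matches.append(asset)
    return matches

# Who touches CustomerLookup? Anyone at the bank can ask the repository.
print([a["name"] for a in find("touch_points", "CustomerLookup")])
```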

In production and troubleshooting ... you need to know what changes have happened. What's going on with that application? What's changed since the last time it was running properly? Without all that tie-in from the different repositories, you lose track of what you have, and that tie-in helps at every stage of the lifecycle. ... Testing needs to match the business requirements. If those requirements are not in a repository, are they being handed over in a notebook somewhere? Where do they exist? A repository helps a great deal there.

It's important to look at the whole picture. They need to look at what's important between all the different repositories. You need to have some way of storing your business-process model. That includes business rules, services, information about your systems of record, information about the data, contracts, who's using what, requirements for change management, SLA management, problem management, organizational structure, and process flows.

All those different repositories need to have touch points. Mapping that out ahead of time will give you an idea of what to do with any one of those, as you put each one in place.

[The repository solution] is going to have a lot of benefits. If you can make the business case for governance of any sort, then the repository goes hand in hand with that governance -- being able to track what you are doing, your processes, everything involved. The repository is a key piece of the governance. I don't think that anybody would disagree that governance has a great business case behind it, and the repository is part of that governance model.

Everybody talks about alignment between IT and business. The repository is the key piece of that. In order to have some kind of alignment, you have to have visibility, and the repository gives you that visibility.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Disclaimer: The views expressed in the podcast are not necessarily those of Wells Fargo & Co. or any of its subsidiaries or affiliates.

Wednesday, January 21, 2009

Services consumers and developers must now mount pressure for cloud computing neutrality

Sure, most people instantly get the need for network neutrality, but what about cloud neutrality?

Just as we'd be loath to tolerate any one (often the only available) Internet provider qualitatively managing our traffic and packet use based on its singular business objectives, we should also be concerned about any cloud provider exerting too much influence or setting de facto standards early on that diminish the cloud services market as a whole.

Now, the Obama Administration has enough on its plate, so I'm not advocating any regulatory or commerce enforcement policies to define or advocate cloud neutrality. But I do think it's important to foster an open market and encourage early adopters -- especially developers and independent cloud services providers -- to vote mindfully with their participation (and dollars) to establish and nurture broad openness and interoperability practices among the burgeoning cloud entities on the Internet.

If an open Internet has been good for sustained productivity and innovation, which few refute, why wouldn't cloud services also benefit from an open market environment -- at least through a formative stage (or two)? Wouldn't what's good for the popularity of the pipes also be good for promoting the widest consumption of the water?

Let's still favor the advance of general productivity on the Internet over more narrow commercial interests, even as we enter the cloud services phase of the Internet, eh?

Shouldn't a network infrastructure often described as "public" -- hence the common icon of the Internet as a puffy cloud -- become the substrate for an intensely fertile marketplace, and not just a handy subsidy for any number of, albeit competing, roach motels? The best form of competition comes when the hotels compete but the barriers to entry and exit stay low over a long period of time. Choice is essential, not just among vendors but in how those vendors behave as a group.

The cloud services marketplace is not just a new Monopoly board in the sky, it's still the product of the World Wide Web. If you have to go through the cloud to reach the services, then the services themselves are a product of the cloud, and not the other way around.

Things in the nascent cloud services ecology are moving rapidly, so now's the time to set the proper perspective on what works best for the buyers and users of cloud services, as well as the commercial interests setting up shop along the Information Superhighway. Remember that metaphor for the Internet? I think we should think of a cloud superhighway in the same way. Now it's more than information, but it's just as or more important to the public good. There's a public interest in seeing this succeed for the highway travelers, which include big businesses, as well as those few building the toll booths.

There are some dramatic recent developments that point to how rapidly cloud things are shaping up:
  • IBM's Lotus brand is bringing a lot of what we know as Notes/Domino services, a longtime enterprise groupware leader, to cloud-based delivery. Think of it as a big nest of .nsfs in the ether (and that's data up there, folks!).
  • Engine Yard's Vertebra has extended cloud neutrality into its Ruby and Rails development and deployment solutions. Write anywhere, run anywhere, change anywhere, integrate anywhere ... repeat.
  • Sun Microsystems buys Q-Layer. Let's hope that Sun gets cloud "open" from the start this time, unlike the 12-year "Java will be open someday" saga (and keep those license fees coming).
There have been warnings about a potential and troubling lack of choice in cloud options, notably from Richard Stallman. And there have been major movements by vendors not known for their allegiance to openness first and profits later, including Apple and Microsoft, into the cloud model.

So even though things are moving fast and at the most impactful levels of the global IT business, there's very little being said and done about preserving the neutrality of the Internet economy for the cloud economy. And I know it's hard to actually define neutrality. But like pornography, I know it when I see it.

Better yet, I know non-neutrality when I see it. We should all be on the lookout for non-neutrality in the cloud ecologies, and seek and reward alternatives. Blog about these distinctions. Look to the decades-old Internet example for guidance. It really worked and keeps working.

That does not mean in any way outlawing good old fashioned capitalism in the cloud ecosystem. It means making savvy choices that favor data portability, and recognizing that APIs that carry over from one hosting provider to another are good market drivers, enticing more consumers who can exercise more choice. The pie needs to grow first; the market leaders can seek domination in some way later, when the playing field is established and perhaps somewhat level.

Enterprises, and small to medium-sized businesses especially, should advance their long-term interests as they examine and adopt cloud-based services, to make sure they are not trading short-term savings in a recession for long-term IT lock-in. Once you're in the roach motel, you can't get out. And they can raise the rent (maintenance fees) to just below the cost of a painful exit, and keep it there for a long time. You may be familiar with this IT supplier dynamic.

There is a better path, and we've seen it with the Web: a modest, market-driven level of mutually beneficial interoperability of services and applications; data portability in its deepest forms; and SLAs that clearly spell out the terms of engagement and what is acceptable in terms of services and data ownership.

These cloud terms of engagement will be tough and complex. We're in some uncharted territory here. Can you own a business process even if the cloud provider owns the constituent services? Yes, I believe you can, and should. Get it in writing, though.

So more than any regulations or broad policy dictates on the best practices for cloud computing, we need good licenses and a clear and understood framework for cloud ecology best practices that protects the users and developers, as well as the providers. The goal is to make strong enticements for all the participants in the ecology, not just a few or in a grossly inequitable way. We'll need escape clauses, too, just in case.

Indeed, the value and probity of cloud use licenses must be weighed against the total IT cost equation, including the cost of switching and the costs of integration. That is, if I get cloud services cheap, how much will that cost me in the long run? And does this become a better deal than the traditional on-premises, per-processor or per-application licensing models?

In short, we need the ability to calculate the cost-benefit analysis of modern IT that includes the new cloud computing options. And therefore we need to know the true costs of cloud computing -- including how open it really is -- to proceed. The more open, the less risk, and so the more overall adoption based on an understood cost-benefit projection.
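
As a back-of-the-envelope illustration of that calculation -- with made-up numbers, not real pricing -- the comparison might be sketched like this:

```python
# A back-of-the-envelope comparison with made-up numbers, not real pricing:
# total cost over a planning horizon, including one-time integration work and
# the eventual cost of switching away (the "roach motel" exit fee).
def total_cost(annual_fee, years, integration=0, switching_exit=0):
    """Sum recurring fees plus one-time integration and exit costs."""
    return annual_fee * years + integration + switching_exit

cloud = total_cost(annual_fee=60_000, years=5, integration=25_000, switching_exit=40_000)
on_prem = total_cost(annual_fee=45_000, years=5, integration=75_000)  # licenses + maintenance

print(f"cloud: ${cloud:,}   on-premises: ${on_prem:,}")  # compare before you commit
```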

Let's look at cloud services as hugely promising, perhaps the best alternative for IT resources and support for a number of application types and certain business use cases. But let's not get lulled into treating a cloud provider relationship any differently from any other business deal. Let's get the terms down, and vote well as consumers. It's in the best interest of the vendors, too; they just can't do this without us. Literally.

Let's leverage the fact that the Internet has set a powerful and essential precedent that upholds and protects an online market's open development as fundamentally more important than any one company's ability to stake out a claim and hoard all the gold dust. Open markets are the best way to allow the miners, prospectors, shovel sellers, and real estate interests to all grow and prosper. And openness will allow the cloud market to reach its full potential fast, through unfettered innovation from all quarters.

Like with the Web and Internet over the past 15 years, the power of choice and unfettered innovation and dynamism of sufficiently neutral cloud markets should be the best guide of how the cloud future shakes out productively. In this economy we really need a new and huge productivity boost from IT lest we all get pulled into the downward spiral.

Tuesday, January 20, 2009

Enterprises find easier ways to package and deliver applications and data to mobile devices

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Listen to related webinar. Sponsor: Kapow Technologies.

Read a full transcript of the discussion.

Bringing more enterprise data to the mobile tier has been a thorny problem for many years now. A logjam remains between developers and their ability to productively deliver enterprise applications and data to mobile devices, such as cell phones, PDAs, and smart phones like the Apple iPhone.

To develop applications that reach even a small number of major handset environments means big-time custom plumbing, from the various data sources, to the mixture of networks, to the choices on integration, to the various security needs, to the many user interfaces and mobile client operating systems. Managing all these variables requires a high degree of skill across many different skill sets. There are not many developers that fit this bill in your average enterprise.

But new and innovative ways are emerging to extract and make enterprise data ready to be accessed and consumed by mobile device users. Kapow Technologies, for example, is focusing on the Web browser on the mobile device to allow data to be much more efficiently delivered via mobile networks beyond the limited range of traditional enterprise applications.

To gain an in-depth look at how more enterprise applications and data can be packaged and delivered effectively to more users, I recently spoke to JP Finnell, CEO of Mobility Partners, a wireless mobility consulting firm; Stefan Andreasen, founder and chief technology officer at Kapow Technologies; and Ron Yu, head of marketing at Kapow.

Here are some excerpts:
Unlike conventional applications, mobile applications have a huge number of choices to juggle. There are choices about input and output, touch-screen versus QWERTY. ... You also have the choice of the device platform. That's also quite different from your traditional choice of development options.

What we see within the enterprise is that the IT organization is really buried in the complexity of legacy systems. First and foremost, how do they get real-time access to information that's locked in 20- or 30-year-old systems?

On the other hand, there is a tremendous amount of data that's locked in homegrown applications through Internet portals and applications that have been adopted and developed through the years, either by the IT organization itself or through mergers and acquisitions. When you're trying to integrate all these heterogeneous data sources and applications, it's almost impossible to conceive how you would develop a mobile application.

What we see is the line-of-business knowledge worker putting a lot of pressure on IT. IT tries to respond to this, but dealing with the old traditional methods of technical requirements, business cases and things like that, just doesn't lend itself to quick, agile, iterative, perpetual-beta types of mobile application development.

The reason we're having this discussion today is because Kapow customers have actually brought us into this market. Because of how we have innovatively solved these real-time, heterogeneous, unstructured data challenges, customers have come up with their own ideas of how they can develop mobile apps in real time.

Why is the need for mobile applications growing? It all started with the Internet and easy access to applications through the Web browser. Then we got laptops, and we could actually access these applications while on the road. The problem is the form factor of the laptop; opening it up at the airport and getting on the 'Net is quite cumbersome.

So, to improve agility for mobile workers, they're better off taking their mobile out of their pocket and seeing it right there. That's what's creating the need. The data that people want to look at is really what they're already looking at on their laptop. They just want to move it to a new medium that's more agile, handier, and they can get access to wherever they are, rather than only in the airport or in the lobby of the hotel.

[The traditional mobile application development methods are] incomplete. The approaches of these large platform vendors -- and I am a strategic partner with several of them -- aren't strong when it comes to agility, prototyping, and being able to accommodate this real-time, iterative application development approach. That's really where Kapow shines.

If you want a mobile application, if you want agility, you want it in the world of applications that you're already working with. If you're already opening your laptop and working with data, we give you that exact same experience on the mobile phone. ... It's about taking what you're already doing and doing it in a more agile and mobile way. That's what's very appealing. Business workers get their data and their applications their way on the mobile phone, and basically, it's making them more effective in what they're already doing.

What's unique with Kapow is that you can then go to the developers and say, "Hey, look at this. This is what I want in my mobile app -- on my mobile phone." And they can get the data from the world of the browser, turn it into standard application programming interfaces (APIs), and deliver it to any mobile device.

Handsets today are getting more and more browser enabled. So, of course, if you have a browser-enabled phone, it's very easy to do this. You can write just in XHTML, as you mentioned. But a lot of companies already have a mobile infrastructure platform of some kind. Because our product turns the applications into standard APIs, standard feeds, it works with any mobile platform and with the devices that platform supports. You basically get the best of both worlds.

We recently had a webinar, and we asked what are the biggest challenges that people have. The number one challenge that came out of it was standard access to data, and that's exactly the problem we solve. We allow you to very, very quickly -- almost as quickly as it would take to browse an application once -- turn an application to standard API. Then, you can take it from there to your mobile phone or your mobile applications.
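
To picture the general idea -- and this is only a generic sketch, not Kapow's product -- turning a page that is today reachable only through a browser into a small structured feed could look like this:

```python
# A generic sketch -- not Kapow's product -- of the underlying idea: take
# content that today is reachable only through a browser and expose it as a
# small structured feed. The URL and the <title> extraction are placeholders
# for whatever page data an application actually needs.
import json
from html.parser import HTMLParser
from urllib.request import urlopen

class TitleGrabber(HTMLParser):
    """Pull the <title> text out of an HTML page."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False
    def handle_data(self, data):
        if self.in_title:
            self.title += data

def page_as_feed(url: str) -> str:
    """Wrap an existing web page in a tiny JSON 'API' a mobile client can consume."""
    html = urlopen(url).read().decode("utf-8", errors="ignore")
    parser = TitleGrabber()
    parser.feed(html)
    return json.dumps({"source": url, "title": parser.title.strip()})

# print(page_as_feed("https://example.com"))
```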

We have an integrated development environment (IDE) that basically allows the IT architects to service enable anything with a Web interface, whether it's a homepage or an application. The power of that really is to bring the knowledge worker or line of business manager together with the IT person to actually develop the business and technical requirements in real-time.

This supports the perpetual-beta development of mobile applications, where you don't have to go through months and months of planning cycles, because we know that in a mobile world, once one, two, or three months have gone by, the business has changed.

Where are the mobile apps cropping up? Projects don't get funded unless there is a business case. The best business cases are those where there's a business process that's already been defined and that needs to be automated. Typically, those are field-based types of processes that we are seeing. So, I'd say, the field-force automation projects, utilities or direct sales agents, are the areas where I'm seeing the most investment today on a departmental level.

We see this is as enabling and empowering the IT organization to take control of their destiny today, as opposed to waiting for funding and cumbersome development and planning processes to be able to scope out a project and then to write code.

Perhaps we're not going to see mobile killer apps or killer mobile apps, but killer business processes that need to have a mobile element to them.

And there is something that I call "strategy emerging from experience." The best way to get adoption in your enterprise is to rapidly iterate at the departmental level, gain experience that way, create centralized governance or coordinative governance that captures the lessons from those, and then become more strategic.

What I am seeing in 2009 is a good experience space. Almost every enterprise today has at least one department that's doing something around mobile. One way to get that to be more strategic is to be more iterative with your approach.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Listen to related webinar. Sponsor: Kapow Technologies.

Monday, January 12, 2009

Workday builds out SaaS bellwether for human capital management services and costs controls

Responding to the need for agile compensation and incentives management in a tough global economy, Workday has delivered new versions of its innovative Human Capital Management (HCM) software-as-a-service (SaaS) solutions.

The new services offer richer costs and compensation management features, more business services like payroll, and also improved access to global process insights and analytics. These additions are designed to effectively and swiftly help guide employees through change and to improve business productivity and responsiveness.

Until now, talent management offerings have evolved as add-ons to legacy systems, creating new silos of information and an incomplete view of worker performance. Because Workday's on-demand business applications are built on a service oriented architecture (SOA), more coordinated services can be brought to the full human management equation.

Furthermore, by allowing for integration across the data from these services -- with centralized control even across global regions and disparate workforces -- a new element of business intelligence (BI) for human resources management becomes possible. [Disclosure: Workday is a sponsor of BriefingsDirect podcasts.]

I've not been alone in viewing Workday as a poster child for where SaaS business applications are headed. The user interface, built on Adobe technology, the deep use of SOA infrastructure approaches, and the philosophy that managing people well is core to almost any business process place Workday out in front of many business and IT trends.

But the new offerings also point up a burgeoning value of cloud computing. Easier but controlled access to centralized data provides the ability to apply analytics and advanced queries to more human resources and process data. Better data in, better results out. This helps coordinate the management of people more closely with the management of dynamic business goals. And it helps cut the lag between wanting to instill business change and then finding the path to informing and incentivizing employees with less waste and confusion.

And, of course, secure access to employee trends data provides a two-way street: derive insights through analysis of larger data sets and BI, and also gain the ability to hasten and promote business processes through fuller and quicker execution and enforcement of incentives and compensation management.

Both of these values are essential in an economy rife with mergers and acquisitions, consolidation, workforce re-allocations, shifting customer requirements, new sales strategies and the need to be fleet in shifting incentives to align with dynamic market conditions.

Consider, too, that a SaaS approach to HCM improves access to data sets that can align and automate the interplay between customer relationship management (CRM) insights (regardless of hosting models) and HCM change management. Isn't there a key relationship between what goes on with customers and what then needs to go on with employees? There sure is in the sales department. Yet bringing intelligence, analysis, and execution automation to these disparate functions has been manual, incomplete, difficult, and murky.

That should soon change. Included in Update 6 from Workday, now in Pleasanton, Calif., are Pay for Performance and Worker Spend Management improvements.

Worker Spend Management means that spending activity is automatically tied to workers and can be linked to projects or activities via tags, called Worktags, so managers and business leaders have a complete view of total worker cost -- including both compensation and the resources used to get work done.

Previously, tying spending activity and behavior to individual positions, people, workgroups, teams and business purpose has been impossible without expensive analytic solutions or manual spreadsheets.
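
A simple sketch of that total-worker-cost arithmetic, with invented amounts and tag names rather than Workday's actual data model, might look like this:

```python
# A sketch of the total-worker-cost arithmetic, with invented amounts and tag
# names rather than Workday's actual data model: compensation plus every spend
# item that carries a worker tag.
compensation = {"w-101": 95_000, "w-102": 120_000}

spend_items = [  # expense records tagged back to a worker and a project
    {"amount": 3_200, "worktags": {"worker": "w-101", "project": "migration"}},
    {"amount": 1_150, "worktags": {"worker": "w-101", "project": "migration"}},
    {"amount": 4_800, "worktags": {"worker": "w-102", "project": "rollout"}},
]

def total_worker_cost(worker_id: str) -> int:
    """Compensation plus all spend tagged to this worker."""
    tagged = sum(item["amount"] for item in spend_items
                 if item["worktags"].get("worker") == worker_id)
    return compensation.get(worker_id, 0) + tagged

print(total_worker_cost("w-101"))  # 99350
```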

Pay for Performance features tie performance reviews, team performance and company performance to compensation -- providing managers with recommended targets based on a broad range of configurable variables and business results. Decision and assessment support includes target versus actual reporting and actionable analytics, enabling organizations and managers to achieve actual performance-based rewards.

Branching out to add more business services to the HCM portfolio, Workday has also announced the general availability of Workday Payroll for the U.S. This offering delivers payroll processing coupled with the company's other solutions. Other payroll approaches -- internal or via outsourced payroll providers -- will continue to be supported and integrated with the offerings, said Workday.

The Workday system, however, leverages a global calculation engine and payroll framework, allowing Workday to centrally localize payroll for regions and countries without the redevelopment efforts associated with traditional on-premise systems.

Also included in Workday Update 6 is a significant expansion of Benefits Network, a set of pre-packaged integrations with popular benefits carriers. The Benefits Network includes connections to 49 providers, with plans for 21 more in the next month.

Workday is an on-demand financial management and human capital management solutions vendor. It was founded by David Duffield, best known as the co-founder and former chairman of PeopleSoft, which grew to be the world's second-largest application software company before being acquired by Oracle in 2005. Workday acquired Cape Clear Software in early 2008.

Whether or not you're implementing HCM solutions, I'd keep an eye on Workday's progress. They are moving the concepts of SaaS and cloud forward in a pragmatic way for such large businesses as Flextronics and Chiquita Brands. I'm especially keen to see how the BI and analytics values help to undergird the SOA and cloud innovations that Workday has built into its systems from the very beginning.

It will, in the age of debates about SOA's relevance, be fascinating to see whether on-demand providers can bank on SOA's efficiencies and agility, while leveraging cloud models to help customers gain better productivity while also cutting their internal delivery costs and promoting new abstractions of integration and BI.

Friday, January 9, 2009

Predicting vitality of 'SOA' completely misses the point -- legacy IT is dead

While the software market gnashes its teeth over how alive service oriented architecture (SOA) is, the much more important opportunity -- and perhaps unique in the history of IT -- is being overlooked.

There's never been a better time to kill off your legacy IT systems.

The next two years present the architects and strategists of enterprise IT with an unprecedented, and probably never-to-be-repeated, chance to refactor the way they do business. Microsoft CEO Steve Ballmer is mulling over the implications of a "reset," rather than a recession, and that is the correct way to look at this period.

Here's why: It's long been an uncomfortable reality that the means of computing for the past 20 years have piled up inside of data centers, expensive and outdated but too complex and costly to replace. Nobody has wanted to rip and replace because of the transition pain and uncertainty. The old guard has been presented with a lot of good excuses for simply bearing the load of aging systems' costs while still piling on more new systems.

We now have the unique option of lowering the tolerance for the ongoing cost of the old, while finding far fewer excuses for putting off the pain of change. In other words, now is the time for rip and replace. But it's more than that, it's time for executing on IT transformation writ large, of moving beyond physical systems and into the hybrid pool of myriad services ... with the end goal of finding the right combination of systems and services for each and every IT problem set.

And defining IT differently needs to be done, too, while we're at it. We need to stop thinking of IT as an attached appendage of each and every specific and isolated enterprise. Yep, 2000 fully operational and massive appendages for the Global 2000. All costly, hugely redundant, unique largely only in how complex and costly they are all on their own.

Instead, IT should be seen as a set of problems to be solved by the best means, and common means should be sought for a great deal of the load. Rather than an IT appendage at each enterprises' actual locations, solutions should be brought to the IT problems by any best means. It means catching up to reality. The reality is that the boundaries of IT are permeable, malleable and dynamic. Being caught in the old world of on-premises and monolithic systems for each application and data set is at odds with what is available efficiently as services -- internal, external and hybrid. Corporations have long sought these methods for procuring other business services, and IT is no different.

As Moore's Law and other modern IT productivity improvements have drastically cut the cost of newer IT solutions and technologies, they have made the costs of maintaining the legacy systems look all the higher. The personnel and maintenance arsenals these systems require simply to keep producing a static productivity benefit are, in a word, wasteful. We have been through a long period of spending a lot of money on integration capabilities to extend the value of aging systems, trying to conjure a multiplier effect in which one system integrated with one system equals the value of three. More often than not, the math does not return a high enough rate of return.

These effects have given enterprise IT the stigma of black box cost centers. The business strategists feel IT is an extortion racket. They fear more of the same, but now with less corporate revenues and therefore less IT budget to work with.

And that's where SOA's death comes in. If the ROI on the money spent to achieve SOA benefits is not overwhelming, or transparent, the logic goes, then the effort is moot, dead, not worth the pain. Actually, the opposite is true, and now more than ever.

The analogy of trying to change the wings of an airplane while keeping it flying is often used to describe the quandary of re-architecting your IT universe to an appreciable level while also meeting the SLAs and expanding the capacity and reliability of the older systems. In other words, the relentless pressure of keeping up with growth in the use of and demand on IT has handicapped the task of modernization. With a tight budget since 2001, many IT departments are far too busy putting out the fires of meeting demand to be much involved with resetting the way in which IT is conducted.

One of the chief pain points in avoiding the rip-and-replace projects necessary to move aggressively to modern IT services -- services for integration/interoperability, for infrastructure resources, for app dev, for data management, for software as a service (SaaS), for cloud, for hybrids, for business process modeling, for many aspects of the IT lifecycle -- is that IT has been too busy, too stretched. The move is consequently perceived as too risky ... too hard to sell to the bean counters. Conventional logic holds that this only gets worse in a recession.

Increasing demand for IT performance has been a convenient excuse to stay the legacy course and keep investments in new technology modest -- after all, we don't have the time to absorb SOA properly. Let's just get more Band-Aids. All of this, like a giant Ponzi scheme, works quite well when the economy and profits are growing. We now know those days are over for some considerable period of time.

And so here we may find the silver lining, for IT strategists at least, in the rapid and severe economic contraction now upon us. A confluence of variables should tip the scales to make IT transformation more practical and attainable than ever. But you may only have a year or two to capitalize on this opportunity.

I suggest that IT organizations look to a new breed of triage for their existing IT universe, and cull out and rip out as much as possible. Kill it. Seek the newer -- dare I say it -- vital SOA and SaaS/cloud alternatives. Examine how open source software and models make sense. Liberally deploy virtualization. Look at how virtual desktop infrastructure (VDI) makes sense for more workers. Examine how a netbook or mobile device can fill the needs of more users in more places in more ways.

And here's the key: Actually define the business processes you need to support first, then identify the resources that the users and customers need to act on these processes, and procure and integrate the IT services (however best available) to fulfill the model. Repeat. Reuse assets as appropriate. Govern the whole shebang centrally, with policy and automation as the goal. This is SOA. It is not dead.

At the same time, boot up the next generation data centers that can play in hybrid services models and at lean total costs -- and inject as much logic and data from the older systems as possible. Use the SaaS and cloud options available now to replace the older systems, and then decide the best medium-term means to produce or procure those services. Find a governance model that allows you to manage the services and resources regardless of where they reside or how they are procured. Rip out and kill the remaining legacy systems that cost you dearly and provide static or declining productivity. You can do this. Now is the time. Be brave.

Many companies will slowly go under, go bankrupt or plunge into a new ownership form that produces the ultimate reset for IT. It is the off button. Just turn the IT off and sell the hardware on eBay. Those firms -- or remaining valued elements of the old firms -- that emerge from the ashes of these drastic business restructurings will also get an abrupt IT reset. They may be able to begin anew with services as central, with SaaS as pervasive, and with cloud-based app dev the norm. But that's some tough sledding to get to more productive and agile IT.

Many more companies will see a period of reduced demand on the IT systems. If you lay off 15% of the workforce, there's bound to be more slack in the demand on applications (once you've provisioned the employees off properly). If your revenues decline by 30%, there will be more slack on the demand on applications and data servers. If you merge with another company, there's a lot of IT redundancy to remove.

You no longer have the excuse of being too busy and too capacity-strained to entertain those ultimately productivity-rich system resets and to embrace SOA. And you can attain the IT modernization benefits now at far lower capital costs, because you can bargain stridently and successfully with the integrators, hardware vendors, software providers, and all the rest. You can find qualified employees to hire. You can seek out more IT services on a per-user, per-month subscription model. There's no need to pay for the IT behind the 15% of employees you laid off until you rehire them, and then watch your IT costs become far more commensurate with your actual needs. You won't need the huge capital outlays first, and the productivity later.

Those companies that make this transition now will be powerfully more agile, with lower total IT costs and the ability to swiftly exploit new SOA, SaaS and cloud innovations over the coming years. You need to both survive the recession and position yourself to dominate afterward, in the brave new world. You'll need the right IT mentality and models to do it.

So meaningful change in IT over the next few years will come at a historically low real cost, with very high rates of return after the transition. The portion of IT spend devoted to capital outlays will decline, and you can bargain (perhaps even push out payments for 6 months or a year) on the professional services, integrators, outsourcers, and other transitional expenses. Other aspects of the global economy are facing a reset, as are governments, and IT should be a leader, not a laggard -- both as an example and as an enabler of the larger transitions.

Now is the time to rip and replace your thinking about IT, and then to replace your legacy systems and obsolete IT solution models with vital, efficient SOA processes and hybrid IT resource-acquisition models.

Actually, now to think of it, high-cost and lock-in legacy IT is what is really dead, finally. RIP.

Wednesday, January 7, 2009

Webinar: IT analysts delve into desktop as service/VDI cloud opportunities for enterprises and telcos

Read a full transcript of the webinar discussion. Listen and watch.

Many of us expect that delivery of the full PC desktop experience and applications as a service will grow in use and value. For many users inside of enterprises, at call centers, and in point-of-sale roles, requirements can be met with a low-cost thin client and a desktop as a service (DaaS) approach. The technology is largely here today.

But stark economics may end up driving the adoption. The value and cost savings from virtualization techniques escalate as they extend from servers to applications to the PC experience itself.

We're also seeing a lot of churn in the concept of the client device itself. The notion of a full-fledged tower PC for every desktop use scenario -- and the associated costs of maintenance, support, security, and upgrades -- is giving way to the right device for the use case. Why not the right software service mix for the right use case too?

For hosting organizations, telcos, cloud services providers and software as a service providers, the allure of providing a complete PC desktop and the required applications as a subscription is enticing. But this is a big subject, and many people are still just wrapping their heads around the implications.

Consequently, I recently participated in a webinar with virtual desktop infrastructure vendor Desktone that I think really gets to some core issues and insights on VDI and the models that support it. Joining me in the discussion were Jeff Fisher, senior director of strategic development at Desktone; Rachel Chalmers, research director of infrastructure management at The 451 Group, and Robin Bloor, analyst at Hurwitz & Associates.

Here are some excerpts:
Fisher: Clearly, everyone is talking about cloud computing. You can’t look anywhere within IT and not hear about it. It’s amazing to see it surpassing even the frenzy around virtualization. In fact, most of the conversations people are having today are around virtualization and how it can take place in the cloud. Everyone wants to focus on all the benefits, including anytime/anywhere access and subscription economics.

However, like any other major trend that unfolds in IT, there are a number of challenges with the cloud. When people talk about cloud computing with respect to the enterprise, in most cases they’re talking about virtualizing server workloads and moving those workloads into a service provider cloud.

Clearly, that shift introduces a number of challenges. Most notable is the challenge of data security. Because server workloads are very tightly-coupled with their data tier, when you move the server or the server instance, you have to move the data. Most IT folks are not really comfortable with having their data reside in a service provider or external data center.

For that reason Desktone believes that it’s actually going to be virtual desktops, not servers, that are the better place to start and are going to be what jump starts this whole enterprise adoption of cloud computing.

The reason is pretty simple. Most fixed corporate desktop environments -- those are desktops that have a permanent home within your enterprise -- already probably have their application and user data abstracted away from the actual desktop. The data is not stored locally. It's stored somewhere on the network, whether it's security credentials within the Active Directory (AD), whether it's home drives that store user data, or it's the back end of client-server applications. All the back-end systems run within your data center.

The other interesting thing is the notion of the service-provider cloud, which can actually traverse both the enterprise and the service-provider data centers.

So, depending on the use case, service providers can either keep the virtual infrastructure, and the racks powering that virtual infrastructure, in their data center or, in certain cases, put the physical infrastructure within the enterprise data center -- what we call the customer premises equipment model. The most important thing is that it doesn't break the model.

There is flexibility in the location of the actual hosting infrastructure. Yet, no matter where it resides, whether it’s in a service provider data center or an enterprise data center, the service provider still owns and operates it and the enterprise still pays for it as a subscription.

Chalmers: There are three sensible places to run a desktop virtual machine. One is on the physical client, which gives you a whole bunch of benefits around the ability to encrypt and lock down a laptop and manage it remotely. One is to run it on the server, the tried-and-tested VMware VDI or Citrix XenDesktop method. That's appropriate for a lot of cases, but when you run out of server capacity or storage in the server-hosted desktop virtualization model, a lot of companies would like elastic access to off-site resources.

This is particularly appropriate, for example, for retailers who see a big balloon in staffing -- short-term and temporary staffing around the holiday season, although possibly not this year -- or for companies that are doing things off-shore and want to provide developer desktops in a very flexible way, or in education, where big summer classes arrive and a whole bunch of desktops need to be fired up for students.

This kind of elastic provisioning is exactly what we see on the server virtualization side around cloud bursting. On the desktop side, you might want to do cloud bursting. You might even want to permanently host those desktops up in the cloud with a hosting provider, and you want exactly the same things that you want from a server cloud deployment. You want a very, very clean interface between the cloud resources and the enterprise resources, and you want very, very granular chargeback and billing.

And so, we see cloud-hosted desktop virtualization as a special case of server-hosted desktop virtualization.

Gardner: I think we’re entering a new era in how people conceive of compute resources. ... What’s happening now is that organizations are starting to re-evaluate the notion that a one-size-fits-all PC paradigm makes sense.

We have lots of different slices of different types of productivity workers. As Rachel mentioned, some come and go on a seasonal basis, some come and go on a project basis. We're really looking at slicing and dicing productivity in a new way, and that forces the organization to re-evaluate the whole notion of application delivery.

When you start moving toward virtualization and you start re-thinking about infrastructure, you start re-thinking the relationship between hardware and software. You start re-thinking the relationship between tools and the deployment platform, as you elevate the virtualization and isolate applications away from the platform, and you start re-thinking about delivery.

If you take the step toward terminal services and delivering some applications across the wire from a server-based host, that continues to tip this a little bit toward, “Okay, if I could do it with a couple of apps, why not look at more? If I could do it with apps, why not with desktop? If I can do it with one desktop, why not with a mobile tier?”

If we look at the cost pressures that organizations are under, recognizing that it's maintenance and support, and risk management and patch management, that end up being the lion's share of the cost of these systems, we're really at a compelling point where the cost and availability of different alternatives have sparked a re-thinking.

So we’re really going through this period of transformation, and I think that virtualization has been a catalyst to VDI and that VDI is therefore a catalyst into cloud. If you can do it through your servers, somebody else can do it through theirs.

When we start really seeing total costs tip as a result, the delta between doing it yourself and then doing it through some of these newer approaches is just super-compelling. Now that we’re entering into an economic period, where we’re challenged with top-line and bottom-line growth, people are not going to take baby steps. They’re going to be looking for transformative, real game-changing types of steps.

This is particularly relevant if they’re commodity level types of applications and services. It could be communications and messaging, it could be certain accounting or back office functions. It just makes a lot of sense to start re-evaluating. What we haven’t seen, unfortunately, is some clear methodologies about how to make these decisions and boundaries inside of organizations with any sort of common framework or approach.

It’s still a one-off company by company approach -- which workers should we keep on a full-fledged PC? Who should we put on a mobile Internet device, for example? Who could go into a cloud-based applications hosting type of scenario that you’ve been describing?

It’s still up in the air and I’m hoping that professional services and systems integrators over the next months and years will actually come up with some standard methodologies for going in and examining the cost-benefit analysis, what types of users and what types of functions and what types of applications it makes sense to put into these different ... environments.

Bloor: One of the things that is really important about what’s happening here with the virtualization of the desktop is just the very simple fact that desktop costs have never been well under control.

The interesting thing is that with end users that we’ve been talking to earlier this year, when they look at their user populations, they normally come to the conclusion that something like 70 or 80 percent of PC users are actually using the PC in a really simple way. The virtualization of those particular units is an awful lot easier to contemplate than the sophisticated population of heavy workstation use and so on.

With the trend that’s actually in operation here, and especially with the cloud option where you no longer need to be concerned about whether your data center actually has the capacity to do that kind of thing, there’s an opportunity with a simple investment of time to make a real big difference in the way the desktop is managed.

... From the corporate point of view, if you’re somebody that’s running a thousand desktops or more it’s a problem. It is a problem in terms of an awful lot of things but mostly it’s a support issue and it’s a management issue. When you get an implementation that involves changing the desktop from a PC to a thin client and you don’t put anything into the data center, it improves.

You’ve now got a situation where you don’t need cages in the data center running PC blades or running virtualized blades to actually provide the service. You don’t need to implement the networking stuff, the brokering capability, boost the networking in case it’s clashing with anything else, or re-engineer networks.

All you do is you go straight into the cloud and you have control of the cloud from the cloud. It’s not going to be completely pain free obviously, but it’s a fairly pain-free implementation.

It absolutely stunned me that a cloud offering became available earlier this year, because that meant somebody had to have been thinking about this two years ago in order to put together the technology that would enable an offering like that.

Gardner: This is a much more lucid and rational architecture. We've found ourselves, over the past 15 or 20 years, the victim of a disjointed market rollout. We really didn't anticipate the role of the Internet when client-server came about. Client-server came about quickly, just after local area networks (LANs) were established.

We really hadn’t even rationalized how a LAN should work properly, before we were off and running into bringing browsers in TCP/IP stacks. So, in a sense, we've been tripping over and bouncing around from one very rapid shift in technology to another. I think we’re finally starting to think back and say, “Okay, what’s the real rational, proper architectural approach to this?”

We recognize that it’s not just going to be a PC on every desktop. It’s going to be a broadband Internet connection in every coat pocket, regardless of where you are. That fundamentally changes things. We’re still catching up to that shift.

Bloor: From an architect’s point of view, if nobody had influenced you in any way and you were just asked to draw out a sense of a virtualization of services to end users, you would probably head in this direction. I have no doubt about it. I’ve been an architect in my time, and it’s just very appealing. It looks like what Desktone DaaS has here is resources under control, and we’ve never had that with a PC.
Read a full transcript of the webinar discussion. Listen and watch.

Monday, January 5, 2009

A technical look at how parallel processing brings vast new capabilities to large-scale BI and data analysis

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Greenplum.

Read a full transcript of the discussion.

Internet-scale data collection, swarms of sensor outputs, and content clouds from the mobile device fabric -- as well as enterprises piling up ever more kinds of metadata to analyze -- have stretched traditional data-management models to the breaking point.

Yet advances in parallel processing using multi-core chipsets have prompted new software approaches such as MapReduce that can handle these data chores at surprisingly low total cost. The technical response to oceans of data is something that has been building for some time. But the time now seems ripe to bring the technical solutions of lower-cost parallel computing advances into play with the economic imperatives of huge data crunching requirements.

And so just what are the technical underpinnings that support the new demands being placed on, and by, extreme data sets? What economies of scale can we anticipate? How will these advances spur the movement of data to Internet cloud models?

BriefingsDirect's Dana Gardner put these and other questions to a panel of new data architecture experts, to plumb how parallelism, modern data infrastructure, and MapReduce technologies come together. He spoke with Joe Hellerstein, professor of computer science at UC Berkeley; Robin Bloor, analyst at Hurwitz & Associates; and Luke Lonergan, CTO and co-founder at Greenplum.

Here are some excerpts:
Data growth has been following and exceeding Moore's Law over time. What we've been seeing is that the data sets people are gathering and storing have been doubling even faster than every 18 months. ... We're going to see all kinds of large organizations gathering data from all sorts of automated sources.

... What's changed in the last few years is that clock speeds on processors have stopped doubling every 18 months. ... Instead, what they are doing is putting more processing cores on every chip. You can expect the number of processors on your chip to double every 18 months, but they're not going to get any faster.

So data is growing faster, and we have chips basically standing still, but you're getting more of them. If you want to take advantage of that data, you're going to have to program in parallel to make use of all those processors on the chips. That's the confluence that's happening.

There are very many people storing and analyzing more data. We're very encouraged that most of our customers are finding new uses for data that are earning them more money. Consequently, the driver to analyze more and more data continues to grow. As our customers get more successful, this use of data is becoming really important.

It's easy to parallelize the data. You break it up into little chunks and you throw it out to different machines. What can we do cleverly in computing with that kind of a framework? There are a lot of ideas for how to move forward ... where you are taking this massively parallel data-flow approach.
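
To make that chunk-and-fan-out idea concrete, here is a minimal, hypothetical Python sketch. It uses a local process pool to stand in for a cluster of machines, so it only illustrates the data-flow shape -- partition, map in parallel, merge -- and is not meant to represent Greenplum's implementation.

from collections import Counter
from multiprocessing import Pool

def count_words(chunk):
    # Map step: compute a partial word count for one chunk of lines.
    counts = Counter()
    for line in chunk:
        counts.update(line.split())
    return counts

def merge_counts(partials):
    # Reduce step: combine the partial counts into one overall result.
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

if __name__ == "__main__":
    lines = ["the quick brown fox", "the lazy dog", "the quick dog"]
    # Partition the data into chunks, one per worker process.
    chunks = [lines[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        # Run the map step over the chunks in parallel.
        partials = pool.map(count_words, chunks)
    print(merge_counts(partials))  # e.g. Counter({'the': 3, 'quick': 2, 'dog': 2, ...})

The same shape scales out: in a real MapReduce or parallel-database system the chunks live on different machines and the merge happens over the network, but the programmer's job is still the two small functions.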

One thing that's kind of invisible is that there is a lot of data out there that's not being analyzed fast enough to be analyzed effectively. That's something that I think parallelism is going to address. ... The only reason not to gather that data is when you run out of affordable processing and storage. Anybody with the budget will have as much data as they can budget for and will try to monetize that. It's going to be pervasive.

The core problem we've solved is the ability for our engine to redistribute the data and the computation on the fly, as these queries and analysis are being performed. ... The combination of the software-switch interconnect, which Greenplum built into the Greenplum product, and the underlying use of commodity parallel computers, is brought together in this database system that makes it possible to do SQL query and languages like MapReduce with automatic parallelism.

Businesses have invested a tremendous amount of their time over the last 15 to 25 years in SQL, and some of the more traditional kinds of business analysis that pay off very well are ensconced in that programming model. So, packaging a system that can do transactional, mixed workloads with large amounts of concurrency, with applications that use the SQL paradigm, is very important.

Packaging this together as software plus hardware, making that available as a reference architecture for customers, has been very important and has been very successful in our accounts at New York Stock Exchange, Fox, MySpace, and many others.

The combination of SQL and MapReduce in a unified way in programming environments ... is a very pragmatic [step] that can help with people's ability to get their hands on data in an organization. ... You want to have the same access to all your data via either an SQL interface or a MapReduce programming interface. ... You ought to be able to access those with whatever language suits you, mix and match.

Some things are easier to do in MapReduce, and some things are easier to do in SQL, even when you know both. Good programmers have a lot of tools in their tool belt. They like to be able to use whatever tool is appropriate for the task. Having both of these things interleaved is really quite helpful.
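
As a purely illustrative sketch of that interleaving (the table, column, and function names are hypothetical, and this is plain Python with SQLite, not Greenplum's API), here is the same group-and-sum expressed both ways:

import sqlite3
from collections import defaultdict

rows = [("books", 12.0), ("music", 5.0), ("books", 8.0), ("games", 20.0)]

# SQL formulation: declarative; the engine decides how to execute it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (category TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)", rows)
sql_result = dict(conn.execute(
    "SELECT category, SUM(amount) FROM sales GROUP BY category"))

# MapReduce formulation: the programmer supplies the two functions.
def map_fn(row):
    category, amount = row
    yield category, amount          # emit (key, value) pairs

def reduce_fn(key, values):
    return key, sum(values)         # combine all values for one key

grouped = defaultdict(list)
for row in rows:
    for key, value in map_fn(row):
        grouped[key].append(value)
mr_result = dict(reduce_fn(k, v) for k, v in grouped.items())

assert sql_result == mr_result      # same answer either way
print(sql_result)                   # e.g. {'books': 20.0, 'music': 5.0, 'games': 20.0}

The SQL formulation leaves the execution and parallelization plan to the engine; the MapReduce formulation gives the programmer direct control over the map and reduce steps, which helps when the logic doesn't fit neatly into SQL.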

[The solution] is about users being able to gain access to all that power. What really turned the corner for general data analysis using SQL is the ability for a user not to have to worry about what kind of table structure they have. They can have lots of small tables joining to lots of big tables, and big tables joining to each other.

What the developer needs is an engine that doesn't care how the data is distributed, per se -- just being able to use all of that parallelism on the problems of interest. ... The physical model of how the database is distributed in a shared-nothing architecture in a Greenplum system is not visible to the developer.

There are a couple of questions about how an individual organization's data will end up in the cloud. Inevitably it will, but in the short-term, people like to keep their data close, particularly database data that's traditionally been in the warehouses, very carefully managed. ... It's going to be some time until we really see everybody's data warehouses up in the cloud. ... How long will it be until you really get big volumes of data in the cloud[?] The answer is that certainly new applications will be up there. We may start to see old data getting uploaded in the cloud as well.

We'll start to see big data sets up there that don't necessarily belong to anyone, and they are going to be big. In that environment, you can imagine big data analytics will have to run in the cloud, because that's where the data will be. One of the fun things about the cloud that's really exciting is the elasticity of the resources. You don't buy yourself a data center full of machines, but you rent as many machines as you need for a task.

If you have a task that's going to look at a lot of data, you would rent a lot of machines for a few hours, and then you would shrink your pool. What this is going to allow people to do is that even small organizations may, for a short period of time, look at an enormous amount of data, which perhaps doesn't originate in their own data production environment, but is something that they want to utilize for their purposes.

Disk densities show no signs of slowing down. So, data is going to be essentially no cost. The data-gathering infrastructure is also going to be mechanized. We're going through what I call the industrial revolution of data production. We're just going to build machines to generate data, because we think we can get value out of that data, and we can store it essentially for free.

The compute cost of multi-core with parallelism is going to continue Moore's Law; it's just going to continue it in a parallel programming environment. If we can get all those cores looking at all that data, it won't cost much to do that, and that cost will continue to halve.

The only real barrier to the process is to make those systems easy to program and manageable. Cloud helps somewhat with manageability, and programming environments like SQL and MapReduce are well-suited to parallelism. We're going to just see an enormous use of data analysis over time. It's just going to grow, because it gets cheaper and cheaper and bigger and bigger.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Greenplum.