Wednesday, June 3, 2009

TIBCO takes PaaS plus integration capabilities to global enterprises via their choice of clouds

Combining platform as a service (PaaS) with a built-in ability to integrate and manage hybrid cloud deployments, TIBCO Software today threw its hat in the cloud computing ring by taking its middleware and Java development and deployment platforms to new heights.

Coinciding with the JavaOne conference and coming on the heels of other PaaS announcements this week, TIBCO debuted TIBCO Silver via an online virtual conference. While general availability of Silver is not due until 2010, private beta launches begin this month, as the start of a rolling series of expanding beta launches this year. [Disclosure: TIBCO is a sponsor of BriefingsDirect Analyst Insights Edition podcasts.]

TIBCO's take on PaaS is notable for its emphasis on global 2000 enterprises, with a target on custom business applications, and with a largely automated means to integrate cloud applications and processes with on-premises IT resources and data. Silver is also working to appeal to corporate developers via initial support of Java, Spring, POJO, and Ruby. Later, Silver will add support for Python, Perl, C, C++, and .NET. That covers a lot of development territory.

As for deployment, TIBCO is starting out on Amazon EC2, but will ease portability of applications and services built with Silver to other popular cloud providers' offerings. The goal is to provide a universal middleware and runtime platform layer -- as a service or as a private install -- that can accommodate mainstream corporate developers with the choice of any major cloud and any major tool and framework, said Rourke McNamara, product marketing director at TIBCO.

"ISVs are welcome, but we're focusing on global IT with Silver," said McNamara.

Relying heavily on the ActiveMatrix platform, TIBCO is making Silver "self-aware" for cloud use by largely automating provisioning and elastic accommodation of the required application support infrastructure. With the emphasis on enterprises, TIBCO is also building in governance, security, and the ability to meter services based on policies and service level agreements, the company said.

"Because it is self-aware, TIBCO Silver manages the performance of the application and dynamically deploys additional resources to ensure the SLA is met," said the Palo Alto, CA-based supplier in a release.
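The "self-aware" behavior TIBCO describes amounts to a control loop: observe application performance, compare it to the SLA, and add or release resources accordingly. A minimal sketch of that loop in Python -- purely illustrative, with an assumed 500 ms latency SLA; this is not TIBCO Silver's actual logic:

```python
# Illustrative SLA-driven scaling decision -- not TIBCO Silver's actual code.
# Assumed policy: keep 95th-percentile latency under a 500 ms SLA.

def instances_needed(current_instances, p95_latency_ms, sla_ms=500):
    """Decide how many instances to run given observed latency."""
    if p95_latency_ms > sla_ms:
        # SLA breached: add capacity proportional to the overshoot.
        factor = p95_latency_ms / sla_ms
        return max(current_instances + 1, round(current_instances * factor))
    if p95_latency_ms < sla_ms * 0.5 and current_instances > 1:
        # Comfortably under SLA: release an instance to save cost.
        return current_instances - 1
    return current_instances
```

A monitoring agent would call this on each measurement interval and hand the result to the provisioning layer; the proportional-overshoot heuristic is an assumption, chosen only to show the shape of the decision.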

Pricing and the business model around TIBCO Silver have not been finalized.

What's more, Silver leverages TIBCO's heritage in complex event processing (CEP), SCA composition, BPEL-based orchestration, and SOA governance to enhance the automation of application performance and extensibility, even while running in third-party clouds, said McNamara. TIBCO's CEP engine is embedded into Silver to allow for policy-based rules that manage how cloud-based applications can be accessed, used, metered, and scaled based on a variety of use cases and business process variables.
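In CEP terms, policy-based management means matching a stream of usage events against declarative rules that trigger metering, throttling, or scaling actions. A toy sketch of the idea -- the rules, thresholds, and event fields here are invented for illustration and are not Silver's API:

```python
# Toy policy engine in the spirit of CEP-driven cloud governance.
# Each rule pairs a predicate over an event with an action name.

RULES = [
    (lambda e: e["calls_per_min"] > 1000, "throttle"),
    (lambda e: e["tenant_tier"] == "metered", "record_usage"),
    (lambda e: e["queue_depth"] > 50, "scale_out"),
]

def evaluate(event):
    """Return every action whose rule matches the incoming event."""
    return [action for predicate, action in RULES if predicate(event)]
```

A real CEP engine adds temporal windows and event correlation on top of this, but the rule-to-action mapping is the core of what "policy-based" means here.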

[UPDATE: At today's live online conference, Matt Quinn, Vice President of Product Management and Strategy at TIBCO, demonstrated using Silver with Ruby on Rails and automating cloud deployments.

Subject-based addressing in Silver allows for information flows, like mashups, but extends publishing into app dev. Cool.
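Subject-based addressing, a TIBCO Rendezvous hallmark, routes messages by matching hierarchical, dot-separated subject names rather than fixed endpoints. A minimal sketch of the matching rule -- the wildcard conventions (`*` for one element, `>` for the remainder) follow the Rendezvous style; the subject names are made up:

```python
def subject_matches(pattern, subject):
    """Match dot-separated subjects; '*' matches one element, '>' the rest."""
    p, s = pattern.split("."), subject.split(".")
    for i, elem in enumerate(p):
        if elem == ">":          # '>' swallows everything that follows
            return True
        if i >= len(s):          # pattern is longer than the subject
            return False
        if elem != "*" and elem != s[i]:
            return False
    return len(p) == len(s)      # no trailing subject elements left over
```

With this kind of matching, a consumer can subscribe to "orders.*.created" and receive events from every region without knowing any publisher's address, which is what lets information flows extend from mashups into app dev.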

Silver has three parts: design studio, "intelligent" deployment, and administrator.

Studio takes an app and/or services from model to configuration. It allows for a highly visual approach, or for tool and coding choices based on developer preferences. Composition comes from outside tools, but linking of components can be automated via Silver.

Governance gets baked in via the Silver Center Console and Administrator, says Quinn. The process seems pretty seamless through the steps of development, deployment, and configuration.

Quinn wraps up by showing how SOA middleware when applied to cloud provides the best of SOA with the economics and RAD that customers want. SOA plus cloud get the job done.

Werner Vogels, CTO of Amazon, is appearing at TIBCO's live event (follow on Twitter at #TIBCOSilver).]

HP tackles a variety of current IT challenges and advances in free three-day 'virtual conference'

Given the current economic downturn and tightening budgets, frugal is good and free is better. Hewlett-Packard (HP) is addressing the new IT value era with a series of complimentary international virtual events that will give IT professionals online access to briefings on business and technology trends from HP executives and outside experts.

Next week, June 8-10, HP will kick off the series with the HP Solutions Virtual Event for Europe, the Middle East, and Africa (EMEA). The three-day session will feature 30 breakout sessions, seminars, presentations and demo theater presentations. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Registration for the event is free and, because it's presented entirely online, there are no travel expenses or out-of-office time involved. Also, the full conference will be on replay until September. Future virtual events will be offered for Asia Pacific and Japan, June 16-18, and the Americas, July 28-30.

Next week's breakouts will include four main IT themes -- Data Center Transformation, Service Management, Information Management, and Applications and SOA Transformation -- as well as two leadership themes -- green IT and cloud computing. The virtual presentation will also include chat sessions with the many prominent speakers.

Among the topics to be covered are such current concerns as:
  • Data center initiatives

  • Rethinking virtualization

  • IT energy efficiency

  • The new era in service management

  • ITIL v3

  • Surviving in a world of hyper disruption

  • Cloud and your business technology ecosystem

  • Platforms for tomorrow's business technology

The full list of sessions (posted in Central European Time) is available on the virtual event Web site. Participants are free to attend for one session, one day, or the entire three-day event. These presentations and knowledge resources are not just for HP users; they also make sense for the vast HP partner community and the full ecology of related providers.

The speakers include a who's who of HP technology thought leaders, including many who are familiar to BriefingsDirect readers and listeners. These include John Bennett, Bob Meyer, Rebecca Lawson, Russ Daniels, Lance Knowlton, and Paul Evans, all of whom have appeared in BriefingsDirect podcasts.

For those interested, HP is providing an online demo that can be accessed prior to the event. The demo is available at:

Registration for the event itself is at:

Tuesday, June 2, 2009

Mainframes provide fast-track access to private cloud benefits for enterprises, process ecosystems

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Learn more. Sponsor: CA.

Read a full transcript of the interview.

Enterprises are seeking cloud computing efficiency benefits, subsequent lower total costs, and a highly valued ability to better deliver flexible services that support agile business processes.

Turns out so-called private clouds, or those cloud computing models that enterprises deploy and/or control on-premises, have a lot in common with longstanding mainframe computing models and techniques. Back to the future, you might say.

New developments in mainframe automation and other technologies increasingly support the use of mainframes for delivering cloud-computing advantages -- and help accelerate the ability to solve recession-era computing challenges around cost, power, energy use and reliability.

More evidence of the alignment between mainframes, mainframe automation and management, and cloud computing comes with today's announcement that CA has purchased key assets of Cassatt Corp., maker of service level automation and service level agreement (SLA) management software.

I had the pleasure to recently learn more about how the mainframe is in many respects the cloud in a sponsored podcast interview with Chris O'Malley, executive vice president and general manager for CA's Mainframe Business Unit.

Here are some excerpts:
Gardner: What makes cloud so appealing and feasible right now?

O'Malley: Cloud as a concept is, in its most basic sense, virtualizing resources within the data center to gain that scale of efficiency and optimization. ... Physically there are many, many servers that support the ongoing operations of a business. CFOs and CEOs are starting to ask simple, but insightful, questions about why we need all these servers and to what degree these servers are being utilized.

When they get answers back and it's something like 15, 10, or 5 percent utilization, it begs for a solution to the problem to start bringing a scale of virtualization to optimize the overall data center to what has been done on the mainframe for years and years.

... It's about both the need from a business standpoint of trying to respond to reduced cost of computing and increased efficiency at a time when the technologies are becoming increasingly available to customers to manage distributed environments or open systems in a way similar to the mainframe.

Larger customers are using their mainframe in a highly virtualized way. They've been doing it for 30 years. It was the genesis of the platform. ... They try to get as much out of it as they possibly can. So, from its beginning, it was virtualized.

The viability of things like CRM, and the need to coordinate that data with what for most customers is 80 percent of their mission-critical information residing on the mainframe, is making people figure out how to fix those problems. It's making this cloud slowly, but pragmatically, come true and become a reality in helping to better support their businesses.

The distributed environment and the open-system environment, in terms of its genesis, was the reverse of what I described in the mainframe. The mainframe, at some point, I think in the early '90s, was considered to be too slow to evolve to meet the needs of business. You heard things like mounting backlog and that innovation wasn't coming to play.

In that frustration, departments wanted their server with their application to serve their needs. It created a significant base of islands, if you will, within the enterprise that led to these scenarios where people are running servers at 15, 10, or 5 percent utilization. That genesis has been the basic fiber of the way people think in most of these organizations.

This 15 or 10 percent utilization is what we consistently see, customer after customer after customer. ... You're seeing the pendulum come back. This is just getting too expensive, too complex, and too hard to keep up with business demands, which sounds a lot like what people's objections were about the mainframe 20 years ago. We're now seeing that maybe a centralized model is a better way to serve our needs.

Gardner: How does that relate to where the modern mainframe is?

O'Malley: The modern mainframe is effectively an on-demand engine. IBM has now created an infrastructure that, as your needs grow, turns on additional engines that are already housed in the box. With the z10, IBM has a platform that is effectively an in-house utility ... With the z10 and the ability to expand capacity on demand, it's very attractive for customers to handle these peaks, but not pay for it all year long.

... The mainframe has always been very good at resilience from a security standpoint. The attributes that make up that which is required for a mission-critical application are basically what make your brand. So, the mainframe has always been the home for those kinds of things. It will continue to be.

We're just making the economics better over time. The attributes that are professed or promised for the cloud on the distributed side are being realized today by many mainframe customers and are doing great work. It's not just a hope or a promise.

Gardner: There is some disconnect, though, cultural and even generational. A lot of the younger folks brought up with the Web, think of cloud applications as being Web applications.

O'Malley: Despite all these good things that I've said about the mainframe, there are still some nagging issues. The people who tend to work on them tend to be the same ones who worked on them 30 years ago. The technology that wraps it hasn't been updated to the more intuitive interfaces that you're talking about.

CA is taking a lead in re-engineering our toolset to look more like a Mac than it does like a green screen. We have a brand new strategy called Mainframe 2.0, which we introduced at CA World last year. We're showing initial deliverables of that technology here in May.

... Our first technology within Mainframe 2.0 is called the Mainframe Software Manager. It's effectively InstallShield for the mainframe. We developed that with 20-somethings. In our Prague data center, we recruited 120 students out of school, and they developed that in Java on a mainframe. ... We have 25-year-old people in Prague who have written lines of code that, within the next 12 months, we'll be running at the top 1,000 companies on the face of the earth. There aren't a lot of jobs in life that present you that kind of opportunity.

... The mainframe technologically can do a lot, if not everything you can do on the distributed side, especially with what z/Linux offers. But, we've got to take what is a trillion dollars of investment that runs in the legacy virtual operating system environment and bring that up to 2009 and beyond.

... An open system has its virtues and has its limits. We're raising the abstraction to the point where, in a collective cloud, you're just going to use what's best and right for the nature of work you're doing without really even knowing whether this is a mainframe application -- either in z/OS or z/Linux -- or it's Linux on the open system side or HP-UX. That's where things are going. At that point, the cloud becomes true to the promise being touted at the moment.

To be very honest, it's very important that we bring a cool factor to the mainframe to make it a platform that's equally compelling to any other. When you do that, you create some interesting dynamics to getting the next generation excited about it.
Read a full transcript of the interview.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Learn more. Sponsor: CA.

LongJump eyes fully portable, best-of-breed PaaS holy grail for ISVs going to the 'open' cloud

Even The Economist newspaper is worried about cloud lock-in. And a lot of people talk about open clouds, but not many necessarily do anything about it.

LongJump's announcement today of an enhanced platform-as-a-service (PaaS) offering -- the LongJump Business Applications Platform -- is a strong contender for a best-of-breed cloud computing development and deployment approach that reduces the risk of cloud lock-in.

Designed with independent software vendors (ISVs) that want to go to the cloud in mind, LongJump's PaaS 6.2 version allows applications to be deployed almost anywhere -- on Amazon, on Rackspace, in an enterprise data center, or on any standards-based runtime stack, says Pankaj Malviya, Founder and CEO of LongJump.

That's flexible deployment among public clouds, private clouds or both (with an ability to manage among them to come in a future release, I'd wager). LongJump’s extensible PaaS gives organizations options in hosting environments including third-party clouds such as Amazon EC2, or a private cloud safely tucked behind the company’s own firewall.

LongJump won't be alone in seeking the holy grail of a truly open, portable, extensible, and neutral cloud PaaS approach, but they have my attention. We'll need to keep an eye on Salesforce, TIBCO Software, IBM, and Oracle/Sun on the topic.

For now, LongJump's one-size-fits-all model (to build with Java, Ajax, SOAP, REST, Eclipse) helps ISVs, businesses and developers as they seek a sleek path to cloud-based custom applications. The tools also provide a common way to build thin, fit and fat app UIs.

And integrated modeling, workflow, and rules capabilities allow the applications to behave as service components, as parts of extended business processes. Cool.

LongJump, based in Sunnyvale, CA, unveiled the latest version of its LongJump Business Applications Platform at JavaOne in San Francisco. LongJump argues that the “PaaS approach should not dictate loss of control.” Hard to argue with that.

The platform comes with a complete customer relationship management (CRM) solution and also includes a catalog of customizable business applications. Custom development can be done using Java classes, or via visual workflows. The tools support object inheritance across a variety of types.

The catalog offers out-of-the-box but customizable apps for things like relational data management and analysis, form-based applications, resource allocation and management, project fulfillment, approvals and workflow.

LongJump provides “building blocks” of common processes and functions that developers can use to build custom apps to meet the specific needs of their vertical market, without having to reinvent the wheel.

This reuse approach might once have been called service-oriented architecture (SOA) but in the post-death-of-SOA world, the acronym is not mentioned in the LongJump announcement. Last week Malviya told me, however, this is all "built on SOA." That makes me feel better, and it should you, too.

The “extensible PaaS” approach “offers developers and businesses a high level of control, customization and extensibility,” according to the company. It envisions enterprises using its PaaS product to create industry standards-based cloud apps that include integration of legacy application data.

The pitch for ISVs positions LongJump’s PaaS for developing new software-as-a-service (SaaS) products. Where is Microsoft in this space? Still in the starting blocks ... and we still don't know how portable the code will be, or if Azure will support any variety of runtimes. Unlikely but essential, if you ask me. (And you'd think Microsoft would want to attract more than VB developers!)

For enterprise customers the new features support the private cloud approach that may match the comfort level corporate IT departments will have with cloud computing. The focus is on providing the customer with control based on its business rules, rather than ceding power to a PaaS or cloud vendor. Developers are free to create unique applications unfettered by restrictions on deployment options, or branding.

New features from LongJump, which may provide a security blanket for organizations taking a first step on the PaaS path, include:
  • Secure platform for compliance-sensitive applications (21 CFR Part 11).

  • Digital signature support, including multiple hierarchical signature blocks.

  • Field-level change tracking and auditing, and record-level event-driven snapshots.
This flexibility was a selling point for The David Allen Company, a professional training, coaching, and management consulting organization, based in Ojai, CA, which selected LongJump over better known PaaS vendors. Robert Peake, CIO at David Allen, said that after evaluating PaaS vendors, LongJump offered “the most flexible application platform” for his company’s unique requirements.

LongJump also caught the eye of Gartner analysts, who deemed it a “Cool Vendor” in an April report on Cloud Computing System and Application Infrastructure, albeit with a caveat that it was the company’s unique approach rather than a detailed product evaluation that warranted the listing.

Part of the LongJump news at JavaOne is its support for Sun Microsystems’ – soon to be Oracle’s – MySQL database.

For those interested, LongJump and MySQL are hosting a free webinar, “Developing and Deploying SaaS Applications with MySQL and LongJump,” on Thursday, June 11, at 4 p.m. EDT, 1 p.m. PDT. Information and registration are available at

LongJump is a service of Relationals Inc., a privately-held provider of on-demand CRM and sales force automation (SFA) applications with more than 400 enterprise customers.

This company is hot ... in the right place, with the right functions, at the right time. I can think of several suitors that would do well to jump-start their own cloud strategy with such a PaaS solution.

BriefingsDirect contributor Rich Seeley provides research and editorial assistance to BriefingsDirect. He can be reached at Writer4Hire.

Monday, June 1, 2009

Dana Gardner interviews Forrester's Frank Gillett on future of mission-critical cloud computing

Watch the video. Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Learn more. Sponsor: Akamai Technologies.

Read a full transcript of the discussion.

The impact of cloud computing is most often analyzed through its expected disruption of IT vendors, or the media, or as an economic balm for developers and Web 2.0 start-ups.

Yet cloud computing is much more than just a newcomer on the Internet hype curve. The heritage of what cloud computing represents dates back to the dawn of information technology (IT), to the very beginnings of how government agencies and large commercial enterprises first accessed powerful computers to solve complex problems.

We've certainly heard a lot about the latest vision for cloud computing and what it can do for the delivery of applications, services and infrastructure, and for application development and deployment efficiencies. So how does cloud computing fit into the whole journey of the last 35 years of IT? What is the context of cloud computing in the real-world enterprise? How do we take the vision and apply it to today's enterprise concerns and requirements?

To answer these questions, we need to look at the more mundane IT requirements of security, reliability, management, and the need for integration across multiple instances of cloud services. To help understand the difference between the reality and the vision for cloud computing, I recently interviewed Frank Gillett, vice president and principal analyst for general cloud computing topics and issues at Forrester Research.

You can also watch the interview as a series of video streams at, or read a full transcript. The discussion is sponsored by Akamai Technologies.

Here are some excerpts:
Gardner: You know, Frank, the whole notion of cloud computing isn't terribly new. I think it's more of a progression.

Gillett: When I talk to folks in the industry, the old timers look at me and say, "Oh, time-sharing!" For some folks this idea, just like virtualization, harkens back to the dawn of the computer industry and things they've seen before. ... We didn't think of them as cloud, per se, because cloud was just this funny sketch on a white board that people used to say, "Well, things go into the network, magic happens, and something cool comes from somewhere."

So broadly speaking, software as a service (SaaS) is a finished service that end users take in. Platform as a service (PaaS) is not for end users, but for developers. ... Some developers want more control at a lower level, right? They do want to get into the operating system. They want to understand the relationship among the different operating systems instances and some of the storage architecture.

At that layer, you're talking about infrastructure as a service (IaaS), where I'm dealing with virtual servers, virtualized storage, and virtual networks. I'm still sharing infrastructure, but at a lower level in the infrastructure. But, I'm still not nailed to this specific hardware the way you are in say a hosting or outsourcing setup.

Gardner: We're in the opening innings of cloud computing?

Gillett: A lot of the noisy early adopters are start-ups that are very present on the Web, social media, blogs, and stuff like that. Interestingly, the bigger the company the more likely they are to be doing it, despite the hype that the small companies will go first.

... It doesn't necessarily mean that your typical enterprise is doing it, and, if they are, it's probably the developers, and it's probably Web-oriented stuff. ... In the infrastructure layer, it's really workloads like test and development, special computation, and things like that, where people are experimenting with it. But, you have to look at your developers, because often it's not the infrastructure guys who are doing this. It's the developers.

It's the people writing code that say, “It takes too long to get infrastructure guys to set up a server, configure the network, apportion the storage, and all that stuff. I'll just go do it over here at the service provider."

... There is no one thing called "cloud," and therefore, there is no one owner in the enterprise. What we find is that, if you are talking about SaaS, business owners are the ones who are often specifying this.

Gardner: Who is the "one throat to choke" if something goes wrong?

Gillett: Bottom line, there isn't one, because there is no one thing. ... They are on their own within the company. They have to manage the service providers, but there is this thing called the network that's between them and the service providers.

It's not going to be as simple as just going to your network provider, the Internet service provider, and saying, "Make sure my network stays up." This is about understanding and thinking about the performance of the network end to end, the public network -- much harder to control than understanding what goes on within the company.

This is where you have to couple looking at your Internet or network service provider with the set of offerings out there for content and application acceleration. What you're really looking for is comprehensive help in understanding how the Internet works, how to deal with limitations of geography and the physics, the speed of light, making sure that you are distributing the applications correctly over the network -- the ones that you control and architect -- and understanding how to work with the network to interact with various cloud-service providers you're using across the network.

... Even though you can't get the uber "one throat to choke," at the network layer you can go for a more comprehensive view of the application and the performance of the network, which is now becoming a critical part of your business process. You depend on these service providers of various stripes scattered across the Internet.

If you take the notion of service-oriented architecture (SOA), and explode it across the public network, now you need sort of the equivalent of the internal network operation center, but you need help from an outside provider, and there's a spectrum of them obviously to do that.

When you're asking about governance, the governance of the network is really important to get right and to get help with. There is no way for an individual company to try and manage all that themselves, because they are not in the public network themselves.

... I spoke to a luxury goods and perfume maker that had a public website with transactions, as well as content, on their website. I said, "How many servers does it take to run your transactions?" And they said it only takes four, and that includes the two redundant ones. "Oh, really? That's all?" They said, "Well, not really. Three quarters of my workload is with my application and content acceleration provider. They take care of three quarters of my headache. They make it all work." So, that's a great example.

Gardner: What seems to be missing in this notion of trust, governance, and reliability?

Gillett: There's no such thing as "the" cloud provider, or one cloud provider. Part of the complication for IT is that, not only do they have multiple parties within the company, which has always been a struggle, but as they get into this, they're going to find themselves dealing with multiple providers on the outside.

So, maybe you've got the services still in your IT as an infrastructure. You've got your internal capability. Then, you've got an application, SaaS, and perhaps PaaS, and a business process that somehow stitches all four of those things together. Each one has its own internal complexities and all of it's running over the public network ... So, it's really challenging.

Gardner: How can we integrate across different sets of services from different providers and put them in the context of a business process?

Gillett: If you look at it, a lot of these concepts are embodied in the whole set of ideas around SOA, that everything is manifested as services, and it's all loosely coupled, and they can work together. Well, that works great, as long as you've got good governance over those different services, and you've got the right sort of security on them, the authentication and permissions, and you found the right balance of designing for reuse, versus efficiently getting things done.

... But, as you're hinting, I have to think about how I make that business process work over the Internet. What do I do if that service provider hiccups or a backhoe cuts a fiber-optic cable between me and the service provider?

Now, I'm becoming more dependent on the public Internet infrastructure, once I'm tying into these service providers and tying into multiple parties. Like a lot of things in technology, unless you're going to completely turn over everything to an outside service provider, which sounds like traditional outsourcing to me, the "one throat to choke" is your own. ... If you think about it, it's not that different than when I ran all the infrastructure on my own premises, because I had gear and applications from different parties, and, at the end of day, it was up to me to referee these folks and get them to work together.

... So I look at this, and I say, "Okay, we've got a decade here to sort this out." It's a completely different problem, by the way, to think about how I take the existing applications I run inside my company, and think about migrating them to a service provider.

Gardner: We have a track record of organizations saying, "Listen, I don't want to be in the commodity applications business. I want to specialize in what's going to differentiate me as an enterprise. I don't want to have everyone recreating the same application instance."

Gillett: ... You said, "Cloud is about commoditizing IT, and only things that aren't differentiating leave my company." Not true. ... Cloud services can handle mission-critical workloads, things that differentiate you. In fact, that might only be possible if you do them in a service provider.

... Let me give you an example. Let's say that your business has critical calculations to run overnight, say, for ad placement on websites. Let's say that soaks up huge amounts of computing capacity when you run the workload at night, but sits idle during the day. ... Guess what? That's one of the workloads that runs at Amazon's EC2 IaaS or "computer as a service."

In that case, it's more cost effective and more flexible for them to run it with the service provider, even though it's mission critical. It's a more effective use of resources.
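The arithmetic behind that trade-off is simple. Suppose the nightly job needs 40 servers for 4 hours; compare owning that peak capacity year-round with renting it by the hour. All figures below are hypothetical, chosen only to illustrate the shape of the comparison, not Amazon's actual 2009 pricing:

```python
# Hypothetical numbers for a spiky overnight batch workload.
instances = 40                     # peak servers the nightly job needs
hours_per_night = 4                # the job runs 4 hours, then sits idle
price_per_hour = 0.40              # assumed on-demand rate, USD
owned_cost_per_server_year = 3000  # assumed amortized hardware + power + ops

# Rent only the busy hours from the cloud provider ...
cloud_cost_year = instances * hours_per_night * price_per_hour * 365
# ... versus owning capacity that idles 20 hours a day.
owned_cost_year = instances * owned_cost_per_server_year
```

Under these assumed figures the rented capacity comes to roughly a fifth of the owned cost, which is why a mission-critical but spiky workload can still belong in a service provider.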

Now, let's flip it around the other way. Take a provider that does streaming of media for public websites. You go to the website of a major newspaper or a television network and you want to see their video. This provider helps with that on the back-end. What they found, when they looked at their internal infrastructure, was that they felt they were cheaper than Amazon at running their core infrastructure.

Amazon looked like a nice way to add extra capacity on top, so they wouldn't have to over-provision as much. Amazon also looked like a great way to add capacity in new regions before they had the critical mass to do it cost effectively themselves. Those are two examples of the non-intuitive ways to think about this.

... It feels like we are in a cloud hype bubble right now. All the hype and noise is still on the upswing, but we are going to see this subside and calm down late this year or next year. This is not to say that the ideas aren't good. It's just that it will take a significant amount of time to sort things out: for the offerings to mature, for the early adopters to get in, then the mainstream folks, and then the laggards. It's only as we get deeper into it that we even begin to understand the governance ideas.

... You start thinking about how to distribute application logic to create fast response, good business service levels, and things like that, despite the fact that you think, "We're just selling one thing and all that has to come back to a central database." Not necessarily. So, you really start to think about that. You think about how to prioritize things across the network: this is more important than that. All of it is basically fighting the laws of physics and the speed of light, and working through all sorts of computation trade-offs.

It's also trying to figure out the most cost-effective way to do it. Part of what we're seeing is the development and progression of an industry that's trying to figure out how to most cost-effectively deliver something. Over time we'll see changes in the financial structures of the various service providers, Internet, software or whatever, as they try to find the right way to most cost-efficiently deliver these capabilities.

Gardner: How do enterprises put themselves in position to take advantage of cloud sooner rather than later, and perhaps gain a competitive advantage as a result?

Gillett: ... One of the things that we're telling our infrastructure and operations guys is to get in early, ahead of the developers. Don't let them run willy-nilly and pick a bunch of services. Work with the enterprise architect, the IT architect, to identify some services that fit your security and compliance requirements. Then, tell the developers, "Okay. Here are the approved ones that you can go play with, and here's how we're going to integrate them."

So, proactively, get out in front of these people experimenting with their credit cards, even if it's uncomfortable for you. Get in early on the governance. Don't let that one run away from you.
Read a full transcript of the discussion.

Watch the video. Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Learn more. Sponsor: Akamai Technologies.

Virtual desktop services gain TCO boost with HP clients, Desktone cloud-based 'DaaS' advances

HP and Desktone have made strong market moves into the fast-growing desktops as a service (DaaS) space, also known as desktop virtualization and virtual desktop infrastructure (VDI).

These new products and services show an aggressive ramp-up and sophistication to deliver a fast track to DaaS. HP and Desktone, among others, are banking on the down economy to whet the appetites of many kinds of companies -- and myriad uses -- for these far lower-cost approaches to delivering full PC functionality without the full PC and its local maintenance headaches.

HP last week announced a novel notebook-like thin client device, an entry-level price on mobile units dropping to $550 (yowza!), better WiFi support, and enhanced VDI client and server software. The 4410t mobile thin client, which looks just like a notebook PC, runs Windows Embedded Standard.

As the global VDI market leader, HP also updated its desktop thin client lineup with the t5540, which starts at $199 and features easy update and configuration capabilities, low energy use, a choice of hypervisor (VMware or Citrix Xen), and fuller multimedia support. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

The delivery of these new client devices comes on the heels of HP's recent advances in the data center software that supports them. That helps explain the emphasis in the latest round of client announcements on faster and more complete start-ups, as well as better remote administration.

Desktone, after teaming with NetApp earlier this month, last week announced two significant milestones. First, Desktone secured registered trademarks for the terms "desktops as a service" and "DaaS." Second, it launched improvements to Virtual-D, its hosted virtual desktop platform.

Hmmm ... Seems to me I dropped that DaaS moniker on the Chelmsford, MA start-up almost two years ago. They seemed to like it a lot then. And now I'm sincerely flattered. Well, there goes some more free consulting -- validated, if not always remunerative.

Of course, had they asked, I would have told them to add the word "cloud" -- cloud-based DaaS should be in there somewhere. It leaves open the public and private cloud delivery of DaaS for either service providers or large enterprises (or a federated approach of some kind). Oh, well, maybe Desktone's customers will use "cloud" as their brand on top of Desktone's DaaS picks and shovels.

Before we dive into the improvements, let's look at Desktone's newly minted, official definition of DaaS: DaaS specifically describes an outsourced subscription service for server-hosted desktops powered by the Desktone Virtual-D Platform and delivered by Desktone-certified service provider partners.

More generally speaking, DaaS is a model that in many ways mimics software-as-a-service (SaaS), where software is outsourced to providers that offer applications on-demand as a subscription service. Desktone's DaaS lets PC users tap into virtual desktop computing with a Windows client experience that leverages existing data center infrastructure and network investments. Yep, sounds cloud-like all right.

"There's been a lot of confusion in the marketplace about what DaaS really is," says Jeff Fisher, senior director of strategic development at Desktone. "Other vendors use the term differently, but we've trademarked these terms to describe the virtual desktop hosting infrastructure that allows our partners to deliver DaaS."

Yep, couldn't have said it better myself, even if I already did. Of course, this probably means that Desktone alone will use DaaS from now on. It will be interesting to see if Citrix (a Desktone investor) and Microsoft (in the ecology, you could say) begin using "DaaS" in their literature.

So now that we know what Desktone means when it talks about DaaS, let's look at the "new and improved" Virtual-D. There are four new capabilities for service providers, including multi-tenancy, multi-data-center support, and improved hosting economics. But perhaps the most attractive upgrade for its customers is the fourth: the Virtual-D Service Center.

Virtual-D Service Center is a single management Web console for service operators to create, manage and monitor many customers on common network, storage, and virtualization infrastructure. Fisher says this is what sets the product apart from the competition.

On the enterprise front, Desktone just pushed out rapid service on-boarding, global language support and delegated administration. The goal here is to let enterprise admins upload virtual desktop images to their service provider's infrastructure more quickly – and in as many as 12 languages. Roles and permissions are also more flexible.

Of course, Desktone isn't the only company vying for a slice of the virtual desktop pie. With IDC predicting the market for desktop virtualization software will grow to $1.7 billion by 2011, the competition is heating up. The biggest challenge as Fisher sees it, though, isn't competitors. It's transforming the way users technically consume desktops.

"In the DaaS model, people are consuming desktops from a service provider's cloud," Fisher says. "The question arises, 'Does it make sense to purchase desktop hosting services as a subscription and, if so, at what price point?' So there's a lot of TCO work to convince the market to flip to this model."
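The TCO argument Fisher describes can be sketched in a few lines. This is purely illustrative, with assumed numbers of my own (the PC price, support costs, subscription rate, and refresh cycle are hypothetical, not Desktone or HP figures): a conventionally managed corporate PC versus an entry-level thin client plus a per-seat DaaS subscription over a three-year refresh cycle.

```python
# Hypothetical per-seat TCO sketch: managed corporate PC vs. thin client
# plus DaaS subscription. All figures are illustrative assumptions.

YEARS = 3                      # assumed hardware refresh cycle

pc_hardware = 900.0            # assumed corporate PC purchase price
pc_support_per_year = 400.0    # assumed desk-side support, patching, imaging

thin_client = 199.0            # assumed entry-level thin client price
daas_per_month = 30.0          # assumed per-seat monthly subscription

# PC: buy the box once, then pay support and maintenance every year.
pc_tco = pc_hardware + pc_support_per_year * YEARS

# DaaS: cheap endpoint up front, then a flat subscription per month.
daas_tco = thin_client + daas_per_month * 12 * YEARS

print(f"PC TCO: ${pc_tco:,.0f}  DaaS TCO: ${daas_tco:,.0f} over {YEARS} years")
```

Under these assumptions the managed PC comes to $2,100 per seat against $1,279 for DaaS, but the comparison flips quickly if the subscription price rises, which is exactly the price-point question Fisher raises.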

Desktone, among others, is working to educate the market. In the meantime, the company continues to work on its software offerings. Fisher hinted at a new relationship that would bring critical technology to its platform in the third quarter.

Could it be with IBM?

Maybe Desktone should look at what HP is doing to get all that multimedia and the other mission-critical VDI capabilities to the DaaS cloud, eh? When it comes to IBM and HP, I'd say partner with both.

Freelance IT journalist Jennifer LeClaire provided research and editorial assistance on this BriefingsDirect post. She can be reached at