Wednesday, June 3, 2009

TIBCO takes PaaS plus integration capabilities to global enterprises via their choice of clouds

Combining platform as a service (PaaS) with a built-in ability to integrate and manage hybrid cloud deployments, TIBCO Software today threw its hat in the cloud computing ring by taking its middleware and Java development and deployment platforms to new heights.

Coinciding with the JavaOne conference and coming on the heels of other PaaS announcements this week, TIBCO debuted TIBCO Silver via an online virtual conference. While general availability of Silver is not due until 2010, private beta launches begin this month, as the start of a rolling series of expanding beta launches this year. [Disclosure: TIBCO is a sponsor of BriefingsDirect Analyst Insights Edition podcasts.]

TIBCO's take on PaaS is notable for its emphasis on global 2000 enterprises, its focus on custom business applications, and its largely automated means of integrating cloud applications and processes with on-premises IT resources and data. Silver also aims to appeal to corporate developers via initial support for Java, Spring, POJO, and Ruby. Later, Silver will add support for Python, Perl, C, C++, and .NET. That covers a lot of development territory.

As for deployment, TIBCO is starting out on Amazon EC2, but will make it easy to port applications and services built with Silver to other popular cloud providers' offerings. The goal is to provide a universal middleware and runtime platform layer -- as a service or as a private install -- that can accommodate mainstream corporate developers with the choice of any major cloud and any major tool and framework, said Rourke McNamara, product marketing director at TIBCO.

"ISVs are welcome, but we're focusing on global IT with Silver," said McNamara.

Relying heavily on the ActiveMatrix platform, TIBCO is making Silver "self-aware" for cloud use by largely automating the provisioning and elastic accommodation of the required application support infrastructure. With the emphasis on enterprises, TIBCO is also building in governance, security, and the ability to meter services based on policies and service level agreements (SLAs), the company said.

"Because it is self-aware, TIBCO Silver manages the performance of the application and dynamically deploys additional resources to ensure the SLA is met," said TIBCO, in a release. TIBCO Silver manages the performance of the application and dynamically deploys additional resources to ensure the SLA is met, said the Palo Alto, CA-based supplier.

Pricing and the business model around TIBCO Silver have not been finalized.

What's more, Silver leverages TIBCO's heritage in complex event processing (CEP), SCA composition, BPEL-based orchestration, and SOA governance to enhance the automation of application performance and extensibility, even while running in third-party clouds, said McNamara. TIBCO's CEP engine is embedded in Silver to allow policy-based rules to manage how cloud-based applications are accessed, used, metered, and scaled, based on a variety of use cases and business-process variables.
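TIBCO hasn't published Silver's rule API, so as a hedged illustration only, here is a minimal sketch of the general idea of a policy-based rule reacting to an event stream -- an SLA threshold and a cost cap driving scale-out and scale-in decisions. All names (`MetricEvent`, `ScalingPolicy`, the thresholds) are hypothetical, not Silver's actual interfaces:

```python
from dataclasses import dataclass

# Hypothetical types -- illustrative of CEP-style policy rules, not TIBCO Silver's API.
@dataclass
class MetricEvent:
    app: str
    avg_response_ms: float   # observed average response time
    instances: int           # currently running instances

@dataclass
class ScalingPolicy:
    sla_response_ms: float   # SLA ceiling for average response time
    max_instances: int       # cost cap imposed by the business side

    def decide(self, event: MetricEvent) -> int:
        """Return the desired instance count after evaluating one event."""
        if event.avg_response_ms > self.sla_response_ms:
            # SLA breached: scale out, but never past the cost cap
            return min(event.instances + 1, self.max_instances)
        if event.avg_response_ms < self.sla_response_ms * 0.5 and event.instances > 1:
            # Comfortably under the SLA: scale in to save money
            return event.instances - 1
        return event.instances

policy = ScalingPolicy(sla_response_ms=200.0, max_instances=8)
print(policy.decide(MetricEvent("orders", 350.0, 2)))  # SLA breached -> 3
```

A real CEP engine would evaluate rules like this continuously over windows of events rather than one reading at a time, but the shape -- declarative policy in, scaling decision out -- is the same.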

[UPDATE: At today's live online conference, Matt Quinn, Vice President of Product Management and Strategy at TIBCO, demonstrated using Silver with Ruby on Rails and the automation of cloud deployments.

Subject-based addressing in Silver allows for information flows, like mashups, but extends publish-subscribe into application development. Cool.
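Subject-based addressing here is in the TIBCO Rendezvous tradition: publishers send on dotted subjects and subscribers register patterns, so producers and consumers never address each other directly. As a rough sketch of the concept (not Silver's actual API), a toy in-process bus might look like this:

```python
import fnmatch

class SubjectBus:
    """Tiny in-process bus with dotted subjects and '*' wildcard patterns.
    Illustrative only -- real subject-based messaging matches per token."""
    def __init__(self):
        self.subscriptions = []  # list of (pattern, callback) pairs

    def subscribe(self, pattern, callback):
        self.subscriptions.append((pattern, callback))

    def publish(self, subject, message):
        # Deliver to every subscriber whose pattern matches the subject.
        # fnmatch's '*' matches any run of characters, so 'orders.*'
        # matches both 'orders.eu' and 'orders.us.west' in this sketch.
        for pattern, callback in self.subscriptions:
            if fnmatch.fnmatch(subject, pattern):
                callback(subject, message)

bus = SubjectBus()
seen = []
bus.subscribe("orders.*", lambda subj, msg: seen.append((subj, msg)))
bus.publish("orders.eu", {"id": 42})
bus.publish("inventory.us", {"sku": "A1"})  # no matching subscriber; dropped
print(seen)  # [('orders.eu', {'id': 42})]
```

The decoupling is the point: a new consumer subscribes to a subject pattern and starts receiving flows without any change to the publishers, which is what makes the model attractive for mashup-style composition.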

Silver has three parts: design studio, "intelligent" deployment, and administrator.

Studio takes an app and/or services from model to configuration. It allows for a highly visual approach, or for tool and coding choices based on developer preferences. Composition comes from outside tools, but the linking of components can be automated via Silver.

Governance gets baked in via the Silver Center Console and Administrator, says Quinn. The flow seems pretty seamless through the steps of development, deployment, and configuration.

Quinn wraps up by showing how SOA middleware, when applied to cloud, provides the best of SOA with the economics and rapid application development (RAD) that customers want. SOA plus cloud get the job done.

Werner Vogels, CTO of Amazon.com, also appeared at TIBCO's live event (follow on Twitter at #TIBCOSilver).]

HP tackles a variety of current IT challenges and advances in free three-day 'virtual conference'

Given the current economic downturn and tightening budgets, frugal is good and free is better. Hewlett-Packard (HP) is addressing the new IT value era with a series of complimentary international virtual events that will give IT professionals online access to briefings on business and technology trends from HP executives and outside experts.

Next week, June 8-10, HP will kick off the series with the HP Solutions Virtual Event for Europe, the Middle East, and Africa (EMEA). The three-day session will feature 30 breakout sessions, seminars, presentations and demo theater presentations. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Registration for the event is free and, because it's presented entirely online, there are no travel expenses or out-of-office time involved. Also, the full conference will be available on replay until September. Future virtual events will be offered for Asia Pacific and Japan, June 16-18, and the Americas, July 28-30.

Next week's breakouts will include four main IT themes -- Data Center Transformation, Service Management, Information Management, and Applications and SOA Transformation -- as well as two leadership themes -- green IT and cloud computing. The virtual presentation will also include chat sessions with the many prominent speakers.

Among the topics to be covered are such current concerns as:
  • Data center initiatives

  • Rethinking virtualization

  • IT energy efficiency

  • The new era in service management

  • ITIL v3

  • Surviving in a world of hyper disruption

  • Cloud and your business technology ecosystem

  • Platforms for tomorrow's business technology

The full list of sessions (posted in Central European Time) is available on the virtual event Web site. Participants are free to attend one session, one day, or the entire three-day event. These presentations and knowledge resources are not just for HP users; they make sense for the vast HP partner community and the full ecology of related providers.

The speakers include a who's who of HP technology thought leaders, including many who are familiar to BriefingsDirect readers and listeners. These include John Bennett, Bob Meyer, Rebecca Lawson, Russ Daniels, Lance Knowlton, and Paul Evans, all of whom have appeared in BriefingsDirect podcasts.

For those interested, HP is providing an online demo that can be accessed prior to the event. The demo is available at: http://tsgdemo.veplatform.com/uc/registration-short-form.php

Registration for the event itself is at: http://hpsolutionsforneweconomyvirtualevent.veplatform.com/?mcc=EMEA.

Tuesday, June 2, 2009

Mainframes provide fast-track access to private cloud benefits for enterprises, process ecosystems

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: CA.

Read a full transcript of the interview.

Enterprises are seeking cloud computing efficiency benefits, subsequent lower total costs, and a highly valued ability to better deliver flexible services that support agile business processes.

Turns out so-called private clouds, or those cloud computing models that enterprises deploy and/or control on-premises, have a lot in common with longstanding mainframe computing models and techniques. Back to the future, you might say.

New developments in mainframe automation and other technologies increasingly support the use of mainframes for delivering cloud-computing advantages -- and help accelerate the ability to solve recession-era computing challenges around cost, power, energy use and reliability.

More evidence of the alignment between mainframes, mainframe automation and management, and cloud computing comes with today's announcement that CA has purchased key assets of Cassatt Corp., maker of service level automation and service level agreement (SLA) management software.

I had the pleasure to recently learn more about how the mainframe is in many respects the cloud in a sponsored podcast interview with Chris O'Malley, executive vice president and general manager for CA's Mainframe Business Unit.

Here are some excerpts:
Gardner: What makes cloud so appealing and feasible right now?

O'Malley: Cloud as a concept is, in its most basic sense, virtualizing resources within the data center to gain that scale of efficiency and optimization. ... Physically there are many, many servers that support the ongoing operations of a business. CFOs and CEOs are starting to ask simple, but insightful, questions about why we need all these servers and to what degree these servers are being utilized.

When they get answers back and it's something like 15, 10, or 5 percent utilization, it begs for a solution to the problem to start bringing a scale of virtualization to optimize the overall data center to what has been done on the mainframe for years and years.

... It's about both the need from a business standpoint of trying to respond to reduced cost of computing and increased efficiency at a time when the technologies are becoming increasingly available to customers to manage distributed environments or open systems in a way similar to the mainframe.

Larger customers are using their mainframe in a highly virtualized way. They've been doing it for 30 years. It was the genesis of the platform. ... They try to get as much out of it as they possibly can. So, from its beginning, it was virtualized.

The viability of things like salesforce.com, CRM, and the need to coordinate that data with what for most customers is 80 percent of their mission-critical information residing on the mainframe is making people figure out how to fix those problems. It's making this cloud slowly, but pragmatically, come true and become a reality in helping to better support their businesses.

The distributed environment and the open-system environment, in terms of its genesis, was the reverse of what I described in the mainframe. The mainframe, at some point, I think in the early '90s, was considered to be too slow to evolve to meet the needs of business. You heard things like mounting backlog and that innovation wasn't coming to play.

In that frustration, departments wanted their server with their application to serve their needs. It created a significant base of islands, if you will, within the enterprise that led to these scenarios where people are running servers at 15, 10, or 5 percent utilization. That genesis has been the basic fiber of the way people think in most of these organizations.

This 15 or 10 percent utilization is what we consistently see, customer after customer after customer. ... You're seeing the pendulum come back. This is just getting too expensive, too complex, and too hard to keep up with business demands, which sounds a lot like what people's objections were about the mainframe 20 years ago. We're now seeing that maybe a centralized model is a better way to serve our needs.

Gardner: How does that relate to where the modern mainframe is?

O'Malley: The modern mainframe is effectively an on-demand engine. IBM has created now an infrastructure that, as your needs grow, turns on additional engines that are already housed in the box. With the z10, IBM has a platform that is effectively an in-house utility ... With the z10 and the ability to expand capacity on demand, it's very attractive for customers to handle these peaks, but not pay for it all year long.

... The mainframe has always been very good at resilience from a security standpoint. The attributes that make up that which is required for a mission-critical application are basically what make your brand. So, the mainframe has always been the home for those kinds of things. It will continue to be.

We're just making the economics better over time. The attributes that are professed or promised for the cloud on the distributed side are being realized today by many mainframe customers and are doing great work. It's not just a hope or a promise.

Gardner: There is some disconnect, though, cultural and even generational. A lot of the younger folks brought up with the Web, think of cloud applications as being Web applications.

O'Malley: Despite all these good things that I've said about the mainframe, there are still some nagging issues. The people who tend to work on them tend to be the same ones who worked on them 30 years ago. The technology that wraps it hasn't been updated to the more intuitive interfaces that you're talking about.

CA is taking a lead in re-engineering our toolset to look more like a Mac than it does like a green screen. We have a brand new strategy called Mainframe 2.0, which we introduced at CA World last year. We're showing initial deliverables of that technology here in May.

... Our first technology within Mainframe 2.0, is called the Mainframe Software Manager. It's effectively InstallShield for the mainframe. We developed that with 20-somethings. In our Prague data center, we recruited 120 students out of school and they developed that in Java on a mainframe. ... We have 25-year-old people in Prague that have written lines of code that, within the next 12 months, we'll be running at the top 1,000 companies on the face of the earth. There aren't a lot of jobs in life that present you that kind of opportunity.

... The mainframe technologically can do a lot, if not everything you can do on the distributed side, especially with what z/Linux offers. But, we've got to take what is a trillion dollars of investment that runs in the legacy virtual operating system environment and bring that up to 2009 and beyond.

... An open system has its virtues and has its limits. We're raising the abstraction to the point where, in a collective cloud, you're just going to use what's best and right for the nature of the work you're doing, without really even knowing whether it's a mainframe application -- either in z/OS or z/Linux -- or Linux on the open-system side, or HP-UX. That's where things are going. At that point, the cloud becomes true in the promise where it's being touted at the moment.

To be very honest, it's very important that we bring a cool factor to the mainframe to make it a platform that's equally compelling to any other. When you do that, you create some interesting dynamics to getting the next generation excited about it.
Read a full transcript of the interview.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: CA.

LongJump eyes fully portable, best-of-breed PaaS holy grail for ISVs going to the 'open' cloud

Even The Economist newspaper is worried about cloud lock-in. And a lot of people talk about open clouds, but not many necessarily do anything about it.

LongJump's announcement today of an enhanced platform-as-a-service (PaaS) offering -- the LongJump Business Applications Platform -- is a strong contender for a best-of-breed cloud computing development and deployment approach that reduces the risk of cloud lock-in.

Designed with independent software vendors (ISVs) that want to go to the cloud in mind, LongJump's PaaS 6.2 version allows applications to be deployed almost anywhere -- on Amazon, Rackspace, on an enterprise data center, or any standards-based runtime stack, says Pankaj Malviya, Founder and CEO of LongJump.

That's flexible deployment among public clouds, private clouds or both (with an ability to manage among them to come in a future release, I'd wager). LongJump’s extensible PaaS gives organizations options in hosting environments including third-party clouds such as Amazon EC2, or a private cloud safely tucked behind the company’s own firewall.

LongJump won't be alone in seeking the holy grail of a truly open, portable, extensible, and neutral cloud PaaS approach, but it has my attention. We'll need to keep an eye on Salesforce, TIBCO Software, IBM, and Oracle/Sun on the topic.

For now, LongJump's one-size-fits-all model (to build with Java, Ajax, SOAP, REST, Eclipse) helps ISVs, businesses and developers as they seek a sleek path to cloud-based custom applications. The tools also provide a common way to build thin, fit and fat app UIs.

And integrated modeling, workflow, and rules capabilities allow the applications to behave as service components, as parts of extended business processes. Cool.

LongJump, based in Sunnyvale, CA, unveiled the latest version of its LongJump Business Applications Platform at JavaOne in San Francisco. LongJump argues that the “PaaS approach should not dictate loss of control.” Hard to argue with that.

The platform comes with a complete customer relationship management (CRM) solution and also includes a catalog of customizable business applications. Custom development can be done using Java classes, or via visual workflows. The tools support object inheritance across a variety of types.

The catalog offers out-of-the-box but customizable apps for things like relational data management and analysis, form-based applications, resource allocation and management, project fulfillment, approvals and workflow.

LongJump provides “building blocks” of common processes and functions that developers can use to build custom apps to meet the specific needs of their vertical market, without having to reinvent the wheel.

This reuse approach might once have been called service-oriented architecture (SOA) but in the post-death-of-SOA world, the acronym is not mentioned in the LongJump announcement. Last week Malviya told me, however, this is all "built on SOA." That makes me feel better, and it should you, too.

The “extensible PaaS” approach “offers developers and businesses a high level of control, customization and extensibility,” according to the company. It envisions enterprises using its PaaS product to create industry standards-based cloud apps that include integration of legacy application data.

The pitch for ISVs positions LongJump’s PaaS for developing new software-as-a-service (SaaS) products. Where is Microsoft in this space? Still in the starting blocks ... and we still don't know how portable the code will be, or whether Azure will support any variety of runtimes. Unlikely but essential, if you ask me. (And you'd think Microsoft would want to attract more than VB developers!)

For enterprise customers the new features support the private cloud approach that may match the comfort level corporate IT departments will have with cloud computing. The focus is on providing the customer with control based on its business rules, rather than ceding power to a PaaS or cloud vendor. Developers are free to create unique applications unfettered by restrictions on deployment options, or branding.

New features from LongJump, which may provide a security blanket for organizations taking a first step on the PaaS path, include:
  • Secure platform for compliance-sensitive applications (21 CFR Part 11)

  • Digital signature support, including multiple hierarchical signature blocks

  • Field-level change tracking and auditing, and record-level event-driven snapshots

This flexibility was a selling point for the David Allen Company, a professional training, coaching, and management consulting organization based in Ojai, CA, which selected LongJump over better-known PaaS vendors. Robert Peake, CIO at David Allen, said that after evaluating PaaS vendors, LongJump offered “the most flexible application platform” for his company’s unique requirements.

LongJump also caught the eye of Gartner analysts, who deemed it a “Cool Vendor” in an April report on Cloud Computing System and Application Infrastructure, albeit with a caveat that it was the company’s unique approach rather than a detailed product evaluation that warranted the listing.

Part of the LongJump news at JavaOne is its support for Sun Microsystems’ – soon to be Oracle’s – MySQL database.

For those interested, LongJump and MySQL are hosting a free webinar, “Developing and Deploying SaaS Applications with MySQL and LongJump” on Thursday, June 11, at 4 p.m. EDT, 1 p.m. PDT. Information and registration is available at http://www.mysql.com/news-and-events/web-seminars/display-354.html

LongJump is a service of Relationals Inc., a privately held provider of on-demand CRM and sales force automation (SFA) applications with more than 400 enterprise customers.

This company is hot ... in the right place, with the right functions, at the right time. I can think of several suitors that would do well to jump-start their own cloud strategies with such a PaaS solution.

BriefingsDirect contributor Rich Seeley provides research and editorial assistance to BriefingsDirect. He can be reached at Writer4Hire.

Monday, June 1, 2009

Dana Gardner interviews Forrester's Frank Gillett on future of mission-critical cloud computing

Watch the video. Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Akamai Technologies.

Read a full transcript of the discussion.

The impact of cloud computing is most often analyzed through its expected disruption of IT vendors, or the media, or as an economic balm for developers and Web 2.0 start-ups.

Yet cloud computing is much more than just a newcomer on the Internet hype curve. The heritage of what cloud computing represents dates back to the dawn of information technology (IT), to the very beginnings of how government agencies and large commercial enterprises first accessed powerful computers to solve complex problems.

We've certainly heard a lot about the latest vision for cloud computing and what it can do for the delivery of applications, services and infrastructure, and for application development and deployment efficiencies. So how does cloud computing fit into the whole journey of the last 35 years of IT? What is the context of cloud computing in the real-world enterprise? How do we take the vision and apply it to today's enterprise concerns and requirements?

To answer these questions, we need to look at the more mundane IT requirements of security, reliability, management, and the need for integration across multiple instances of cloud services. To help understand the difference between the reality and the vision for cloud computing, I recently interviewed Frank Gillett, vice president and principal analyst for general cloud computing topics and issues at Forrester Research.

You can also watch the interview as a series of video streams at http://www.akamai.com/cloud, or read a full transcript. The discussion is sponsored by Akamai Technologies.

Here are some excerpts:
Gardner: You know, Frank, the whole notion of cloud computing isn't terribly new. I think it's more of a progression.

Gillett: When I talk to folks in the industry, the old timers look at me and say, "Oh, time-sharing!" For some folks this idea, just like virtualization, harkens back to the dawn of the computer industry and things they've seen before. ... We didn't think of them as cloud, per se, because cloud was just this funny sketch on a white board that people used to say, "Well, things go into the network, magic happens, and something cool comes from somewhere."

So broadly speaking, software as a service (SaaS) is a finished service that end users take in. Platform as a service (PaaS) is not for end users, but for developers. ... Some developers want more control at a lower level, right? They do want to get into the operating system. They want to understand the relationship among the different operating systems instances and some of the storage architecture.

At that layer, you're talking about infrastructure as a service (IaaS), where I'm dealing with virtual servers, virtualized storage, and virtual networks. I'm still sharing infrastructure, but at a lower level in the infrastructure. But, I'm still not nailed to this specific hardware the way you are in say a hosting or outsourcing setup.

Gardner: We're in the opening innings of cloud computing?

Gillett: A lot of the noisy early adopters are start-ups that are very present on the Web, social media, blogs, and stuff like that. Interestingly, the bigger the company the more likely they are to be doing it, despite the hype that the small companies will go first.

... It doesn't necessarily mean that your typical enterprise is doing it, and, if they are, it's probably the developers, and it's probably Web-oriented stuff. ... In the infrastructure layer, it's really workloads like test and development, special computation, and things like that, where people are experimenting with it. But, you have to look at your developers, because often it's not the infrastructure guys who are doing this. It's the developers.

It's the people writing code that say, “It takes too long to get infrastructure guys to set up a server, configure the network, apportion the storage, and all that stuff. I'll just go do it over here at the service provider."

... There is no one thing called "cloud," and therefore, there is no one owner in the enterprise. What we find is that, if you are talking about SaaS, business owners are the ones who are often specing this.

Gardner: Who is the "one throat to choke" if something goes wrong?

Gillett: Bottom line, there isn't one, because there is no one thing. ... They are on their own within the company. They have to manage the service providers, but there is this thing called the network that's between them and the service providers.

It's not going to be as simple as just going to your network provider, the Internet service provider, and saying, "Make sure my network stays up." This is about understanding and thinking about the performance of the network end to end, the public network -- much harder to control than understanding what goes on within the company.

This is where you have to couple looking at your Internet or network service provider with the set of offerings out there for content and application acceleration. What you're really looking for is comprehensive help in understanding how the Internet works, how to deal with limitations of geography and the physics, the speed of light, making sure that you are distributing the applications correctly over the network -- the ones that you control and architect -- and understanding how to work with the network to interact with various cloud-service providers you're using across the network.

... Even though you can't get the uber "one throat to choke," at the network layer you can go for a more comprehensive view of the application and the performance of the network, which is now becoming a critical part of your business process. You depend on these service providers of various stripes scattered across the Internet.

If you take the notion of service-oriented architecture (SOA), and explode it across the public network, now you need sort of the equivalent of the internal network operation center, but you need help from an outside provider, and there's a spectrum of them obviously to do that.

When you're asking about governance, the governance of the network is really important to get right and to get help with. There is no way for an individual company to try and manage all that themselves, because they are not in the public network themselves.

... I spoke to a luxury goods and perfume maker that had a public website with transactions, as well as content, on their website. I said, "How many servers does it take to run your transactions?" And they said it only takes four, and that includes the two redundant ones. "Oh, really? That's all?" They said, "Well, not really. Three quarters of my workload is with my application and content acceleration provider. They take care of three quarters of my headache. They make it all work." So, that's a great example.

Gardner: What seems to be missing in this notion of trust, governance, and reliability?

Gillett: There's no such thing as "the" cloud provider, or one cloud provider. Part of the complication for IT is, not only do they have multiple parties within the company, which has always been a struggle, as they get into this, they're going to find themselves dealing with multiple providers on the outside.

So, maybe you've got the services still in your IT as an infrastructure. You've got your internal capability. Then, you've got an application, SaaS, and perhaps PaaS, and a business process that somehow stitches all four of those things together. Each one has its own internal complexities and all of it's running over the public network ... So, it's really challenging.

Gardner: How can we integrate across different sets of services from different providers and put them in the context of a business process?

Gillett: If you look at it, a lot of these concepts are embodied in the whole set of ideas around SOA, that everything is manifested as services, and it's all loosely coupled, and they can work together. Well, that works great, as long as you've got good governance over those different services, and you've got the right sort of security on them, the authentication and permissions, and you found the right balance of designing for reuse, versus efficiently getting things done.

... But, as you're hinting, I have to think about how I make that business process work, making sure that it works over the Internet. What do I do if that service provider hiccups, or a backhoe cuts a fiber-optic cable between me and the service provider?

Now, I'm becoming more dependent on the public Internet infrastructure, once I'm tying into these service providers and tying into multiple parties. Like a lot of things in technology, unless you're going to completely turn over everything to an outside service provider, which sounds like traditional outsourcing to me, the "one throat to choke" is your own. ... If you think about it, it's not that different than when I ran all the infrastructure on my own premises, because I had gear and applications from different parties, and, at the end of day, it was up to me to referee these folks and get them to work together.

... So I look at this, and I say, "Okay, we've got a decade here to sort this out." It's a completely different problem, by the way, to think about how I take the existing applications I run inside my company, and think about migrating them to a service provider.

Gardner: We have a track record of organizations saying, "Listen, I don't want to be in the commodity applications business. I want to specialize in what's going to differentiate me as an enterprise. I don't want to have everyone recreating the same application instance."

Gillett: ... You said, "Cloud is about commoditizing IT, and only things that aren't differentiating leave my company." Not true. ... Cloud services can handle mission-critical workloads, things that differentiate you. In fact, that might only be possible if you do them in a service provider.

... Let me give you an example. Let's say that your business has critical calculations to run overnight, say, for ad placement on websites. Let's say that soaks up huge amounts of computing capacity when you run the workload at night, but sits idle during the day. ... Guess what? That's one of the workloads that runs on Amazon's EC2 IaaS or "compute as a service."

In that case, it's more cost effective and more flexible for them to run it with the service provider, even though it's mission critical. It's a more effective use of resources.

Now, let's flip it around the other way. Take a provider that does streaming of public websites of media. You go to the website of a major newspaper or a television network and you want to see their video. This provider helps with that on the back-end. What they found, when they looked at their internal infrastructure, was that they felt they were cheaper than Amazon at running their core infrastructure.

Amazon looked like a nice source of extra capacity on top, so they wouldn't have to over-provision as much. Amazon also looked like a great way to add capacity in new regions before they got the critical mass to do it cost effectively themselves in that region. Those are two examples of the non-intuitive ways to think about this.

... It feels like we are in a cloud hype bubble right now. All the hype and noise is still on the upswing, but we are going to see this subside and calm down late this year or next year. This is not to say that the ideas aren't good. It's just that it will take a significant amount of time to sort things out: for the offerings to mature, for the early adopters to get in, then the mainstream folks, and finally the laggards. It's only as we get deeper into it that we even begin to understand the governance ideas.

... You start thinking about how to distribute application logic, to create fast response, good business service levels and things like that, despite the fact that you think, "We're just selling one thing and all that has to come back to a central database." Not necessarily. So, you really start to think about that. You think about how to prioritize things across the network: this is more important than that. All of it is basically fighting the laws of physics, the speed of light, and all sorts of computational constraints.

It's also trying to figure out the most cost-effective way to do it. Part of what we're seeing is the development and progression of an industry that's trying to figure out how to most cost-effectively deliver something. Over time we'll see changes in the financial structures of the various service providers, Internet, software or whatever, as they try to find the right way to most cost-efficiently deliver these capabilities.

Gardner: How do enterprises put themselves in position to take advantage of cloud sooner rather than later, and perhaps gain a competitive advantage as a result?

Gillett: ... One of the things that we're telling our infrastructure and operations guys is to get in early, ahead of the developers. Don't let them run willy-nilly and pick a bunch of services. Work with the enterprise architect, the IT architect, to identify some services that fit your security and compliance requirements. Then, tell the developers, "Okay. Here are the approved ones that you can go play with, and here's how we're going to integrate them."

So, proactively, get out in front of these people experimenting with their credit cards, even if it's uncomfortable for you. Get in early on the governance. Don't let that one run away from you.
Read a full transcript of the discussion.

Watch the video. Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Akamai Technologies.

Virtual desktop services gain TCO boost with HP clients, Desktone cloud-based 'DaaS' advances

HP and Desktone have made strong market moves into the fast-growing desktops as a service (DaaS) space, also known as desktop virtualization and virtual desktop infrastructure (VDI).

These new products and services show an aggressive ramp-up and sophistication to deliver a fast-track to DaaS. HP and Desktone, among others, are banking on the down economy to whet the appetites of many kinds of companies -- and myriad uses -- for these far lower cost approaches for delivering full PC functionality without the full PC and local maintenance headaches.

HP last week announced a novel notebook-like thin client device, lowering the entry-level price on mobile units to $550 (yowza!), along with better WiFi support and enhanced VDI client and server software. The 4410T mobile thin client, which looks just like a notebook PC, runs Windows Embedded Standard.

As the global VDI market leader, HP also updated its desktop thin client lineup with the T5540, which starts at $199 and features an easy update and configuration capability, low energy use, choice of hypervisor (VMware or Citrix Xen), and fuller multimedia support. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

The delivery of these new client devices comes on the heels of HP's recent advances in the data center software that supports them. That helps explain the emphasis in the latest round of client announcements on faster and more complete start-ups, as well as better remote administration.

Desktone, after teaming with NetApp earlier this month, last week announced two significant milestones. First, Desktone secured registered trademarks for the terms "desktops as a service" and "DaaS." Second, it launched improvements to Virtual-D, its hosted virtual desktop platform.

Hmmm ... Seems to me I dropped that DaaS moniker on the Chelmsford, MA start-up almost two years ago. They seemed to like it a lot then. And now I'm sincerely flattered. Well, there goes some more free consulting -- validated, if not always remunerative.

Of course, if they had asked, I would have told them to add the word "cloud," as in "cloud-based DaaS," somewhere in there. It leaves open the public and private cloud delivery of DaaS for either service providers or large enterprises (or a federated approach of some kind). Oh, well, maybe Desktone's customers will use "cloud" as their brand on top of Desktone's DaaS picks and shovels.

Before we dive into the improvements, let's look at Desktone's newly minted, official definition of DaaS: DaaS specifically describes an outsourced subscription service for server-hosted desktops powered by the Desktone Virtual-D Platform and delivered by Desktone-certified service provider partners.

More generally speaking, DaaS is a model that in many ways mimics software-as-a-service (SaaS), where software is outsourced to providers that offer applications on-demand as a subscription service. Desktone's DaaS lets PC users tap into virtual desktop computing with a Windows client experience that leverages existing data center infrastructure and network investments. Yep, sounds cloud-like all right.

"There's been a lot of confusion in the marketplace about what DaaS really is," says Jeff Fisher, senior director of strategic development at Desktone. "Other vendors use the term differently, but we've trademarked these terms to describe the virtual desktop hosting infrastructure that allows our partners to deliver DaaS."

Yep, could not have said it better myself, even if I already did. Of course, this probably means that Desktone alone will use DaaS from now on. It will be interesting to see if Citrix (a Desktone investor) and Microsoft (in the ecology, you could say) will begin using "DaaS" in their literature.

So now that we know what Desktone means when it talks about DaaS, let's look at the "new and improved" Virtual-D. There are four new capabilities for service providers, including multi-tenancy, multi-data center support, and improved hosting economics. But perhaps the most attractive upgrade for its customers is the Virtual-D Service Center.

Virtual-D Service Center is a single management Web console for service operators to create, manage and monitor many customers on common network, storage, and virtualization infrastructure. Fisher says this is what sets the product apart from the competition.

On the enterprise front, Desktone just pushed out rapid service on-boarding, global language support and delegated administration. The goal here is to let enterprise admins upload virtual desktop images to their service provider's infrastructure more quickly – and in as many as 12 languages. Roles and permissions are also more flexible.

Of course, Desktone isn't the only company vying for a slice of the virtual desktop pie. With IDC predicting the market for desktop virtualization software will grow to $1.7 billion by 2011, the competition is heating up. The biggest challenge as Fisher sees it, though, isn't competitors. It's transforming the way users technically consume desktops.

"In the DaaS model, people are consuming desktops from a service provider's cloud," Fisher says. "The question arises, 'Does it make sense to purchase desktop hosting services as a subscription and, if so, at what price point?' So there's a lot of TCO work to convince the market to flip to this model."

Desktone, among others, is working to educate the market. In the meantime, the company is continuing to work on its software offerings. Fisher hinted at a new relationship that would bring critical technology to its platform in the third quarter.

Could it be with IBM?

Maybe Desktone should look at what HP is doing to get all that multimedia and the other mission-critical VDI capabilities to the DaaS cloud, eh? When it comes to IBM and HP, I'd say partner with both.

Freelance IT journalist Jennifer LeClaire provided research and editorial assistance on this BriefingsDirect post. She can be reached at www.jenniferleclaire.com.

Tuesday, May 26, 2009

Fax proves a lingering business process communications point in need of automation

This guest post comes courtesy of David A. Kelly at Upside Research, where he’s principal analyst. You can reach him here.

Sometimes, no matter how much some things change, others stay the same. Take the example of the fax machine.

The first fax patent was issued in 1843 to a Scottish inventor, Alexander Bain, who created a line-by-line scanning mechanism based on the idea of a clock’s pendulum. Throughout the early 1900s different forms of fax machines provided ways to transmit and reproduce images and markings from one location to another. But it wasn’t until the 1970s and 1980s that the modern versions of fax machines really took hold of the business world.

And while many companies have since moved on to computers, email, scanning, the Internet, and other new technologies, there are hundreds of thousands (or millions) of fax machines still out there globally -- churning away, sending and receiving a substantial number of important business documents, even in spite of all those e-commerce initiatives.

For example, plenty of organizations still have fax machines handling inbound sales orders or vendor invoices. In many cases, e-commerce initiatives simply don’t apply -- the specific business partners may be too small to warrant conversion or the infrastructure may not support changes. Or perhaps for some situations, the fax machine is still a perfectly acceptable solution. Well, perhaps “handling” is too strong a word -- in many cases, it’s more like receiving or sending and leaving the rest up to manual processes of managing the flow of faxes and information coming from them.

Perhaps now that we’re almost to 2010 and the start of a new decade, it’s time for retro-technology. If it’s working for cereal giants like General Mills, it might work for technology.

That’s why we’ve been particularly interested to come across Esker and its DeliveryWare solution. While it’s not exactly bringing out a line of cool-looking breakfast cereals, what it’s aiming for may be just as useful to a growing company as vitamin-enriched foods can be to growing kids.

Esker DeliveryWare (get a free Upside Research report) helps automate a wide range of document-based processes such as accounts payable, sales order processing, purchasing and customer invoicing. The product works best in enterprises that have a significant volume of document handling tasks that can benefit from automation. [Find other Upside Research product briefs.]

Esker’s products support enterprises that have enterprise resource planning (ERP) systems such as SAP or Oracle E-Business Suite, helping them eliminate the resource-intensive steps in order-to-cash or procure-to-pay processes that involve printing orders or invoices, walking them to another point, and re-keying the data.

Despite the automation that enterprise resource planning systems provide, there are still gaps in coverage, especially when processes go beyond the corporate walls. This is often a pain point for organizations, because they must meld manual processes with automation, and the results can be resource-intensive, error-prone, and inefficient.

For example, taking an order from inside an ERP system, printing it out, walking it across to another system, and re-keying the data is not an optimal environment for business operations, and yet it is often the reality in many companies.

Esker’s DeliveryWare solution is part of a category of tools that seek to remedy such manual processes. Document process automation is an important part of increasing efficiency and effectiveness with processes throughout the enterprise. Despite the economic downturn, document process automation provides a bright spot because organizations see immediate cost-savings and bottom-line impact from implementing such a solution.

Esker has the advantage of offering on-premises or on-demand services, enabling an even lower initial investment to start gaining efficiency and reducing the costs and errors associated with manual document routing and processing.

Esker has carved out a specific niche in the broader business process management (BPM) and modeling space, and as a result has a very focused, and apparently successful, business model. I have a feeling, with solutions like this, that we may not see the end of the fax machine until the 22nd century.

This guest post comes courtesy of David A. Kelly at Upside Research, where he’s principal analyst. You can reach him here.

Thursday, May 21, 2009

BriefingsDirect analysts take pulse of newest era in IT: Corporate flat line or next Renaissance?

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Read a full transcript of the discussion.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Welcome to the latest BriefingsDirect Analyst Insights Edition podcast, Vol. 41. Our latest discussion centers on the next era of information technology (IT). Suddenly, cloud computing is the dominant buzzword of the day, but the current confluence of trends includes much more.

There's business process management (BPM), business intelligence (BI), complex event processing (CEP), service-oriented architecture (SOA), software as a service (SaaS), Web-oriented architecture (WOA), and even Enterprise 2.0.

How do all of these relate? Or if they don't relate, is there a common theme? Is there an overriding uber direction for IT that we need to consider?

The cloud computing moniker just doesn't include enough and doesn't bring us to the next stage. In the words of Huey Lewis, we need a "new drug."

So join our panel of analysts to help dig into this current and budding new era of IT: Jim Kobielus, senior analyst at Forrester Research; Tony Baer, senior analyst at Ovum; Brad Shimmin, senior analyst, Current Analysis; Joe McKendrick, independent analyst and ZDNet blogger, and Ron Schmelzer, senior analyst at ZapThink. The chat is moderated by me, as usual.

Here are some excerpts:
Gardner: Are we oversimplifying what's going on in IT by just calling everything new that's going on cloud computing?

Kobielus: Of course ... There is just too much stuff, too much complexity, too many themes, and too many paths for evolution and innovation.

Shimmin: ... The postmodern IT world is perhaps what we're living in. Maybe that's okay, where there is no really overriding sort of thematic vision to IT. ... My attempt with this is to describe just the zeitgeist I've seen over the last year or so, and that is just to call things ... "transparent computing."

IT resources and business solutions are becoming more visible to us. We're able to better measure them. We're able to better assess their cost-to-value ratio. At the same time, the physicality of those resources and the things that we call a business are becoming much more transparent to us and much more ethereal, in terms of being sucked into Amazon EC2, for example. ... Application programming interfaces (APIs) have made things much more transparent than they were.

McKendrick: Perhaps computing has become so ubiquitous in our everyday lives and our everyday work that it no longer needs to carry a name. We don't call this era the "telephone era" or the "television era." For that matter, we don't call it the "space age" anymore. The novelty and the newness of all this has worn off.

Computing is such an everyday thing that folks understand. At the same time, the IT folks are beginning to understand the business a little bit better and we're seeing those two worlds being brought together and blending.

Schmelzer: We could say that we're still floating through the information era, but ... I'm going to bring back the drug theme here. We like to self-medicate in IT. We have these chronic problems that we seem to be continuously trying to solve.

They're the same problems -- getting systems to talk to each other, to extract information, and to make it all work. We try one drug after the other and they provide these short-term fixes. Then, there's the inevitable crash afterward, and we just never seem to solve the underlying problem.

Gardner: Is this really a psychological shift then? Do we need to stop thinking about how technology is shifting and think about how people are shifting? I think that people are acting differently than they used to.

Schmelzer: ... There is a digital divide, and I'm not talking about the parts of the country that have more IT than the other. I'm talking about the experience at home and the experience at work.

When I step into work, I'm turning the clock back 10 years. I have this wonderful, rich IT environment on my own at home and on my phone. Then, I see these enterprise IT systems that had very little in the way of influence from any of these movements of the last 10 years. It's like the enterprise IT environment is starting to lag quite a bit behind the personal IT environment.

... If we had to do it all over again, would we really be building enterprise IT systems -- or would we be doing it the way Google is doing it? Google would just be laughing at us and saying, "What are you doing putting in these mainframes and these large enterprise applications that take X millions of dollars and multiple years and you only achieve 10 percent of your goals and only use 5 percent of the system you just built? That's just hilarious."

Kobielus: What you're hitting on is that there is this disconnect between what we can get on our own for ourselves and what our employer provisions for us. That causes frustration. That causes us to want to bolt, defect from an employer who doesn't empower us up to the level that we absolutely demand and expect.

Shimmin: This is representative of what I'm seeing in my area of research, which is in collaboration, social computing, that stuff. Most of the vendors have got the traditional, on-premise software, and they're all putting it in the cloud.

They're also saying to me, in their go-to market schemes, "We're trying to take IT out of the picture, at least at the outset." They're seeing IT as a roadblock to getting these technologies into the enterprise. The [business] people in the enterprise realize they want it. The worker bees in IT realize it, but IT's hands are strapped.

Schmelzer: You ask people, "Well, do you want a 42-inch plasma television in your house? Do you want TiVo? Do you want the latest MacBook and the latest iPhone?" Something like 90 percent of the people are going to say, "Yes." They want the GPS. They want all that stuff.

So what is it about enterprise IT? It's not the technology that they're blocking. It's this complexity. And it's not just the complexity. It's this perception that enterprise IT is a nonconstructive hassle. So they look at Google and they think, "Ah, constructive, productive." They look at enterprise IT, and they think, "Barrier, bottleneck."

McKendrick: ... When it first became apparent that GM and Chrysler were on the skids, Andrew McAfee of Harvard posted this proposal to help these companies. If he were given the option to rebuild one of these companies from the ground up, he would go in with a very strong social networking system, enabling the folks that are working on the front lines, assembly, production, sales, marketing, and so forth to communicate with each other in real time, on a regular basis, to find out what everybody is doing, and to build the base of knowledge to move the company forward.

Gardner: So, we don't have a generation gap. We have a corporation gap. The corporations have a huge burden of trying to move and do anything, whereas individuals or small companies or people that are aligned by their social networks can move swiftly.

Kobielus: ... Auto companies of necessity are chained to platforms. It's the basic chassis and design and the internal guts in terms of the transmission and engines and so forth for a wide range of models. When they make a commitment to a given platform, they're stuck with it.

Gardner: Well, the same can be said for your enterprise IT department, right?

Kobielus: Yes ... When some cheaper, more lightweight solution, maybe in the cloud, comes along, the users can get it quickly and more cheaply. It's essentially mocking the investments that this company has made. You spent millions of dollars on something that you could have gotten in the cloud for pennies per hour. That's a disruptive force in IT.

Shimmin: IT's challenge is to be able to allow those [changes] to happen and to encourage them to happen without locking them down, controlling them, and destroying their ability to make people in the enterprise more productive and flexible.

Gardner: The technology needs to be there, but perhaps doesn't need to be visible. The transparent notion that Brad has makes sense, or maybe we need to be the "post-IT era." The IT has to be there, but under the covers, convenience and information become essential, along with the ability of people to act on it.

Schmelzer: ... Really what we're doing is empowering individuals within the organization to have greater control over their use and provisioning of IT capabilities.

They're shifting it away from these central oligarchies of enterprise systems that have had way too much control and way too little flexibility. ... Technology is making information accessible to all, for all to leverage.

I like these populist movements in IT. Once again, just remember your IT experience at home and how much you would wish it would be in your work environment.

Gardner: So, perhaps technology, habits, and the cloud are shifting sovereignty away from countries, companies, and even groups based on geography. ... Having power is now shifting down to amorphous groups and even individuals.

... It's interesting. Just as we're embracing cloud, we're also seeing that, if you have a couple of mainframes, you can create a cloud. You could provide services out to a public constituency, or you could take your old mainframe inside the enterprise and put some new hubcaps on it.

Schmelzer: ... That's the irony of it. In order to get reuse, which is what people talk about all the time, you have to have legacy. Just think, if you're never keeping anything around long enough, you're never going to get reuse. But, having legacy doesn't necessarily mean also not spending a lot on new things, which is the weirdness of it. Why is it that we're soaking up so much of the IT budget on legacy, if we're not creating anything new?

There's something dysfunctional in the way that we're procuring IT that's preventing us from getting the primary benefit of legacy, which is extracting additional value from an existing investment, so that we can make the old dog learn new tricks and get new capabilities provisioned on a cloud, without having to invest a huge amount in infrastructure.

Shimmin: To me, whether it's mainframe or a bunch of PCs on Google's data center doesn't matter. What matters is what it does. If we're able to make our existing mainframes do new tricks, that's really great, because it allows us to make use of investments we've already made.

That's why, when I look at things like SaaS, I see it being more beneficial to the vendors who are providing those services than to the customers using them. Instead of having something they can depreciate over time, they just have to pay it out every month like a telephone bill. You don't ever own your services -- you're just paying for them, like leasing a car versus owning a car.
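Shimmin's leasing-versus-owning analogy can be made concrete with a simple break-even calculation. All the dollar figures here are hypothetical illustrations, not real pricing for any vendor:

```python
# Illustrative break-even comparison of buying software outright
# (a depreciable asset plus maintenance) versus paying a SaaS
# subscription forever. All numbers are assumed for illustration.

def cumulative_owned(months, purchase_price=36000.0, monthly_upkeep=200.0):
    """One up-front payment, then a small ongoing maintenance cost."""
    return purchase_price + months * monthly_upkeep

def cumulative_subscribed(months, monthly_fee=1500.0):
    """No asset to depreciate; you pay every month, like a phone bill."""
    return months * monthly_fee

def breakeven_month():
    """First month at which owning becomes cheaper than subscribing."""
    m = 1
    while cumulative_owned(m) > cumulative_subscribed(m):
        m += 1
    return m

print(breakeven_month())  # the crossover point under these assumptions
```

Under these assumed figures the subscription is cheaper for a while but overtakes the purchase price after a couple of years, which is why the subscription model tends to favor the vendor over the long-lived customer.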

Baer: Theoretically, if the cloud is done right, and if we use all the right enabling underlying architectures and technologies, we should be able to get the best of both worlds.

Gardner: I think I've come up with a word for us. If we look at what happened perhaps 500 or 600 years ago, there was a collective word that came to represent it. It was called Renaissance.

Are we perhaps at a point where there is a renaissance from IT? Even though we thought we were enabled or empowered, we really weren't. Even though we thought that centralized and lock-down was best, it wasn't necessarily. But it wasn't until you got the best of all worlds that you were able to create an IT-enabled Renaissance, which of course cut across culture and language, individuals, even the self-perception of individuals and collectively.

Baer: Just as long as we don't have to go through the Black Plague before it.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.