Friday, April 17, 2009

HP teams with Microsoft, VMware to expand appeal of desktop virtualization solutions

As the sour economy pushes more companies into the arms of virtual desktop infrastructure (VDI) for cost cutting, the vendor community is eagerly removing obstacles to adoption by broadening the appeal of desktops as a service to more users and more types of applications and media.

This became very clear this week with a flurry of announcements that package the various parts of VDI into bundled solutions, focus on the need to make rich applications and media perform well, and expand the types of endpoints that can be on the receiving end of VDI.

Hewlett-Packard (HP) expanded its thin-client portfolio with new offerings designed to extend virtualization across the enterprise, while providing a more secure and reliable user experience. The solutions bundle software from Microsoft and VMware along with HP's own acceleration and performance software, as well as three thin client hardware options.

I can hardly wait for HP to combine the hardware and software on the server side, too. I have no knowledge that HP is working up VDI appliances that could join the hardware configurations on the client side. But it sure makes a lot of sense.

Seriously, there are few companies in a better position to bring VDI to the globe, given the technologies HP gained with Mercury and Opsware, along with internal development ... Oh, and there's EDS to make VDI hosting a service in itself. Look for a grand push from HP into this enterprise productivity solutions area.

Leading the pack in this latest round of VDI enhancements are the three thin clients -- the HP gt7720 Performance Series, and the HP t5730w and t5630w Flexible Series. These offer new rich multimedia deployment and management functionality -- rich Internet applications (RIA), Flash, and streaming media support -- that enhances Microsoft Windows Embedded Standard. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

The Palo Alto, Calif., company also announced several other new features:
The thin clients feature Microsoft Internet Explorer 7, Windows Media Player 11 and the ability to run applications locally. They also include Microsoft Remote Desktop Protocol 6.1, which enables devices to connect and take advantage of the latest security and enterprise management technologies from Windows Server 2008.

RDP enhancements such as multimedia and USB redirection enable users to easily run web applications, videos and other files within a virtual desktop environment, while avoiding frame skipping and audio or video synchronization issues. The software offloads the processing directly to the thin client, creating an enhanced multimedia experience while lowering the load on the server, which results in increased server scalability.

This also creates a near-desktop experience for VMware View environments, including support for the latest VMware View Manager 3 broker with no need for additional employee training. Users simply log in on the thin client to take advantage of its multimedia features, such as training videos, and USB device support.

HP and VMware also are working together to enable VMware View Manager’s universal access feature to leverage HP's Remote Graphics Software (RGS) for remote desktop sessions.

RGS is designed for customers requiring secure, high-performance, collaborative remote desktop access to advanced multimedia streaming and workstation-class applications. The software includes expanded, real-time collaboration features to allow multiple workers from remote locations to see and share content-rich visualizations, including 2-D design, 3-D solid modeling, rendering, simulation, full-motion video, heavy flash animation and intense Web 2.0 pages.

Not surprisingly, a lot of the technology being used in these VDI bundles originated with secure CAD/CAM virtual workstation implementations, where graphics and speed are essential. If it works for developers in high-security areas, it should work for bringing ERP apps and help desk apps to the masses of workers who don't need a full PC on every desktop. They just need an interactive window into the apps and data.

Expected to be available in early May, the new thin clients will be priced from $499 to $799. More information is available through HP or authorized resellers or from http://www.hp.com/go/virtualization. I would expect that EDS is going to have some packages that drive the total cost down even more.

Research and editorial assistance by Carlton Vogt.

Tuesday, April 14, 2009

CollabNet rebrands ALM product to better support distributed development and cloud applications

As companies are being drawn -- or nudged -- into cloud computing, tools are emerging to make distributed services lifecycles more secure and efficient. The latest entry into the field is CollabNet's newly rebranded TeamForge 5.2, which greases the skids for Internet-based software development and deployment.

Formerly known as SourceForge Enterprise, the Brisbane, Calif., company's flagship application lifecycle management (ALM) product now helps developers define and modify profiles and software stacks and provision these profiles on both physical and virtualized build-and-test servers, including from public or private clouds.

Users can access servers from CollabNet's OnDemand Cloud and Amazon's EC2, as well as from their own private cloud implementations. TeamForge 5.2 also includes Hudson's continuous integration capability, allowing Hudson users to provision and access build-and-test servers from any of these clouds.

CollabNet also announced Tuesday a relationship with VMware to help deliver an integrated development environment so independent software vendors (ISVs) and developers can use TeamForge and VMware Studio to create applications for deployment in internal and external clouds.

CollabNet said it renamed its product to reflect the company's support of modern software development, and elevated Subversion management, across widely distributed project teams. We can expect that the cloud shift will move development to multiple cloud development and deployment environments, and so require heightened management and security capabilities. As platform as a service (PaaS) gains traction, complexity could well skyrocket.

Just as complexity in traditional development projects has benefited from Subversion and ALM, so too will the cloud-impacted aspects of development and deployment. I wonder when the business process management (BPM) functions and ALM functions will intersect, and perhaps integrate. Oh, and how about a feedback loop or two to services governance and a refined requirements-update workflow stream. Now that's a lifecycle.

TeamForge 5.2 also provides role-based access control for distributed teams via increased management visibility, governance, and control of mission-critical software in Subversion repositories. The new release provides granular, path-based permissions for flexible Subversion access control, said CollabNet.

Lastly, the Agile software development method gets a nod with the integration of the Hudson continuous integration engine through a CollabNet plug-in. TeamForge also supports a wide variety of development methods, environments, and technologies.

TeamForge 5.2 is available for download as a free trial at http://www.collab.net/downloadctf.

Research and editorial assistance by Carlton Vogt.

Monday, April 13, 2009

Open Source and Cloud: A Curse or Blessing During Recession? BriefingsDirect Analysts Weigh In.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Charter Sponsor: Active Endpoints. Sponsor: TIBCO Software.

Read a full transcript of the discussion.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

The productivity and perils of open source software have been a topic pretty much beaten to death. Yet the landscape in IT, as always, is shifting -- because of the recession and because of a white hot interest in cloud computing.

It's time again, then, to look at the pluses and minuses of open source software models in the context of tight IT budgets and the advent of cloud-based services for enterprises. Our latest BriefingsDirect Analyst Insights roundtable discussion, vol. 39, therefore examines open source in the context of economics, complexity, competition, and the disruption of the shifting business models in software.

The major question is: Does using open-source software pay off in a total cost sense, compared to commercial offerings today? Furthermore, how will this change over the coming several years as cloud models take hold?

Please join noted IT industry analysts and experts Tony Baer, senior analyst at Ovum; Jim Kobielus, senior analyst at Forrester Research; JP Morgenthal, independent analyst and IT consultant; and David A. Kelly, president of Upside Research. Our guests are Paul Fremantle, the chief technology officer at WSO2 and a vice president with the Apache Software Foundation; Miko Matsumura, vice president and deputy CTO at Software AG; and Richard Seibt, the former CEO at SUSE Linux and founder of the Open Source Business Foundation. Our discussion is hosted and moderated by me.

Here are some excerpts:
Morgenthal: I believe that open source and noncommercial licensing is a good thing and has been very positive for the industry as a whole. My concern is the proliferation of free software -- that is, commercial software that businesses use without paying any license fee, optionally paying only for maintenance, to run their business. They earn their profit using that software to run their business, and yet nothing is given back to the software industry.

In my opinion, it's like a flower that's not getting fed through its roots, and eventually that flower will wither and die. To me, it’s almost parasitic, in that there are good parasites and bad parasites. Right now, it's proving itself to be a little bit on the good parasite side, but with a slight permutation, this thing can turn around and kill the host.

... Anytime you have a model where something is given away for free, at some point the free stops. It's very difficult to monetize going forth, because every buy is a buyer's remorse. "I could have had that for free."

... Economically long-term, I don't believe anybody has thought about where these changes stop and what they end up cannibalizing. Maybe we end up with a great market, and maybe we don't. I'd just love to see some attention paid to detail before people just willy-nilly go do these things. What is the long-term impact here?

Kobielus: There's a broader range of options for the buyer in terms of how they can acquire this functionality through open-source or commercial licenses, appliances, cloud, and so forth. ... Open source has been a good parasite. ... Innovation is going like gangbusters, but the business model of being a pure software vendor based on pure commercial licensing is dying out.

Matsumura: Complexity is a really powerful force in the economy and in enterprise software in general. One of the things that open source is doing is helping to simplify some of the infrastructural components and to decrease the overall condition of heterogeneity. ... What we have learned in the business of service-oriented architecture (SOA) and business process management (BPM) -- which are called middleware businesses -- is that chaos is perpetual, in the sense that there are two major driving forces in the economy: competition and consolidation.

Sure, there is commoditization in the IT platform, which is advanced by open source. Contrary to what JP was saying, one of the great things about open source is that it forces IT organizations like Software AG to selectively pick where they make their investment. They will put their investments in at the leading edge of complexity, as opposed to where things have slowed down and are not changing quite as fast.

Open source for quality innovation

Fremantle: There's a change in the marketplace. ... What I see is what you might call "managed commoditization." In a way we've had commoditization of all sorts of things. No one pays money for the TCP/IP stack. That's a piece of open-source software that has now become ubiquitous. It's not of interest to anyone. It's just a commodity that's free. ... I don't think we need innovation in that space. [Disclosure: WSO2 is a sponsor of BriefingsDirect podcasts.]

If you do something interesting and innovative, whether you are open source or not, if you partner with your customers and really add value, then they will pay you, whether or not your license forces them. The license is a blunt instrument. ... To me, that's something that was abused by software companies for many years.

What open source is doing is sorting the wheat from the chaff. It's sorting out, "Is this something that is a commodity that I don't want to pay for, or is this something that has real value and is innovative, and that I need the support, the subscription, and the help of this company to help me implement?"

Seibt: It's absolutely true that open-source companies are very innovative. If you look at SaaS or even cloud computing, there are many startups that probably lead the way. For open source, we look at that market from a customer perspective. They use the software because of its innovation, its quality, and its cost, and they wouldn't use it for any other reason. It is the innovation, quality, and cost.

Open source is moving up the stack and has reached the SOA level. Large corporations are using open-source SOA frameworks, because they want to be fully independent from any vendor. They trust themselves to develop this piece of software together with the bigger community, which becomes a community of enterprises. Innovation is not only from startups, but it's from large corporations, as well.

Cloud masks use of open source

Kelly: I'm not sure that cloud computing necessarily opens up the field for open-source computing. To some extent, it almost shuts it down, because it then becomes cloud as a series of application programming interfaces (APIs) or a series of standardized connections or services out there that could be supported by anything. Open source is one solution. The one that's going to win is going to be the most efficient one, rather than the lowest cost one, which may or may not be open source.

As you look at cloud computing, some of the initiative that we saw with original open-source roll outs over the past 10 years has been almost mitigated. ... My question really is how far can the open-source innovation go. As organizations move into business processes and business-driven value, all the executives that I talk to don't want to focus on the lower-level infrastructure. They want to focus on what value this software is giving in terms of supporting my business processes. ... They don't want to be in the software-development business.

How far can open source go up that stack to the business process to support custom applications, or is it always going to be this kind of really lower-level infrastructure component? That's the question that I think about.

Morgenthal: The cloud actually hides a whole other layer of the "what and the how" from the user and the consumer, which could work in favor of open source or it could work against open source. ... As long as that thing works, it's reliable, and can be proven reliable, it can be put together with chewing gum and toothpicks and no one would know the difference.

Gardner: JP brings up an interesting issue. It's about risk. If I go down a fully open-source path as an enterprise or as a service provider, is that going to lead me into a high-risk situation, where I can't get support and innovation? Is it less risky to go in a commercial direction? Perhaps, the best alternative is a hedged approach, where there is a hybrid, where I go commercial with some products and I go open source with others, and I have more choice over time.

Matsumura: We're already beginning to hybridize. Even with customers who are acquiring our technology, our technology takes advantage of a lot of open-source technologies, and we have built components. As I said, we're very selective about how we choose to make our investments.

We're investing in areas that obviously are not as commoditized, just because a rolling stone doesn't gather any moss. In the big sections of the market, where things have cooled off a lot, open source can kind of create pavement, and that is somewhat irreversible.

Our customers need to be able to successfully compete in the market, not just on the basis of lowering the cost of operations through free stuff, but really to be able to differentiate themselves and pull away from the pack. There is always going to be a leading edge of competitive capability through technology. Companies that don't invest in that are going to be left behind in an uptick.

Collaboration between provider and user

Fremantle: Most open-source software is not free. If you want the same things that you get from a proprietary vendor -- which is support, bug fixes, patches, service packs, those kind of things -- then you pay for them, just as you do with a proprietary vendor. The difference is in the partnerships that you have with that company.

What a lot of this has missed is the partnership you have in an open-source project is not just about code. It's about the roadmap. It's about sharing user stories more openly. It's about sharing the development plan more openly. It's a whole ecosystem of partnership, which is very different from that which you have with a standard commercial vendor.

There is an opportunity here to build frameworks that really scale out. For example, you may have an internal cloud based on Eucalyptus and an external cloud based on Amazon. You can scale seamlessly between those two, and you can scale up within your internal cloud till you hit that point. Open-source software offers a more flexible approach to that.

Kobielus: In many ways, the cloud community, as it grows and establishes itself as a viable business model, will increasingly be funding and subsidizing various open-source efforts that we probably haven't even put on the drawing board yet. That will be in a lot of areas, such as possibly an open-source distribution of a shared-nothing, massively parallel processing, data warehousing platform for example. Things like that are absolutely critical for the ongoing development of a scale for cloud architecture.

If there is going to be a truly universal cloud, there is going to have to be a truly universal open-source scale-out of software.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Charter Sponsor: Active Endpoints. Sponsor: TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Wednesday, April 8, 2009

Google Apps charges ahead with improved data security and long-awaited Java support

Cast Iron Systems and Google have teamed up to overcome one of the biggest hurdles to cloud computing and software as a service (SaaS) in the enterprise -- concerns over data security.

Cast Iron for Google Apps, which was announced today, includes the Google Secure Data Connection, enabling the encrypted exchange of data between a company's enterprise applications and Google's cloud offerings. This makes it easier for companies to integrate their Google Apps and Google App Engine applications with on-premises and cloud apps.

Cast Iron, Mountain View, Calif., is a SaaS and cloud applications provider, and offers pre-configured connectivity with hundreds of other applications, as well as a library of integration templates with pre-configured gadget data maps. Cast Iron for Google Apps offers a portfolio of deployment options, including integration-as-a-service through Cast Iron Cloud, and on-premise physical and virtual appliances.

In a recent survey, IT executives displayed considerable hesitancy in switching to cloud-based applications. A main reason for holding back, cited by many of these executives, was the concern over data security.

Not everyone is squeamish about using cloud apps. Schumacher Group, a $250-million U.S. emergency medicine practice management firm, has created a web portal for its medical providers using a set of custom gadgets and a Google site. The company manages 2,500 physicians who care for 2.5 million patients each year in over 150 emergency rooms across 20 states.

Cast Iron for Google Apps helps enable the extraction and secure exchange of data from Schumacher Group’s MS SQL Server data warehouse to Google Enterprise Gadgets in real time. Providers and doctors in the Schumacher network now have more secure visibility into emergency room data from anyplace, anytime.

In other Google Apps news, the long-awaited Java support for App Engine has been announced, and the first 10,000 developers to sign up will be given a first look and a chance to comment.

With the new support, developers can build web applications using standard Java technologies and run them on Google's scalable infrastructure. The Java environment provides a Java 6 JVM, a Java Servlets interface, and support for standard interfaces to the App Engine scalable datastore and services, such as JDO, JPA, JavaMail, and JCache.

Also included is a secure sandbox, which will allow developers to run code safely on Google servers, while being flexible enough to allow them to break abstractions at will. More information is available at http://code.google.com/appengine/docs/java/overview.html.
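To give a sense of how standard this really is, here is a minimal servlet sketch of my own -- not Google's sample code, and the class name is invented. App Engine's Java runtime serves ordinary servlets, so something this small, plus a web.xml URL mapping, should amount to a deployable application:

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Minimal servlet: nothing App Engine-specific appears here.
    // Mapped to a URL in web.xml, this is a complete application;
    // Google's infrastructure handles the serving and scaling.
    public class HelloAppEngineServlet extends HttpServlet {
      @Override
      public void doGet(HttpServletRequest req, HttpServletResponse resp)
          throws IOException {
        resp.setContentType("text/plain");
        resp.getWriter().println("Hello from App Engine");
      }
    }

The appeal is exactly that plainness: existing Java web skills and code carry over, with the platform-level concerns pushed down into Google's infrastructure.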

These two developments continue the march toward enterprise-ready cloud activities. Can we still really call cloud just a fad or hype?

HCM SaaS provider Workday's advanced architecture brings cloud business agility benefits to enterprises now

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Workday.

Special offer: Download a new white paper on Workday's latest update to System 7.

Read a full transcript of the discussion.

The paybacks from designing a strong IT architecture can now be enjoyed by more than the enterprises that build them. Increasingly, enterprises are reaping the fruits of modern IT architectures that their software-as-a-service (SaaS) and cloud providers have developed.

Think of it as a multiplier effect of IT modernization -- everyone using the services benefits. In essence, by building good applications and infrastructure, SaaS providers are providing more than standalone applications and services -- they are delivering business agility through integration as a service, without customers having to upgrade their own data centers.

By using a unified SaaS service to achieve integration across many far-flung services and processes, enterprises can realize the so-called "cloud of clouds" principle early. This drives down complexity and cost. It allows enterprises to exploit cloud productivity benefits without building a cloud, and to integrate via cloud advancements without mastering cloud-level integrations.

To learn more about how IT architectural best practices at a SaaS provider can deliver these added benefits to users now -- at greatly reduced total costs and little or no capital outlays -- I recently examined the experiences and approaches of Workday, a human capital management (HCM) SaaS provider. Listen as Stan Swete, CTO of Workday, explains how advanced SaaS providers can be an effective core from which to reach and obtain cloud computing business benefits early and meaningfully.

Here are some excerpts:
It's our belief that enterprise applications have driven a lot of success and a lot of value in enterprises, but that success and value has come at a very, very high cost. Essentially the systems come down to being very hard to use, hard to change, and hard to integrate.

At Workday ... we started our company with a lot of background in what had gone before in terms of architectures to support enterprise resource planning (ERP). ... [We knew] what worked and what didn't work so well with previous client-server architectures.

From the beginning, we thought about a system that would be able to deal natively with producing Web services to get data out of and back into the application and would treat the conversation with other systems as a first-class conversation, just like the conversation with individual users.

In IT today, people are in a difficult spot. They have complex environments. The complexity has grown for a variety of reasons. Everyone sees the opportunity to modernize and to improve efficiencies, but how do you do that in the midst of a complex environment that is constraining just how aggressive you can be?

If you have a SaaS provider like Workday, or someone who's able to take a clean approach, ... instead of having to deal with the complexity of managing all the multiple instances and different architectures you might have, you can use the unified SaaS service as a way to achieve some integration and cut costs. ... Today, it's all about cost.

We have the religion of service-oriented architecture (SOA), and firmly believe that the right way for us to tie into other systems in the cloud and other systems on-premise of our customers is via SOA and an embrace of Web services. We embrace that and we think to some extent that it can accelerate SOA adoption within enterprises. They all see the appeal of newer SOA architectures ... [but] they have the whole other set of architectures that they've got to be concerned about maintaining.

We think the rigidity in these architectures comes from the fact that you've got a complex logic layer. ... Millions of lines of code, in most cases, are backing the logic layer of enterprise systems. That layer has a complex conversation with the relational database, which also has its own complex structure -- typically thousands of relational tables to model all of the data.

We decided to take an entirely new approach in this area and embrace an approach that leveraged the concept of encapsulating data with some of the logic into an object. ... At Workday, the primary logic server is what we call our Object Management Server. It's a transaction processing system, but it's entirely based on an object graph, and that is just a class structure that represents not only the application and its data, but also the methods that process on that data.

The important difference is that we have that layer and we don't have a correspondingly complex and changing data layer. We have a persistent data store that is a simplified version of a relational database that can persist changes that happen from the object layer. ... It's an unchanging relational schema that can persist, even as we make changes up in the object layer.
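To make that pattern concrete, here is a toy sketch of my own -- emphatically not Workday's code, with every class and method name invented. It illustrates the idea Swete describes: behavior lives with the data in the object layer, while persistence flattens every object into the same generic attribute/value rows, so the stored schema never has to change as the object layer evolves:

    import java.util.HashMap;
    import java.util.Map;

    public class ObjectLayerDemo {

      // Stand-in for the fixed persistent store:
      // objectId -> (attribute -> value). Its shape never changes.
      static class PersistentStore {
        private final Map<String, Map<String, String>> rows =
            new HashMap<String, Map<String, String>>();

        void save(String objectId, Map<String, String> attributes) {
          rows.put(objectId, new HashMap<String, String>(attributes));
        }

        Map<String, String> load(String objectId) {
          return rows.get(objectId);
        }
      }

      // Behavior lives with the data in the object layer.
      static class Worker {
        private final String id;
        private final Map<String, String> attrs =
            new HashMap<String, String>();

        Worker(String id, String name, String manager) {
          this.id = id;
          attrs.put("name", name);
          attrs.put("manager", manager);
        }

        boolean reportsTo(String managerName) {
          return managerName.equals(attrs.get("manager"));
        }

        void persist(PersistentStore store) {
          store.save(id, attrs);
        }
      }

      public static void main(String[] args) {
        PersistentStore store = new PersistentStore();
        Worker w = new Worker("w1", "Ada", "Grace");
        w.persist(store);
        System.out.println(w.reportsTo("Grace")); // true
        System.out.println(store.load("w1"));     // {name=Ada, manager=Grace}
      }
    }

In this sketch, adding a new attribute or a whole new object type touches only the object layer; the store's generic shape stays fixed, which is the flexibility Swete is pointing at.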

[Furthermore] we have some of the transformational and delivery options in multiple formats available to us in our data center, so that the Workday applications can generate Web services. Beyond that, we can transform those Web services into other data formats that might be more meaningful to legacy applications or the other applications we need to tie to. We did a lot of work in that area and came up with the need to embrace Web services and to embed an enterprise service bus (ESB).

When you combine the architecture we talked about with the SaaS delivery model ... There are definitely benefits for the customers that we're serving and, frankly, we think that in the approach there are tons of benefits for us, as a vendor, to take cost out of what we're doing and pass those savings on to our customers.

... If you combine that architecture with a cloud-based approach or delivery of SaaS, you get what we at Workday call "hosted integration" or "integration on demand." ... We take the ESB and package up integration so that it can be reused across a wide set of customers.

Built-in business intelligence, as we call it, is also absolutely an advantage of our offering. ... Having an object model that allows us to link data attributes together to establish relationships is a lot lighter weight than a classical relational database, where you have to build a foreign key into another table. We're able to cross-link a lot of information that we're tracking inside the object model that we have, and so we're able to offer unusually rich reporting to the customers.

Our transactional application is facilitating multi-dimensional analysis without the need to take the data, offload it into an OLAP cube, and then query that cube with a third-party tool. ... [This] information can be more interesting to the people who are not just back-office human resources professionals, but maybe managers who want to get information about their workforce. That is all built into the application, and that's the level of increased business intelligence we're delivering today.

There is just a large world of opportunity to expand into. ... We're growing to provide business intelligence without the need to buy third-party tools to do it.

[Additionally] you're going to have people who want to use your application without getting into the pages that your application actually renders. Mobile is a great example of that. We absolutely see widening out access to Workday on the mobile devices.

We've been very quickly able to extend the business-process framework that we have ... so that approvals that are done within that framework can now be completely processed on a mobile device. We’ve picked the iPhone as the first starting point and we'll be expanding out to other devices. ... There is a lot of information that is currently presented well within Workday, but it could be presented just as well within a gadget and someone else's portal.

We're able to mark up a subset of our data and have that appear in a native client on the iPhone that you can get on the App Store, just like you get any other iPhone application. Then, with security, you're just utilizing a native app, which is acting on Workday data. We use that for manager approvals, the management of to-do lists, and for enterprise search of the workforce. That's been a successful example of leveraging this modern architecture. We didn't have to go in and rewrite our applications.

[There are] new options for enterprises to look at in terms of offloading some of the applications that they're trying to support in their existing environment. It's a vehicle for consolidating some of the complexity that you have into a single instance that can be managed globally if you have architected globally, as Workday has done.

We talked about a lot of the value of leveraging new technology to deliver enterprise applications in a new way and then combining that with doing it from the cloud. That combination is going to profoundly change things going forward.

If you think about the combination of modern architectures and cloud-based modern architectures, what will happen when two vendors that have taken that similar approach start to partner in terms of integrated business processing is that the bar will get raised significantly for how tight that integration can become, how well supported it can be, and how it can functionally grow itself forward, without causing high cost and complexity to the consuming enterprise that's using both sides.

As I look in the future, I think enterprises will see an ecosystem of their major application providers be cloud-based and be more cohesive than a like group of on-premise vendors. Instead of having a collection of different architectures and different vendors all in their data center, what they will see is an integrated service from the set of providers that are integrating with Web services in the cloud.

It allows for a lot more integrated processes.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Workday.

Special offer: Download a new white paper on Workday's latest update to System 7.

Thursday, April 2, 2009

Amazon's BI-on-the-fly using MapReduce-as-a-service brings huge cloud data crunching to the masses

Amazon's announcement of a cloud-based data mining and analysis service, using the Hadoop implementation of MapReduce, potentially opens advanced business intelligence (BI) activities to many more businesses and organizations. It's an excellent example of just how much cloud computing can change the world.

In essence, the service, Amazon Elastic MapReduce, if it works as advertised, abstracts the complexity and cost of massively parallel and symmetric programming and processing so non-computer scientists -- you know, business types -- can examine and query huge data sets.

Think of it as having your own tuned supercomputer that you can plug gigantic data sets into and ask questions that will determine the course of your businesses for the next decade. Oh, and you can pay for the pleasure on a credit card.
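For the curious, here is what such a job looks like at the code level -- the canonical Hadoop word-count, written against the standard Hadoop API rather than anything Amazon-specific. My understanding is that Elastic MapReduce runs ordinary Hadoop job JARs, so a job like this should carry over, but treat the deployment details as an assumption:

    import java.io.IOException;
    import java.util.Iterator;
    import java.util.StringTokenizer;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.*;

    public class WordCount {
      // Mapper: emit (word, 1) for every token in each input line.
      public static class TokenMapper extends MapReduceBase
          implements Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        public void map(LongWritable key, Text value,
            OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            output.collect(word, ONE);
          }
        }
      }

      // Reducer: sum the per-word counts emitted by all mappers.
      public static class SumReducer extends MapReduceBase
          implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
            OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
          int sum = 0;
          while (values.hasNext()) {
            sum += values.next().get();
          }
          output.collect(key, new IntWritable(sum));
        }
      }

      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(TokenMapper.class);
        conf.setReducerClass(SumReducer.class);
        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0])); // input data set
        FileOutputFormat.setOutputPath(conf, new Path(args[1])); // results
        JobClient.runJob(conf);
      }
    }

The point of the service is that the writer of this code never touches a cluster: the parallelism, distribution, and machine provisioning are Amazon's problem.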

This high-end BI value has pretty much been the sole purview of large, skilled and deep-pocketed enterprises. But there are plenty of parties -- researchers, government agencies, academics, small to medium enterprises, venture capitalists and the like -- that would hugely benefit from sussing out important trends and findings from the growing reams of raw data generated by modern businesses and societies. Talk about metadata on steroids! Here's another way to use social networks, folks.

For more on the business implications of MapReduce and advanced BI, take a look at a podcast I recently moderated. For more on the more technical aspects of what MapReduce-oriented computing means, there's a second podcast discussion.

Given the intriguing price points Amazon is providing, this service could be a game-changer. It will likely force other cloud providers to follow suit, which will make advanced BI services more available and affordable for more kinds of tasks. I can even imagine communities of similarly interested users sharing query formulations and search templates for myriad investigations. A whole third-party BI consulting and services industry could crop up virtually overnight.

It will be interesting to see if Business Intelligence 2.0 types of analysis can also be brought to the service, through third parties or even outright products that leverage the cloud BI services in the background.

The pitch: We can bring what Google does for the Web to your entire universe of data. For any of your users. Oh, and we can bring other useful and available data sets into the mix, too. And you can afford this. Your executives can figure out how to use it directly. No lab coats required.

Governments and legislators in particular -- which have access to huge stores of publicly financed data -- could significantly drop the cost of providing data and analysis services to the masses. As I understand it, the federal and state governments are a bit better at creating data than leveraging it in near real time. As in, the once-a-decade census data takes almost 10 years to get published. This could help that a lot.

Part of the challenge will be getting to the data and making the largest -- sometimes in the petabyte scale -- sets available to a service like Amazon's. The garbage-in, garbage-out parable does not change. And moving and managing these large sets is not trivial.

What's more, trust remains a hurdle. For sensitive data, the handling and security of the bits need to be managed. But if a sales force trusts its daily grind to Salesforce.com, perhaps other sensitive data too has a place on someone else's cloud fabric.

For those that can get access to good data on matters of importance to them, and perhaps do unique joins against other data sets, this cloud-based BI development could be a boon. Things that were never possible at any price are now doable.

With Amazon's move, the important BI work moves up and away from cost inhibitors and infrastructure access pain to the levels of data access, data quality, and query development skills, where it belongs.

Particularly in this economy, taking the risk out of weighty business and market decisions -- at an affordable cost on someone else's cloud fabric -- is a no-brainer.

Sunday, March 29, 2009

HP advises strategic view of virtualization to dramatically cut IT costs, gain efficiency and usher in cloud benefits

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion. Access more HP resources on virtualization.

Virtualization has become imperative to enterprises and service providers as they seek to better manage IT resources, cut total costs, reduce energy use, and improve data center agility.

But virtualization is more than just installing hypervisors. The effects and impacts of virtualization cut across many aspects of IT operations. The complexity of managing virtualized IT runtime environments can easily slip out of control.

A comprehensive level of planning and management, however, can assure a substantive economic return on virtualization investments. The proper goal then is to do virtualization right -- to be able to scale the use of virtualization in terms of numbers of instances elastically while automating management and reducing risks.

To gain the full economic benefits, IT managers also must extend virtualization from hardware to infrastructure, data, and application support -- all with security, control, visibility, and compliance baked in.

What's more, implementing virtualization at the strategic level with best practices ushers in the ability to leverage service-oriented architecture (SOA), enjoy data center consolidation, and explore cloud computing benefits.

To learn more about how virtualization can be adopted rapidly with low risk using sufficient governance, I recently interviewed Bob Meyer, the worldwide virtualization lead in HP's Technology Solutions Group.

Here are some excerpts:
For the last couple of years, people have realized the value of virtualization in terms of how it can help consolidate servers, or how it can help do such things as backup and recovery faster. But, now with the economy taking a turn for the worse, anyone who was on the fence, who wasn’t sure, who didn’t have a lot of experience with it, is now rushing headlong into virtualization.

They realize that it touches so many areas of their IT budget, it just seems to be a logical thing to do in order for them to survive these economic times and come out a leaner, more efficient IT organization. ... It’s gone to virtualization everywhere, for everything -- "How much can I put in and how fast can I put it in." ... Everybody will have a mix of virtual and physical environments.

We're not just talking about virtualization of servers. We're talking about virtualizing your infrastructure -- servers, storage, network, and even clients on the desktop. People talk about going headlong into virtualization. It has the potential to change everything within IT and the way IT provides services.

Throughout the data center, virtualization is one of those key technologies that help you get to that next generation of the consolidated data center. If you just look at it from a consolidation standpoint, a couple of years ago, people were happy to be consolidating five servers into one or six servers into one. When you get this right, do it on the right hardware with the right services setup, 32 to 1 is not uncommon -- a 32-to-1 consolidation rate.

Yet the business can be affected negatively, if the virtualized infrastructure is managed incompletely or managed outside the norms that you have set up for best practices. One of the blessings of virtualization is its speed. That’s also a curse in this case, because in traditional IT environments, you set up things like a change advisory board and, if you did a change to a server, if you moved it, if you had to move to a new network segment, or if you had to change storage, you would put it through a change advisory board. There were procedures and processes that people followed and received approvals.

In virtualization, because it’s so easy to move things around and it can be done so quickly, the tendency is for people to say, "Okay, I'm going to ignore that best practice, that governance, and I am going to just do what I do best, which is move the server around quickly and move the storage around." That’s starting to cause all sorts of IT issues.

Initial virtualization projects probably get handled with improper procedures. ... Just putting a hypervisor on a machine doesn’t necessarily get you virtualization returns.

You have to start asking, "Do I have the right solutions in place from an infrastructure perspective, from a management perspective, and from a process perspective to accommodate both environments?"

The danger is having parallel management structures within IT [with a separate one for virtualized resources]. It does no one any good. If you look at it as a means to an end, which virtualization is, the end of all this is more agile and cost-effective services and more agile and cost-effective use of infrastructure.

Virtualization really does touch everything that you do, and that everything is not just from a hardware perspective. It not only touches the server itself or the links between the server, the storage, and the network, but it also touches the management infrastructure and the client infrastructure.

What we intend to do is take that hypervisor and make sure that it's part of a well-managed infrastructure, a well-managed service, well-managed desktops, and bringing virtualization into the IT ecosystem, making it part of your day-to-day management fabric.

The focus right now is, "How does it save me money?" But, the longer-term benefit, the added benefit, is that, at some point the economy will turn better, as it always does. That will allow you to expand your services and really look at some of the newer ways to offer services. We mentioned cloud computing before. It will be about coming out of this downturn more agile, more adaptable, and more optimized.

No matter where your services are going -- whether you're going to look at cloud computing or enacting SOA now or in the near future -- virtualization has that longer term benefit of saying, "It helps me now, but it really sets me up for success later."

We fundamentally believe, and CIOs have told us a number of times that virtualization will set them up for long-term success. They believe it’s one of those fundamental technologies that will separate their company as winners going into any economic upturn.
Read a full transcript of the discussion. Access more HP resources on virtualization.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: Hewlett-Packard.

Wednesday, March 25, 2009

Eclipse Swordfish OSGi ESB enters fray for SOA market acceptance, Sopera to add support

The Eclipse Foundation's news that the first release of its Swordfish enterprise service bus (ESB) will ship in early April hasn't exactly set the blogosphere on fire. Reaction to the open-source ESB so far has ranged from ho-hum to mild skepticism.

There are, after all, several open source ESBs in play, from Mule to Apache ServiceMix and Synapse to PEtALS.

On the other hand, it could just be that the rest of the bloggers are working on finding just the right fishing metaphor to use for new ESBs, something that seems to be a requirement when writing about Swordfish.

How about, "there's a deep and wide ocean of opportunity for open source ESBs, and an ability to federate them might provide yet more fish to fry." Sorry.

Eclipse made the announcement Monday at EclipseCon 2009. Swordfish, which is described as a next-generation ESB, aims to provide the flexibility and extensibility needed for deploying a service-oriented architecture (SOA) strategy. Based on the OSGi standard, the new ESB builds upon such successful open-source projects as Eclipse Equinox and Apache ServiceMix. (A generic OSGi sketch follows the feature list below.)

Among the features highlighted in Swordfish are:
  • Support for distributed deployment, which results in more scalable and reliable application deployments by removing the need for a central coordinating server.

  • A runtime service registry that allows services to be loosely coupled, making it easier to change and update different parts of a deployed application. The registry uses policies to match service consumers and service providers based on their capabilities and requirements.

  • An extensible monitoring framework to manage events that allow for detailed tracking of how messages are processed. These events can be stored for trend analysis and reporting, or integrated into a complex event processing (CEP) system.

  • A remote configuration agent that makes it possible to configure a large number of distributed servers from a central configuration repository without the need to touch individual installed instances.
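As promised above, here is a generic OSGi sketch -- my own illustration of the publish-and-discover service model OSGi provides, not Swordfish's API, and the GreetingService interface is invented. In OSGi, a bundle registers a service with the framework's registry at startup, and other bundles look it up dynamically at runtime:

    import java.util.Hashtable;
    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceRegistration;

    // Invented service interface, for illustration only.
    interface GreetingService {
      String greet(String name);
    }

    public class GreetingActivator implements BundleActivator {
      private ServiceRegistration registration;

      // Called by the OSGi framework when the bundle starts:
      // publish an implementation into the service registry.
      public void start(BundleContext context) {
        Hashtable<String, String> props = new Hashtable<String, String>();
        props.put("service.description", "toy greeting service");
        registration = context.registerService(
            GreetingService.class.getName(),
            new GreetingService() {
              public String greet(String name) {
                return "Hello, " + name;
              }
            },
            props);
      }

      // Called when the bundle stops: withdraw the service, so
      // consumers are decoupled from this provider's lifecycle.
      public void stop(BundleContext context) {
        registration.unregister();
      }
    }

Presumably Swordfish's runtime service registry adds its policy-based matching of consumer and provider capabilities on top of this style of dynamic, loosely coupled lookup.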
Austin Modine at The Register sees the move putting Eclipse up against some software powerhouses and is taking a wait-and-see attitude:
Eclipse's jump into runtime puts the foundation into more direct competition with companies like Oracle, IBM and Microsoft, as well as a multitude of smaller providers. Eclipse already shook up the development tools market by offering a free and open source toolset — can Eclipse pull off the same with SOA?
Steve Craggs at Lustratus Research takes a glummer view:
So, will Swordfish make a successful strike at the ESB market? So far, open source ESB projects have not had a great deal of success, and as far as 2009 goes Lustratus has forecast that open source projects will suffer due to the lack of the necessary people resources to turn open source frameworks into a useful user implementation. However, Swordfish has the backing of the influential Eclipse organization, which has done a lot to standardize the look and feel of many software infrastructure tools.

Looking at the initial bites on Swordfish, the market needs to be baited a bit.

And, of course there's more to market acceptance than just the code drop. Also this week, German start-up and Deutsche Post AG spin-off Sopera GmbH announced plans to support Swordfish as part of a comprehensive SOA platform.

Sopera helped develop and refine Swordfish at Deutsche Post before helping to bring the project to fruition in Eclipse.

Using the Eclipse Swordfish (SOA Runtime Framework) and the SOA Tooling Platform (STP), Sopera now plans to further deliver a new service registry/repository, integrate process orchestration engines, and provide integration between the OSGi components -- all to create a comprehensive SOA solution.

As I said to Ricco Deutscher, Sopera's CTO, managing director and co-founder, when briefed: "In today's economic climate, there is definite opportunity for open source SOA. Plus, we see emerging requirements for modern middleware that includes SOA, and helps prepare for cloud-based applications."

There should be a significant degree of pull for strong SOA offerings built of open source components, but with the value-add of integration and associated support services. The market for the de facto on-premises cloud architecture and implementation is wide open. There's no reason that open source SOA implementations won't be a major portion of quite a few clouds.

Low-cost open source solutions -- coupled with the proper balance of completeness and flexibility -- may gain a surer foothold now, given the economy, than in the past. Deutscher says Sopera is seeking to attain and deliver on the right balance at the right price.

The ambition is certainly there. Last month, Sopera joined forces with Microsoft and Open-Xchange under the Open Source Business Foundation (OSBF), a non-profit European open source business network, to announce a platform that leverages SOA for cloud computing.

This "Internet Service Bus (ISB)" will create a bridge between Java and .NET software applications and promote seamless interoperability. I'm all for that, as long as it's a fully bi-directional bridge.

The first release of Swordfish 0.8 will be available for download the first week of April from www.eclipse.org/swordfish/. Sopera will be delivering solutions around it and then adding SOA and cloud solutions over the next two years.