Friday, June 12, 2009

Cloud grows globally: Russia, South Korea, and Malaysia join Open Cirrus Initiative

More evidence has emerged that cloud research and development is a growing worldwide phenomenon.

The Open Cirrus initiative spread across more borders this week with the addition of the Russian Academy of Sciences, South Korea’s Electronics and Telecommunications Research Institute, and the Ministry of Science, Technology and Innovation in Malaysia (MIMOS).

A global, multiple data center, open-source test bed for the advancement of cloud computing research, Open Cirrus was started last summer by HP, Intel Corp. and Yahoo! Inc. The goal is to “promote open collaboration among industry, academia and governments by removing the financial and logistical barriers to research in data-intensive, Internet-scale computing,” the founders say.

Prior to announcing the three newest members at this week’s Open Cirrus Summit in Palo Alto, Calif., the founders had already attracted researchers from the University of Illinois at Urbana-Champaign, the Karlsruhe Institute of Technology in Germany, and the Infocomm Development Authority of Singapore.

Noting that IDC predicts that cloud computing will become a $42 billion market by 2012, rival IBM announced its own Blue Cloud initiative earlier this year and in April opened its first cloud computing laboratory in Hong Kong. IBM is also ramping up its PaaS offerings.

HP also has developed Cloud Assure to help make any moves to cloud models mission critical in nature. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Not to be left out, Oracle Corp. is refining its grid middleware into cloud software products and is partnering with Amazon Web Services, one of the early cloud pioneers.

And, of course, the other 900-pound gorilla, Microsoft, has its Azure initiative, although it is not entirely clear what shape that cloud will take.

So as vaudeville comic Jimmy Durante used to say when the stage got crowded: “Everybody wants to get into the act.”

The HP, Intel, Yahoo! initiative is impressive not only for the membership it is attracting but for the seriousness and scope of the Open Cirrus approach.

With a growing membership list, the Open Cirrus community offers researchers worldwide “access to new approaches and skill sets that will enable them to more quickly realize the full potential of cloud computing,” according to this week’s announcement. The new members plan to host additional test bed research sites, expanding Open Cirrus to nine locations, “creating the most geographically diverse cloud computing test bed currently available to researchers.”

This expands the cloud test bed to an “unprecedented scale,” according to Prith Banerjee, senior vice president of research at HP and director of HP Labs. He sees the Open Cirrus collaboration with academia, government and industry as “vital in charting the course for the future of cloud computing in which everything will be delivered as a service.”

The new members bring impressive resources to Open Cirrus.

The Russian Academy of Sciences, the first Eastern European institution to join Open Cirrus, provides R&D from three of its own organizations:
  • Institute for System Programming (ISP), which will conduct fundamental scientific research and applications in the field of system programming.

  • Joint SuperComputer Center (JSCC), which will engage in the processing of large arrays of biological data, nanotechnology, 3D modeling and other applications, and port them to cloud infrastructure.

  • Russian Research Center Kurchatov Institute, which will explore how cloud computing differs from other technologies, and apply its techniques to large-scale data processing.

South Korea’s Electronics and Telecommunications Research Institute plans to conduct research and development on the management architecture and content retrieval of massive data sets.

MIMOS in Malaysia plans to develop a national cloud computing platform to deploy services throughout Malaysia, focusing on enabling services through software, security frameworks and mobile interactivity, as well as testing new cloud tools and methodologies.

Andrew Chien, vice president and director of Intel Research, sees these added resources and projects creating a critical mass for “our vision of an open-source cloud stack as a strong, large-scale platform for research and development.”

BriefingsDirect contributor Rich Seeley provided research and editorial assistance to BriefingsDirect on this post. He can be reached at RichSeeley@aol.com.

Thursday, June 11, 2009

Analysts define growing requirements for how governance needs to support corporate adoption of cloud computing

Read a full transcript of the discussion. Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.

Welcome to the latest BriefingsDirect Analyst Insights Edition, Vol. 42. Our latest topic centers on governance as a requirement and an enabler for cloud computing.

Our panel of IT analysts discusses the emerging requirements for a new and larger definition of governance. It's more than IT governance, or service-oriented architecture (SOA) governance. The goal is really more about extended enterprise processes, resource consumption, and resource-allocation governance.

In other words, "total services governance." Any meaningful move to cloud-computing adoption, certainly that which aligns and coexists with existing enterprise IT, will need to have such total governance in place. Already, we see a lot of evidence that the IT vendor community and the cloud providers themselves recognize this pending market need for additional governance.

So listen then as we go round-robin with our IT analyst panelists on their top reasons why service governance is critical and mandatory for enterprises to properly and safely modernize and prosper vis-à-vis cloud computing: David A. Kelly, president of Upside Research; Joe McKendrick, independent analyst and ZDNet blogger; and Ron Schmelzer, senior analyst at ZapThink. Our discussion is hosted and moderated by me, BriefingsDirect's Dana Gardner.

Here are some excerpts ...
Schmelzer's top four governance rationales:

At ZapThink we just did a survey of the various topics that people are interested in for education, training, and stuff like that. The number one thing that people came back with was governance.
  • Control. So the first reason to use governance is to prevent chaos. ... You want the benefit of loose coupling. That is, you want the benefit of being able to take any service and compose it with any other service without necessarily having to get the service provider involved. ... But the problem is how to prevent people from combining these services in ways that produce unpredictable or undesirable results. A lot of the governance effort at runtime prevents that unpredictability.
  • Design Parameters. Two, then there is the design-time thing. How do you make sure services are provided in a reliable, predictable way? People want to create services. Just because you can build a service doesn't mean that your service looks like somebody else's service. How do you prevent issues of incompatibility? How do you prevent issues of different levels of compliance?
  • Policy Adherence. Of course, the third one is around policy. How do you make sure that the various services comply with the various corporate policies, runtime policies, IT policies, whatever those policies are?
  • Reliability. To add a fourth, people are starting to think more and more about governance, because we see the penalty for what happens when IT fails. People don't want to be consuming stuff from the cloud or putting stuff into a cloud and risking the fact that the cloud may not be available or the service of the cloud may not be available. They need to have contingency plans, but IT contingency plans are a form of governance.
Kelly's top five governance rationales:

At one level, what we're going to see in cloud computing and governance is a pretty straightforward extension of what you've seen in terms of SOA governance and the bottom-up from the services governance area. As you said, it gets interesting when you start to up-level it from individual services into the business processes and start talking about how those are going to be deployed in the cloud.
  • Focus on Business Goals. My first point is one of the key areas where governance is critical for the cloud, and that is ensuring that you're connecting the business goals with those cloud services. As services move out to the cloud, there's a larger perspective and with it the potential for greater disruption.
  • Ensuring Compliance. [Governance] is going to be the initial driver that you're going to see in the cloud in terms of compliance for data security, privacy, and those types of things. Can the consumers trust the services that they're interacting with, and can the providers provide some kind of assurance in terms of governance for the data, the processes, and an overall compliance of the services they're delivering?
  • Consistent Change Management. With cloud, you have a very different environment than most IT organizations are used to. You've got a completely different set of change-management issues, although they are consistent to some extent with what we've seen in SOA. You need to both maintain the services, and make sure they don't cause problems when you're doing change management.
  • Service Level Agreements (SLAs). The fourth point is making sure that the governance can increase or help monitor quality of services, both in design quality and in runtime quality. That could also include performance. ... What we've seen so far is a very limited approach to governance. ... We're going to have to see a much broader expansion over the next four or five years.
  • Managing Service Lifecycles. Looking at this from a macro perspective, we need to manage the cloud-computing life cycle. From the definitions of the services, through the deployment of the services, to the management of the services, to the performance of the services, to the retirement of the services, it's everything that's going on in the cloud. As those services get aggregated into larger business processes, that's going to require a different set of governance characteristics.
McKendrick's top five governance rationales:

There is an issue that's looming that hasn't really been discussed or addressed yet. That is the role of governance for companies that are consuming the services versus the role of governance for companies that are providing the services. On some level, companies are going to be both consumers and providers of cloud services.
  • Provisioning Management. Companies and their IT departments will be the cloud providers internally, and there is a level of ... design-time governance issues that we've been wrestling with SOA all these years that come into play as providers. They will want to manage how much of their resources are devoted to delivery of services, and to manage the costs of supplying those services.
  • SLA Management. Companies will have to tackle SLA management, which is assuring the availability of the applications they're receiving from some outside third party. So, the whole topic of governance splits in two here, because there is going to be all this activity going on outside the firewall that needs to be discussed.
  • Service Ecology Management. A lot of companies are taking on the role of a broker or brokerage. They're picking up services from partners, distributors, and aggregators, and providing those services to specific markets. They need the ability to know what services are available in order to be able to discover and identify the assets to build the application or complete a business process. How will we go about knowing what's out there and knowing what's been embedded and tested for the organization?
  • Return on Investment (ROI). ROI is another hot button, and we need to be able to determine what services and processes are delivering the best ROI. How do we measure that? How do we capture those metrics?
  • Business Involvement. How do we get the business involved [in shaping and refining the use of services in the context of business processes]? How do we move it beyond something that IT is implementing and move it to the business domain? How do we ensure that business people are intimately involved with the process and are identifying their needs? Ultimately, it's all about governing services.
Gardner's top five governance rationales:

The road to cloud computing is increasingly paved with, or perhaps is going to be held up by a lack of, governance.
  • Managing Scale. We're going to need to scale beyond what we do with business to employee (B2E). For cloud computing, we're going to need to see a greater scale for business to business (B2B) cloud ecologies, and then ultimately business to consumer (B2C) with potentially very massive scale. New business models will demand a high scale and low margin, so the scale becomes important. In order to manage scale, you need to have governance in place. ... We're going to need to see governance on API usage, but also in what you're willing to let your APIs be used for and at what scale.
  • Federated Cloud Ecologies. We need to make this work within a large cloud ecology. People coming and going in and out of an ecology of processes delivered via cloud services means we need federation, and that means open and shared governance mechanisms of some type. Standards and neutrality at some level are going to be essential for this to happen at that scale, across a larger group of participants and consumers.
  • Keep IT Happy. My third reason is that IT is going to need to buy into this. We've heard some talk recently about doing away with IT, going around IT, or doing all of these cloud mechanisms vis-à-vis the line of business folks. I think there is a role for that, and I think it's exploratory at that level. Ultimately, for an enterprise to be successful with cloud models as a business, they're going to have to take advantage of what they already have in place in IT. They need to make it IT ready and acceptable, and that means compliance. IT should have a checklist of what needs to take place in order for their resources and assets to be used vis-à-vis outside resources or even within the organization across a shared-services environment.
  • Collect the Money. The business models that we're just starting to see well up in the marketplace around cloud are also going to require governance in order to do billing, to satisfy whether the transaction has occurred, to provision people on and off based on whether they've paid properly or they're using it properly under the conditions of a license or a SLA of some kind. This needs to be done at a very granular level. Governance is going to be essential for making money at cloud types of activities.
  • Data Access Management. Lastly, cloud-based data is going to be important. We talk about transactions, services, APIs, and applications, but data needs to be shared, not just at a batch level, but at a granular level across multiple partners. To govern the security, provisioning, and protection of data at a granular level falls back once again to governance. So, I come down on the side that governance is monumental and important to advancing cloud, and that we are still quite a ways away from [controlling access] around data.
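The metering-and-provisioning loop described in the list above (gating access on payment and SLA status at a granular level, while capturing the metrics billing depends on) can be pictured as a thin entitlement check in front of every service call. This is a purely illustrative sketch; every name here is hypothetical and does not correspond to any vendor's actual governance API.

```python
# Illustrative sketch of granular usage governance: each call is checked
# against the consumer's entitlement (paid up, within quota) and metered
# for billing. All names are hypothetical.

class Entitlement:
    def __init__(self, paid_up, monthly_quota):
        self.paid_up = paid_up
        self.monthly_quota = monthly_quota
        self.used = 0  # metered usage, the raw input to billing


class Gateway:
    def __init__(self):
        self.entitlements = {}

    def register(self, consumer, paid_up=True, monthly_quota=1000):
        """Provision a consumer onto the service under license/SLA terms."""
        self.entitlements[consumer] = Entitlement(paid_up, monthly_quota)

    def call(self, consumer, api):
        """Govern a single call: provisioned? paid? within quota?"""
        ent = self.entitlements.get(consumer)
        if ent is None or not ent.paid_up:
            return (403, "not provisioned or payment lapsed")
        if ent.used >= ent.monthly_quota:
            return (429, "quota exceeded under current SLA")
        ent.used += 1
        return (200, f"{api} ok")


gw = Gateway()
gw.register("acme", paid_up=True, monthly_quota=2)
print(gw.call("acme", "/reports"))
print(gw.call("acme", "/reports"))
print(gw.call("acme", "/reports"))  # third call trips the quota
```

The point of the sketch is only that "collect the money" and "manage scale" converge on the same mechanism: a per-consumer, per-call policy check that doubles as the metering record.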
Read a full transcript of the discussion. Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.

Tuesday, June 9, 2009

Greenplum speeds creation of 'self-service' data warehouses with Enterprise Data Cloud release

Greenplum has charged headlong into cloud computing with this week's announcement of its Enterprise Data Cloud (EDC) Initiative, which aims to bring "self-service" provisioning to data warehousing and business analytics.

The San Mateo, Calif., company, which provides large-scale data processing and data analytics, says its new initiative, as well as the general availability of Greenplum Database 3.3, improves on costly and inflexible solutions that have dominated the market for decades. [Disclosure: Greenplum is a sponsor of BriefingsDirect podcasts.]

Greenplum's goal: To foster speedy creation of vast data warehouses by non-IT personnel in either public or private cloud configurations. The value of data warehouses and the business intelligence (BI) payoffs they provide are clear. And Greenplum is correct in identifying that creating warehouses from disparate data sources has been difficult, expensive and labor-intensive.

At the heart of the EDC initiative is a software-based platform that enables enterprises to create and manage any number of data warehouses and data marts that can be deployed across a common pool of physical, virtual, or public cloud infrastructures.

The key building blocks of the platform include:
  • Self-service provisioning: providing analysts and database administrators (DBAs) the ability to provision new data warehouses and data marts in minutes with a single click.

  • Massive scale and elastic expansion: the ability to load, store, and manage data at petabyte scale, and dynamically expand the size of the system without system downtime.

  • Highly optimized parallel database core: a parallel database that is optimized for business intelligence (BI) and analytics and that is linearly scalable.
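The "single click" self-service model described in the building blocks above amounts to a thin provisioning layer over a common resource pool. The toy model below illustrates that idea only; the class and method names are hypothetical and are not Greenplum's actual interfaces.

```python
# Illustrative model of self-service warehouse provisioning against a
# shared infrastructure pool. All names are hypothetical; this is not
# Greenplum's actual API.

class ResourcePool:
    """A common pool of physical, virtual, or cloud capacity (terabytes)."""

    def __init__(self, capacity_tb):
        self.capacity_tb = capacity_tb
        self.warehouses = {}

    def free_tb(self):
        return self.capacity_tb - sum(self.warehouses.values())

    def provision(self, name, size_tb):
        """'Single-click' provisioning: carve a warehouse out of the pool."""
        if size_tb > self.free_tb():
            raise RuntimeError(f"pool exhausted: {self.free_tb()} TB free")
        self.warehouses[name] = size_tb
        return name


pool = ResourcePool(capacity_tb=100)
pool.provision("marketing_mart", size_tb=10)
pool.provision("finance_mart", size_tb=25)
print(pool.free_tb())  # capacity left for the next analyst's request
```

What makes this "self-service" is that the analyst or DBA never touches the underlying servers; the platform arbitrates the shared capacity.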
Greenplum Database 3.3 is the latest version of the company's flagship database software, which adds a wide range of capabilities to streamline management and enhance performance. Among the enhancements aimed at DBAs and IT professionals:
  • Online system expansion: the ability to add servers to a database system and expand across the new servers while the system is online and responding to queries. Each additional server adds storage capacity, query performance and loading performance to the system.

  • pgAdmin III administration console: an enhanced version of pgAdmin III, which is the most popular and feature-rich open-source administration and development platform for PostgreSQL.

  • Scalability-optimized management commands: a range of enhancements to management commands, including starting and stopping the database, analyzing tables, and reintegrating failed nodes into the system. These are designed to improve performance and scalability on very large systems.
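The online-expansion behavior in the list above, where capacity and per-server load improve as servers join without taking the system down, can be sketched as a toy model. The names are hypothetical and this is not Greenplum's actual management machinery.

```python
# Toy model of online expansion: adding a server grows total storage and
# parallelism while the cluster keeps answering queries. Hypothetical
# names only; not Greenplum's actual management commands.

class Cluster:
    def __init__(self):
        self.servers = []   # storage (TB) contributed by each server
        self.online = True  # never flips to False during expansion

    def add_server(self, storage_tb):
        # Expansion happens while self.online stays True: no downtime.
        self.servers.append(storage_tb)

    def total_storage_tb(self):
        return sum(self.servers)

    def query(self, rows):
        # Work is divided across servers, so per-server load shrinks
        # (and throughput grows) as the cluster expands.
        assert self.online
        return rows / max(len(self.servers), 1)


c = Cluster()
c.add_server(10)
c.add_server(10)
print(c.total_storage_tb())  # total capacity across both servers
print(c.query(1_000_000))    # rows handled per server
```

The linear-scalability claim in the announcement corresponds to the division step in `query`: doubling the servers halves the per-server share of the work.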
Database 3.3 is supported on server hardware from a range of vendors, including HP, Dell, Sun and IBM. The software is also supported for such non-production uses as development and evaluation on Mac OS X 10.5, Red Hat Enterprise Linux 5.2 or higher (32-bit) and CentOS Linux 5.2 or higher (32-bit).

As part of the EDC initiative, Greenplum is assembling an ecosystem of customers and partners who embrace this new approach and are collaborating with Greenplum to create new technologies and standards that leverage the capabilities of the EDC platform. Early participants deploying EDC platforms on Greenplum Database include Fox Interactive Media/MySpace, Zions Bancorporation and Future Group.

I think that BI vendors will want to join in allowing Greenplum, among others, to refine and advance the notion of data warehouse "middleware" layers. This takes a burden off of IT, which can focus on providing virtualized resource pools in which to deploy solutions such as Greenplum's.

As commodity hardware is used to undergird these virtualized on-premises clouds, the total costs contract. And, as we've seen with Amazon, Rackspace and others, moving data to third-party clouds offers other potentially compelling cost advantages, even as scale issues about moving data around and security concerns are being addressed.

The automated warehouse layer approach benefits the BI vendors, as their tools and analytics engines can leverage the coalesced cloud-based data that Greenplum provides. The more and better the data, the better the BI. Cloud providers, too, may examine Greenplum with an eye to providing data warehouse instances "as a service," a value-added data service opportunity to expand general cloud services.

And, of course, the biggest winners are the business analysts and business managers -- at enterprises as well as SMBs -- who can finally get the insights from massive data pools that they long for, at a price they can realistically consider.

There will be a building symbiotic relationship between cloud computing and such data warehousing solutions as Greenplum's Enterprise Data Cloud. The more data that can be housed in accessible clouds, the greater the need to access, manage and provision additional data for analysis pay-offs.

And the more tools there are for leveraging cloud-based data, the more value there will be in moving data to clouds ... and so on. The chicken-and-egg relationship is clearly under way, with solution providers like Greenplum offering a needed catalyst to the ramp-up process.

Monday, June 8, 2009

In need of a trigger: Report from Rational Software Conference 2009

This guest post comes courtesy of Tony Baer's OnStrategies blog. Tony is a senior analyst at Ovum. His profile is here. You can reach him here.

Rational Software Conference 2009 last week was supposed to be “as real as it gets,” but in the light of day proved a bit anticlimactic. A year after ushering in Jazz, a major new generation of products, Rational has not yet made the compelling business case for it. The hole at the middle of the doughnut remains not the “what” but the “why.” Rational uses the calling cry of Collaborative ALM to promote Jazz, but that is more like a call for repairing your software process as opposed to improving your core business. Collaborative might be a good term to trot out in front of the CxO, but not without a business case justifying why software development should become more collaborative.

The crux of the problem is that although Rational has omitted the term Development from its annual confab, it still speaks the language of a development tools company.

With Jazz products barely a year old, if that, you wouldn’t expect there to be much of a Jazz installed base yet. But in isolated conversations (our sample was hardly scientific), we heard most customers telling us that Jazz to them was just another new technology requiring new server applications, which at $25,000 to $35,000 and up are not an insignificant expense. They couldn’t understand the need to add something like Requirements Composer, which makes it easier for business users to describe their requirements, if they already had RequisitePro for requirements management. They hear that future versions of Rational’s legacy products are going to be Jazz-based (their data stores will be migrated to the Jazz repository), but that is about as exciting to them as the prospect of another SAP version upgrade. All pain for little understood gain.

There are clear advantages to the new Jazz products, but Rational has not yet made the business case. Rational Insight, built on Cognos BI technology, provides KPIs that in many cases are over the heads of development managers. Jazz products such as Requirements Composer could theoretically stand on their own for lightweight software development processes if IBM sprinkled in the traceability that still requires RequisitePro. The new Measured Capability Improvement Framework (MCIF) productizes the gap-analysis assessments of software processes that Rational has performed for its clients over the years, with the addition of prescriptive measures that could make such assessments actionable.

But IBM Rational still has lots of pieces to put together first, like for starters figuring out how to charge. In our Ovum research we found that core precepts of SaaS including multi-tenancy and subscription pricing may not always apply to ALM.

But who in Rational is going to sell it? There is a small program-management consulting group that could make a credible push, but the vast majority of Rational’s sales teams are still geared toward shorter-fuse, tactical tool sales. Beyond the tendency of sales teams to focus on products like Build Forge (one of its better acquisitions), the company has not developed the national consulting organization it needs to sell solutions. That should have cleared the way for IBM’s Global Business Services to create a focused Jazz practice, but so far GBS’s Jazz activity is mostly ad hoc and engagement-driven. In some cases, Rational has been its own worst enemy: it talks strategic solutions at the top, while having mindlessly culled some of its most experienced software development process expertise during last winter’s IBM Resource Action.

Besides telling Rational to do selective rehires, we’d suggest a cross-industry effort to raise the consciousness of this profession. It needs a precursor to MCIF, because the market is just not ready for it yet outside of the development shops that are aware of frameworks like CMMI. This is missionary stuff, as organizations (and potential partners) like the International Institute of Business Analysis (IIBA) are barely established (a precedent might be organizations like Catalyze, which has heavy sponsorship from iRise). A logical partner might be the program-management profession, which is tasked with helping CIOs effectively target their limited software development resources.

Other highlights of the conference included Rational’s long-awaited disclosure of its cloud strategy and its plans for leveraging the Telelogic acquisition to drive its push into “Smarter Products.” According to our recent research for Ovum, the cloud is transforming the software development tools business, with dozens of vendors already having made plays to offer various ALM tools as services. Before this, IBM Rational made some baby steps, such as offering hosted versions of its AppScan web security tests. It is now opening technology previews of private cloud instances that can be hosted inside the firewall, or virtually using preconfigured Amazon Machine Images of Rational tooling on Amazon’s EC2 raw cloud. Next year Rational will unveil public cloud offerings.

Rational’s cloud strategy is part of a broader-based strategy for IBM Software Group, which in the long run could use the cloud as the chance to, in effect, “mash up” various tools across brands to respond to specific customer pain points, such as application quality throughout the entire lifecycle including production (e.g., Requirements Composer, Quality Manager, some automated testing tools, and Tivoli ITCAM). Ironically, the use-case “mashups” that are offered by Rational as cloud-based services might provide the very business use cases that are currently missing from its Jazz rollout.

Finally there’s the “Smarter Products” push, which is Rational’s Telelogic-based answer to IBM’s Smarter Planet campaign. It reflects the fact that the software content in durable goods is increasing to the point where software is no longer just a control module that is bolted on; increasingly, software is defining the product. Rational’s foot in the door is that many engineered-product companies (in aerospace, for example) are already heavy users of Telelogic DOORS, which is well set up for tracking requirements of very complex systems and, potentially, “systems of systems,” where a meta-control layer governs multiple smart products or processes performed by smart products.

The devil is in the details, as Rational/Telelogic has not yet established the kinds of strategic partnerships with PLM companies like Siemens, PTC or Dassault for joint product integration and go-to-market initiatives that would converge application lifecycle management with its counterpart for managing the lifecycle of engineered products (Dassault would be a likely place to start, as IBM has had a longstanding reselling arrangement with it). Roles, responsibilities, and workflows have yet to be developed or templated, leaving the whole initiative with the reality that, for now, every solution is a one-off. The organizations that Rational and the PLM companies are targeting are heavily siloed. Smarter Products as a strategy offers inviting long-term growth possibilities for IBM Rational but, at the same time, requires lots of spadework first.

This guest post comes courtesy of Tony Baer's OnStrategies blog. Tony is a senior analyst at Ovum. His profile is here. You can reach him here.