Friday, June 19, 2009

Who's Architecting the Cloud?

By Ron Schmelzer

This guest post comes courtesy of ZapThink. Ron Schmelzer is a senior analyst at ZapThink. You can reach him here.

As the hype cycle for cloud computing continues to gather steam, an increasing number of end users are starting to see the silver lining, while others are simply lost in the fog. It is clear that the debate over the definition, business model, and benefits of cloud will continue for some time, but it is also clear that the sluggish economic environment is increasing the appeal of having someone else pay for the robust infrastructure needed to run one's applications. Yet, all this talk of leveraging cloud capabilities, or perhaps even building one's own cloud, whether for public or private consumption, introduces thorny problems. How can we make sure that the cloud will bring us closer to the heavenly vision of IT we search for rather than a fog that hides a complex mess? Who will make sure that the cloud vision isn't just another reinterpretation of the Software-as-a-Service (SaaS), Application Service Provider (ASP), grid, and utility computing models that provided some technical answers but didn't simplify anything for the internal organization? Who is architecting this mess?

Architecture and the Utility Services Cloud

Most of the time, when people point to practical, in-production examples of cloud computing efforts, they are talking about the sorts of utility services offered by Amazon.com, Google, Salesforce.com, and others. The Services offered in these clouds are not built with any particular application in mind, but rather for whole categories of applications. For obvious reasons, these cloud providers seek to leverage economies of scale by serving the largest possible audience using a handful of highly reusable Services, where reuse is defined by usage in multiple contexts. For these cloud providers, the utility Services simultaneously provide a source of revenue as well as a platform their customers use to replace proprietary, in-house infrastructure and middleware.

Given that the emphasis of these Services is to meet the needs of a large and continuously growing audience with diverse requirements, the utility cloud provider's primary focus is placed on infrastructural concerns. As a result, it's the infrastructure technologists who are in charge of this cloud. When the "architecture team" meets at these cloud providers, what problems are they aiming to solve? Business problems? Certainly not. In most cases, the architecture teams for these providers (we've been privy to a number of their conversations) focus almost exclusively on technology and infrastructural concerns. Key conversations revolve around performance optimization, implementation change management, optimizing the balance between efficiency and cost, meeting reliability and uptime concerns, and addressing privacy, security, and governance issues.

Where’s the business in all this? The answer: nowhere. Where should the business be in all this? That’s a tough question to answer because without Service consumers, the cloud wouldn’t exist at all. However, it is not the goal of the cloud provider to meet any specific business requirements. Rather, the requirements are aggregated to create a business “persona” that is the focus of continual Service releases. In this manner, one could argue that there are no enterprise architects providing any value in this environment. The most pervasive form of architecture done in these environments is more akin to Information Technology Infrastructure Library (ITIL) approaches rather than any form of enterprise architecture (EA). Utility clouds are the domain of infrastructure experts, not business-IT gap bridgers or process modelers, and one could argue that this status quo will probably never change.

Architecture and the Application (Process) Cloud

However, the utility Service vision of the cloud is not the only one. Indeed, we’re starting to see the emergence of application and process clouds that provide the same infrastructural and economic benefits of clouds, but applied to process-specific concerns. These cloud providers enable the outsourcing of entire processes that run in a virtualized cloud environment as a way of handling variability in scale. For example, an insurance company can use a cloud provider's claims processing Service when their internal capacity is not sufficient to meet demand. As long as the process is Service-oriented, this approach works well and leverages the strength of the cloud's abstract infrastructure capability while staying focused on the process. This way, an organization can have its internal processes augmented by third-party cloud processes. For example, insurance clouds provide elastic capabilities for insurance applications as demand ebbs and flows. Likewise, banking, supply chain, retail, and other process-specific clouds provide cloud computing benefits for specific groups of business users.
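The overflow behavior in the claims-processing example can be reduced to a one-rule routing sketch. The capacity figure, function name, and labels below are all hypothetical, purely to illustrate the pattern, not any provider's actual API:

```python
# Hypothetical sketch of "bursting" a claims process to a cloud Service:
# work stays in-house until internal capacity is exhausted, then
# overflows to the third-party provider. The capacity figure is invented.

INTERNAL_CAPACITY = 100  # claims the in-house system can process at once

def route_claim(claims_in_flight: int) -> str:
    """Decide where the next incoming claim should be processed."""
    if claims_in_flight < INTERNAL_CAPACITY:
        return "internal"   # normal demand: keep the work in-house
    return "cloud"          # demand spike: burst to the cloud Service

print(route_claim(42), route_claim(250))  # internal cloud
```

Because the process is Service-oriented, the caller doesn't care which side actually executes the claim, which is exactly what lets the cloud absorb the ebb and flow of demand.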

In this environment, the cloud provider needs to balance two different but equally important concerns: the infrastructural issues of the sort described above, and the challenge of meeting continuously changing business requirements. When application-specific cloud provider architect groups meet, their conversations look very different from those of utility Service cloud providers. Rather than focusing on infrastructural issues as they try to meet the common denominator of needs ("speeds and feeds"), the conversation usually revolves around how the team will meet new business process requirements given the existing set of Services and infrastructure. In many ways, these teams have a true EA conversation: the continuously changing and diverse business requirements on the one hand, and the technical capabilities on the other. These EA conversations invoke aspects of Agile methodologies and EA frameworks more so than ITIL. Rather than trying to minimize the set of business processes handled by the cloud, they seek to continuously expand the universe of processes addressed.

As we often discuss in our Licensed ZapThink Architect (LZA) SOA training courses, the job of the enterprise architecture team is to optimize the conceptual equation of producing the smallest set of Services that meet the largest number of business processes. You don't want to produce too many Services; otherwise there's waste. Likewise, you don't want to produce too few Services, as that constrains the number of business processes you can address. As new Services are introduced, the universe of business processes addressed likewise increases. Application- and process-specific cloud providers are businesses that must justify their existence by staying focused on the business without disrupting existing operations. Sounds like something all enterprise architect teams should do, no?
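That "smallest set of Services covering the largest number of processes" equation is, at its core, a set-cover optimization. The sketch below is purely illustrative: the Service and process names are invented, and it uses a simple greedy heuristic rather than anything ZapThink prescribes:

```python
# Illustrative greedy set-cover sketch: choose the fewest Services whose
# combined capabilities cover all required business processes.
# All Service and process names are hypothetical.

services = {
    "PolicyLookup":   {"claims-intake", "underwriting"},
    "PaymentGateway": {"claims-payout", "billing"},
    "DocumentStore":  {"claims-intake", "billing", "underwriting"},
}
required = {"claims-intake", "underwriting", "claims-payout", "billing"}

chosen, uncovered = [], set(required)
while uncovered:
    # Greedily pick the Service covering the most still-uncovered processes
    best = max(services, key=lambda s: len(services[s] & uncovered))
    if not services[best] & uncovered:
        break  # the remaining processes cannot be covered by any Service
    chosen.append(best)
    uncovered -= services[best]

print(chosen)  # ['DocumentStore', 'PaymentGateway']
```

Too many Services and the `chosen` list bloats (waste); too few and `uncovered` never empties (unaddressed processes). That is the same trade-off the EA team is balancing.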

The ZapThink Take

In many ways, the discussion of architecture has been given short shrift in cloud computing conversations. In much the same way that the Service-Oriented Architecture (SOA) conversation degenerated into a conversation about the (often unnecessary) Enterprise Services Bus (ESB), the cloud conversation is degenerating into one about the infrastructure needed to handle scalable Service provider volume. And where is the conversation about the business process? Unless you are planning to build a general-purpose Service provider cloud to compete with the likes of Amazon.com and others, you should be focused on where the opportunity is: in the process. And to focus on the process while keeping an eye on the technology requires an enterprise architecture perspective.

The mistake many cloud-consuming companies are making is treating the cloud as an excuse not to think about enterprise architecture at all.

The thought going through the head of many a supposed architect is: “whew, thank goodness we’re putting this in the cloud so that I don’t have to invest in architecture.” Wow, what a mistake. These companies will be in for a rude awakening when they realize that all they’ve done is shifted their internal mess, which at least they have some control and visibility over, to an external mess that they have less control over. Enterprise architecture doesn’t go away simply because someone else is hosting or providing your Services. Organizations that want to have any chance of improving their agility, flexibility, reliability, and performance need to be in charge of their own architecture. There is no other option.

Given that too few cloud computing providers have your business in mind when they architect their solutions, and even the ones with a process-specific business model and approach aren't concerned with your specific business, it falls to the enterprise architects within the organization to plan, manage, and govern their own architecture. Once again, the refrain is that SOA is not something you buy, but something you do. Perhaps we can start hearing the same mantra with cloud computing? Or will the cloud succumb to the same short-sighted market pressure that doomed the ASP model and still plagues SaaS approaches? It's not up to vendors to answer this question. It's up to you … the enterprise architect. There are no shortcuts to EA.




Tuesday, June 16, 2009

HP unveils financial planning and analysis solutions designed to both optimize and modernize IT operations

LAS VEGAS -- Hewlett-Packard (HP) today unveiled its new HP Financial Planning and Analysis (FP&A) solutions, aimed at recession-beleaguered IT executives who need to cut costs, prepare for a service-based future, and run their departments like a business -- all at the same time.

FP&A is part of HP's expanding IT Financial Management (ITFM) portfolio, designed to help chief information officers (CIOs) and IT managers create comprehensive financial transparency, optimize costs deeply but prudently, and demonstrate the business value of IT services.

In a related announcement here at the HP Software Universe conference this week, HP unveiled enhancements to its project and portfolio management (PPM) solution for planning and organizing IT investments.

HP also opened its related Tech Forum conference here this week. For the second year in a row, BriefingsDirect will cover the HP Software Universe 2009 conference through a series of podcasts, blogs, transcripts and Twitter entries. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Follow the HP Software Universe 2009 conference on Twitter by searching on #HPSU09.

HP Project and Portfolio Management (PPM) Center 8.0 arrives as a key component in ITFM, providing integrated capabilities for IT portfolio investment management, global resource efficiencies and IT financial transparency.

“PPM popularity is on the rise as organizations align planned business investments with IT project portfolios,” said Daniel Stang, principal research analyst at Gartner, in a release.

Analysts, myself included, are hearing consistently from IT executives that cost-optimization, cost-containment, and cost-reduction initiatives are the top priorities being driven from the business side onto IT.

The business leaders are demanding a clear understanding of all IT costs and benefits as the global recession lingers, even if it is no longer steeply deepening. HP's enhanced IT planning and analysis solutions are designed to help IT executives reduce costs without jeopardizing IT's ability to support future growth when it's called for.

The recession therefore accelerates the need to reduce total IT cost through identification and elimination of wasteful operations and practices. But at the same time, IT departments need to better define and implement streamlined processes for operations -- and to show the near and far business value of any new projects.

As part of the opening keynote address here today, Andy Isherwood, Vice President and General Manager of HP Software and Solutions, said the recession compels better management of IT. CIOs need to reduce costs, yes, but they should do so without jeopardizing future growth.

Consolidating IT cuts costs and saves energy by focusing on the operational inefficiencies up front. "It's about getting down and dirty, not pie in the sky solutions," said Isherwood.

Along with consolidation, IT leaders can increasingly automate and virtualize infrastructure and data centers. Combined with greater financial management, IT performance analytics, and IT resources optimization, enterprises can cut their IT operations bills while setting the stage for the new phases of advancement.

And those new benefits, said Isherwood, include using flexible sourcing, from in-house, on-premises data centers to outsourcers like HP's EDS, as well as clouds, whether on premises or via off-premises partners like Amazon Web Services. As Ann Livermore of HP said yesterday: Everything as a service.

HP is already preparing to better manage and govern cloud transitions with its Cloud Assure offering, which joins IT financial management, IT performance analytics, and resource management as the next major focuses for the HP Software and Solutions group.

To sum up, Isherwood said that HP's major solutions drives are around IT Management Software, Information Management Software, BI Solutions, and Communications and Media Solutions.

HP expects that after a 12-month period of operational optimization initiatives, CIOs will also seek more transformative IT functional delivery improvements, including such next-generation data center bulwarks as consolidation, automation, and virtualization.

Today's pressing IT management and architecture decisions, then, need to gain from better financial management tools, proffer IT performance analytics, and exploit IT resources optimization techniques -- for both near- and long-term benefits.

These financial performance indicator insights and disciplines for IT will also place CIOs in a better position to look at and pursue future flexible and cost-reducing sourcing options. Those are sure to include modernizing in-house legacy deployments, outsourcing to providers such as HP's EDS, and exploring a variety of burgeoning third-party cloud offerings (on premises, off premises, or managed hybrids).

Knowing the true costs and benefits of complex and often sprawling IT portfolios quickly helps improve the financial performance, while setting up the ability to meaningfully compare and contrast current with future IT deployment scenarios. Who knows if cloud computing will save money if we don't know the true costs of all-on-premises approaches?
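The point is easy to make with a toy comparison: judge cloud pricing against only the visible on-premises hardware line item and the cloud looks expensive; add the hidden costs and the picture can reverse. Every figure below is invented for illustration:

```python
# Toy TCO sketch: cloud savings can only be judged against the *true*
# all-in on-premises cost, not just the visible hardware line item.
# All dollar figures and rates here are invented.

def on_prem_annual_cost(hardware, power, admin_staff, facilities):
    return hardware + power + admin_staff + facilities

def cloud_annual_cost(hourly_rate, hours_per_year, instances):
    return hourly_rate * hours_per_year * instances

visible   = on_prem_annual_cost(40_000, 0, 0, 0)               # hardware only
true_cost = on_prem_annual_cost(40_000, 12_000, 90_000, 18_000)
cloud     = cloud_annual_cost(0.40, 8_760, 20)                 # 20 instances, 24x7

print(visible, true_cost, round(cloud))  # 40000 160000 70080
```

Against the visible cost alone the cloud looks like a loss; against the true cost it looks like a saving. Hence the need for the financial transparency tools described here.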

Gaining real-time visibility into dynamic IT cost structures provides a powerful tool for reducing cost, while also maintaining and improving overall performance. Holistic visibility across an entire IT portfolio also develops the visual analytics that can help better probe for cost improvements and uncover waste.

This is where the HP planning, analysis and financial management solution comes to the rescue in terms of value, optimization priorities, and future planning comparisons.

The HP Financial Planning and Analysis product announced here today is designed to help organizations understand costs from a service-based perspective. It provides a common extract transform load (ETL) capability that can pull information from data sources, including HP PPM and asset management products as well as non-HP data sources.
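HP hasn't published FP&A's internals, but the general shape of such an ETL pass -- extract cost records from several systems, transform them to a common shape, load them into one store -- can be sketched generically. All source names and record fields below are hypothetical:

```python
# Generic ETL sketch of the kind an FP&A-style tool performs: pull cost
# records from heterogeneous sources, normalize them, and load them into
# a single consolidated store. All names and fields are hypothetical.

def extract(sources):
    """Yield (source_name, raw_record) pairs from every data source."""
    for name, records in sources.items():
        for rec in records:
            yield name, rec

def transform(source, rec):
    """Normalize a raw record to a common (source, service, cost) shape."""
    return {"source": source,
            "service": rec.get("service", "unknown"),
            "cost_usd": float(rec.get("cost", 0))}

def load(rows, warehouse):
    warehouse.extend(rows)

sources = {
    "ppm":   [{"service": "email", "cost": "1200"}],   # HP-style source
    "asset": [{"service": "storage", "cost": "800"}],  # non-HP source
}
warehouse = []
load((transform(s, r) for s, r in extract(sources)), warehouse)

total = sum(row["cost_usd"] for row in warehouse)
print(total)  # 2000.0
```

The value of the common shape is that downstream analysis, such as the Cost Explorer visualizations described next, can treat HP and non-HP cost data identically.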

Cost Explorer, a key component of FP&A, provides business intelligence (BI) capability for visualizing data that is applied to IT costs. Users are able to see data displays color-coded to help identify different dimensions and variants in costs.

HP FP&A can run stand-alone or in conjunction with other HP software products, such as HP Asset Manager, HP Configuration Management System, and the newly enhanced HP Project and Portfolio Management (PPM) Center 8.0.

Along with the software products, HP is also offering consulting services based on best practices, including:
  • Strategy and Advisory Services to help synthesize organizational requirements, data, process and technical gaps for developing detailed implementation roadmaps.
  • Implementation Services to provide BI services for strategic decision making including forecasting budgetary needs, quantifying the value of IT services delivered to the business, improving cost efficiency, and aligning IT resources with business needs.
  • Process Consulting and Solution Implementation Services based on the HP Service Management Reference Model help in deploying HP ITFM and HP PPM to get improved business results.
  • Best practices for Configuration Management Systems help accelerate deployment and provide a use model for customers to identify IT assets and relate them to the costs of the services delivered to the business.
Key enhancements to HP PPM Center 8.0 include:
  • IT portfolio investment management for improved alignment between IT and business with cash flow analysis that supports business reviews with actionable, real-time information.
  • HP PPM Center Mobility Access for governing IT expenditures through secure and automated checkpoints from mobile devices, which send email notifications and workflow actions to cell phones and PDAs.
  • Global resource efficiencies for managing human resources with reports and notifications in the recipient’s language.
  • Additional IT financial transparency and controls for decision support with a comprehensive financial summary that aggregates IT investment data and related analyses.
  • HP Universal Configuration Management Database (UCMDB) integration with HP PPM Center 8.0 provides advanced search capabilities for business and technical users.
  • HP Service Manager integration offers a single IT services access point, so users can access services by creating an HP PPM Center proposal from an HP Service Manager catalog item via Web services.
What's more, HP PPM is now available in a Software-as-a-Service (SaaS)-delivered solution that offers accelerated deployment. Expect a lot more from me on this subject, via podcasts and interviews with the key leaders.

HP is also offering new Software Professional Services for HP PPM 8.0, including:
  • Solution Consulting Services for PPM 8.0 providing design and implementation consulting to help customers reduce IT costs by automating enterprise-wide portfolio management via services.
  • Fast Track Deployment and Upgrades to help speed deployment of the new software.
BriefingsDirect contributor Rich Seeley provided research and editorial assistance on this post. He can be reached at RichSeeley@aol.com.

'Everything' as a service future means transforming IT for efficiency and scale, says HP's Livermore

LAS VEGAS -- Hewlett-Packard opened its Tech Forum 2009 conference here Monday evening with a portrait of a future in which everything in IT is delivered -- and perhaps consumed -- as a service.

Ann Livermore, Executive Vice President for HP's Technology Solutions Group (TSG), said the recession and technology advances have combined to offer a new era in computing, one where a hybrid of sourcing and delivery means moves all IT assets to the level of a service.

Livermore identified three mega trends now buffeting the IT landscape: Information explosion, Everything as a Service, and Data Center Transformation.

[Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

But CIOs and IT managers will also see more infrastructure, application development, applications, data, business intelligence, and IT management delivered as services, either from on-premises next-generation data centers, services abstracted from legacy systems, via outsourced IT operations and also from a growing ecology of third-party cloud providers.

In addition, Livermore said that providing such IT services, via HP's acquisition of EDS, now accounts for the majority of HP's revenues. "Services is now HP's biggest business," she said.

The current goal then for IT is to manage IT operations for cost efficiency and performance optimization while preparing for a transformation to the "everything" services future.

In a hint of a building tussle with Cisco, Livermore said much more is to come from HP in networking equipment and solutions. "We'll be more aggressive ... we're serious," she said. Cisco has entered HP's server business turf, and HP has been pushing deeper into Cisco's core networking-equipment market. A market clash is under way. Brocade, a Cisco competitor, is a major sponsor of this year's Tech Forum conference.

See more about what went on during the keynote in a live stream by doing a Twitter search on #HPTF.

Livermore's keynote address also emphasized energy conservation as an essential ingredient of today's IT operations. If you don't transform your data center, you'll find yourself running out of electricity in a few years, she told the attendees. I believe that.

Keynote speaker Paul Miller, HP Vice President of Enterprise Servers and Storage Marketing, sees strong growth for HP in virtualization, private cloud, and "Extreme ScaleOut" products.

So much so that he introduced a new product, the HP Extreme ScaleOut server, a powerful pooled-resource server that can be managed as a cloud and helps conserve energy, space, and costs. The device is based on ProLiant SL technology but is "skinless," meaning it fits into racks with much less weight, waste, and footprint. Mean and green was the message.

Furthermore, Miller said "storage as a service" is coming from HP that works like a storage area network (SAN) but with far less complexity, behaving like a private cloud with much lower total storage cost.

Lastly, Prith Banerjee, Senior Vice President and Research Director of HP Labs, provided a fascinating look at HP research efforts in eight areas:

--Digital commercial printing

--Intelligent infrastructure

--Content transformation

--Immersive interactions

--Information management

--Analytics

--Cloud

--Sustainability (i.e., Green IT)

If you have a chance to watch Banerjee's presentation online, I highly recommend it.

My major take-away from the presentations was that HP, and much of the IT industry, now knows what needs to be done to move IT into its next era. It's all pretty clear. But getting there ... that's the rub. And to fail is probably to die as a competitive organization.

PostgreSQL delivers alternative for MySQL users wary of Oracle's Sun acquisition

Potential MySQL customers who are wary of the database's future under Oracle stewardship have a possible alternative in Postgres Plus, an open source alternative from EnterpriseDB, says that company’s CEO, Ed Boyajian.

He sees reality biting the MySQL community as a feeding frenzy plays out in the software-acquisition food chain: Sun Microsystems gobbled up MySQL last year, and now Oracle is likely to snap up Sun. “When MySQL got acquired by Sun, a lot of that community got fractured,” Boyajian told BriefingsDirect. “That fracturing started with Sun and continues with Oracle so I think that will have an impact on adoption patterns.”

He says potential MySQL customers, wary of getting “sucked into Oracle’s sales machine,” are looking at EnterpriseDB’s Postgres Plus® Advanced Server, the company’s relational database management system (RDBMS) product, which is based on the PostgreSQL open source database.

Competing with Oracle is nothing new for EnterpriseDB, which has been playing David to Oracle's Goliath in the database market for years. This David has its own Goliath watching its back, though: IBM is an investor in, and has a partnership with, the Westford, Mass. company, which was founded in 2004.

The latest version of Postgres Plus, being released today, is touted by EnterpriseDB as “the fifth-generation of Oracle compatibility technology,” which allows Oracle customers to move applications to the EnterpriseDB database.

This version of Postgres Plus is designed to require “minimal migration effort” for Oracle customers looking for a low-cost, open source-based RDBMS as an alternative to the giant vendor’s proprietary database products.

Oracle buying Sun and acquiring MySQL does have a positive side, Boyajian says.

“When Oracle acquires Sun and gets a great asset like MySQL it’s a great endorsement for open source software,” he said.

His company maintains a close relationship with the Postgres community, Boyajian said. Several EnterpriseDB employees are "key core members" of Postgres, he said.

One of the selling points for Postgres Plus is that it runs on commodity hardware and now it is being deployed in virtual and cloud environments.

“There are some customers that are using blade servers,” Jim Mlodgenski, EnterpriseDB's chief architect, told BriefingsDirect. “For the cache servers [used heavily in social networking apps] you don’t need much horsepower as far as the CPU goes.”

Social networking sites have greater requirements for maintaining a data cache in memory rather than for CPU power, he explained. Postgres Plus offers a feature called “Infinite Cache” to support those requirements.

Some customers take advantage of the commodity prices for “one CPU and a lot of RAM,” Mlodgenski said. “Using commodity hardware at the caching layer you’re able to leverage low-cost commodity hardware to cache everything, and get the performance benefits of running everything in memory without investing a lot in high-end SAN [storage area network] boxes,” the architect explained.
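The pattern Mlodgenski describes -- commodity boxes with lots of RAM answering reads in front of the database -- is essentially a read-through cache. The sketch below illustrates the idea only; it is not EnterpriseDB's Infinite Cache implementation, and all keys and values are invented:

```python
# Hypothetical read-through cache sketch: serve hot reads from RAM on
# commodity hardware, touching the database only on a miss. This shows
# the pattern, not EnterpriseDB's actual Infinite Cache feature.

database = {"user:1": "alice", "user:2": "bob"}  # stand-in for the RDBMS
cache = {}                                       # the in-memory layer

def read_through(key):
    if key in cache:              # hit: no database round-trip at all
        return cache[key], "hit"
    value = database.get(key)     # miss: fetch from the database ...
    cache[key] = value            # ... and keep it in RAM for next time
    return value, "miss"

print(read_through("user:1"))  # ('alice', 'miss')
print(read_through("user:1"))  # ('alice', 'hit')
```

The CPU does almost nothing here; what matters is how much of the working set fits in RAM, which is why cheap, memory-heavy machines suit this layer.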

The cloud is also on the horizon for Postgres Plus users. “We have other people who are deploying in more virtualized environments, cloud environments,” Mlodgenski said.

He said that when the product was designed several years ago it wasn't focused on the cloud, but because of its flexible architecture, Postgres Plus users have been able to move into cloud environments such as Amazon EC2.

BriefingsDirect contributor Rich Seeley provided research and editorial assistance on this post. He can be reached at RichSeeley@aol.com.

Friday, June 12, 2009

Cloud grows globally: Russia, South Korea, and Malaysia join Open Cirrus Initiative

More evidence has emerged that cloud research and development is a growing worldwide phenomenon.

The Open Cirrus initiative spread across more borders this week with the addition of the Russian Academy of Sciences, South Korea’s Electronics and Telecommunications Research Institute, and the Ministry of Science, Technology and Innovation in Malaysia (MIMOS).

A global, multiple data center, open-source test bed for the advancement of cloud computing research, Open Cirrus was started last summer by HP, Intel Corp. and Yahoo! Inc. The goal is to “promote open collaboration among industry, academia and governments by removing the financial and logistical barriers to research in data-intensive, Internet-scale computing,” the founders say.

Prior to announcing the three newest members at this week’s Open Cirrus Summit in Palo Alto, Calif., the founders had already attracted researchers from the University of Illinois at Urbana Champaign, the Karlsruhe Institute of Technology, Germany, and the Infocomm Development Authority, Singapore.

IDC predicts that cloud computing will become a $42 billion market by 2012. Rival IBM announced its own Blue Cloud initiative earlier this year and in April opened its first cloud computing laboratory in Hong Kong. IBM is also ramping up its PaaS offerings.

HP also has developed Cloud Assure to help make any moves to cloud models mission critical in nature. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Not to be left out, Oracle Corp. is refining its grid middleware into cloud software products and is partnering with Amazon Web Services, one of the early cloud pioneers.

And, of course, the other 900-pound gorilla, Microsoft, has its Azure initiative although it is not entirely clear what shape that cloud will take.

So as vaudeville comic Jimmy Durante used to say when the stage got crowded: “Everybody wants to get into the act.”

The HP, Intel, Yahoo! initiative is impressive not only for the membership it is attracting but for the seriousness and scope of the Open Cirrus approach.

With a growing membership list, the Open Cirrus community offers researchers worldwide “access to new approaches and skill sets that will enable them to more quickly realize the full potential of cloud computing,” according to this week’s announcement. The new members plan to host additional test bed research sites, expanding Open Cirrus to nine locations, “creating the most geographically diverse cloud computing test bed currently available to researchers.”

This expands the cloud test bed to an “unprecedented scale,” according to Prith Banerjee, senior vice president of research at HP and director of HP Labs. He sees the Open Cirrus collaboration with academia, government and industry as “vital in charting the course for the future of cloud computing in which everything will be delivered as a service.”

The new members bring impressive resources to Open Cirrus.

The Russian Academy of Sciences, the first Eastern European institution to join Open Cirrus, provides R&D from three of its own organizations:
  • Institute for System Programming (ISP), which will conduct fundamental scientific research and applications in the field of system programming.

  • Joint SuperComputer Center (JSCC), which will engage in the processing of large arrays of biological data, nanotechnology, 3D modeling and other applications, and port them to cloud infrastructure.

  • Russian Research Center Kurchatov Institute, which will explore how cloud computing is different from other technologies, and apply its techniques for large-scale data processing.
South Korea’s Electronics and Telecommunications Research Institute plans to conduct research and development on the management architecture and content retrieval of massive data sets.

MIMOS in Malaysia plans to develop a national cloud computing platform to deploy services throughout Malaysia, focusing on enabling services through software, security frameworks and mobile interactivity, as well as testing new cloud tools and methodologies.

Andrew Chien, vice president and director of Intel Research, sees these added resources and projects creating a critical mass for “our vision of an open-source cloud stack as a strong, large-scale platform for research and development.”

BriefingsDirect contributor Rich Seeley provided research and editorial assistance to BriefingsDirect on this post. He can be reached at RichSeeley@aol.com.

Thursday, June 11, 2009

Analysts define growing requirements for how governance needs to support corporate adoption of cloud computing

Read a full transcript of the discussion. Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Welcome to the latest BriefingsDirect Analyst Insights Edition, Vol. 42. Our latest topic centers on governance as a requirement and an enabler for cloud computing.

Our panel of IT analysts discusses the emerging requirements for a new and larger definition of governance. It's more than IT governance, or service-oriented architecture (SOA) governance. The goal is really more about extended enterprise processes, resource consumption, and resource-allocation governance.

In other words, "total services governance." Any meaningful move to cloud-computing adoption, certainly that which aligns and coexists with existing enterprise IT, will need to have such total governance in place. Already, we see a lot of evidence that the IT vendor community and the cloud providers themselves recognize this pending market need and the requirement for additional governance.

So listen then as we go round-robin with our IT analyst panelists on their top five reasons why service governance is critical and mandatory for enterprises to properly and safely modernize and prosper vis-à-vis cloud computing: David A. Kelly, president of Upside Research; Joe McKendrick, independent analyst and ZDNet blogger, and Ron Schmelzer, senior analyst at ZapThink. Our discussion is hosted and moderated by me, BriefingsDirect's Dana Gardner.

Here are some excerpts ...
Schmelzer's top four governance rationales:

At ZapThink we just did a survey of the various topics that people are interested in for education, training, and stuff like that. The number one thing that people came back with was governance.
  • Control. So the first reason to use governance is to prevent chaos. ... You want the benefit of loose coupling. That is, you want the benefit of being able to take any service and compose it with any other service without necessarily having to get the service provider involved. ... But the problem is how to prevent people from combining these services in ways that produce unpredictable or undesirable results. A lot of the effort in runtime governance goes toward preventing that unpredictability.
  • Design Parameters. Two, then there is the design-time thing. How do you make sure services are provided in a reliable, predictable way? People want to create services. Just because you can build a service doesn't mean that your service looks like somebody else's service. How do you prevent issues of incompatibility? How do you prevent issues of different levels of compliance?
  • Policy Adherence. Of course, the third one is around policy. How do you make sure that the various services comply with the various corporate policies, runtime policies, IT policies, whatever those policies are?
  • Reliability. To add a fourth, people are starting to think more and more about governance, because we see the penalty for what happens when IT fails. People don't want to be consuming stuff from the cloud or putting stuff into a cloud and risking the fact that the cloud may not be available or the service of the cloud may not be available. They need to have contingency plans, but IT contingency plans are a form of governance.
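To make the policy-adherence point above concrete, here is a minimal sketch of a runtime governance gate that blocks a service call violating corporate policy. The policy names, checks, and registry shown are purely hypothetical, not any vendor's actual product.

```python
# A minimal sketch of runtime policy governance: validate each service
# request against a registry of corporate policies before invocation.
# The policy names and rules here are hypothetical illustrations.

POLICIES = {
    "encrypt_in_transit": lambda req: req.get("scheme") == "https",
    "approved_region": lambda req: req.get("region") in {"us-east", "eu-west"},
}

def invoke_service(request):
    """Reject a service call that violates any registered policy."""
    violations = [name for name, check in POLICIES.items() if not check(request)]
    if violations:
        raise PermissionError(f"Blocked by governance policies: {violations}")
    return "invoked"  # placeholder for the real service invocation

print(invoke_service({"scheme": "https", "region": "eu-west"}))  # invoked
```

The point of the sketch is the composition problem Schmelzer raises: the gate sits between consumer and provider, so services can be freely combined while unpredictable or non-compliant combinations are stopped at runtime.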
Kelly's top five governance rationales:

At one level, what we're going to see in cloud computing and governance is a pretty straightforward extension of what you've seen in terms of SOA governance, building bottom-up from the services governance area. As you said, it gets interesting when you start to up-level it from individual services into the business processes and start talking about how those are going to be deployed in the cloud.
  • Focus on Business Goals. My first point is one of the key areas where governance is critical for the cloud, and that is ensuring that you're connecting the business goals with those cloud services. As services move out to the cloud, there's a larger perspective and with it the potential for greater disruption.
  • Ensuring Compliance. [Governance] is going to be the initial driver that you're going to see in the cloud in terms of compliance for data security, privacy, and those types of things. Can the consumers trust the services that they're interacting with, and can the providers provide some kind of assurance in terms of governance for the data, the processes, and an overall compliance of the services they're delivering?
  • Consistent Change Management. With cloud, you have a very different environment than most IT organizations are used to. You've got a completely different set of change-management issues, although they are consistent to some extent with what we've seen in SOA. You need to both maintain the services, and make sure they don't cause problems when you're doing change management.
  • Service Level Agreements (SLAs). The fourth point is making sure that the governance can increase or help monitor quality of services, both in design quality and in runtime quality. That could also include performance. ... What we've seen so far is a very limited approach to governance. ... We're going to have to see a much broader expansion over the next four or five years.
  • Managing Service Lifecycles. Looking at this from a macro perspective, we need to manage the cloud-computing life cycle. From the definitions of the services, through the deployment of the services, to the management of the services, to the performance of the services, to the retirement of the services, it's everything that's going on in the cloud. As those services get aggregated into larger business processes, that's going to require a different set of governance characteristics.
McKendrick's top five governance rationales:

There is an issue that's looming that hasn't really been discussed or addressed yet. That is the role of governance for companies that are consuming the services versus the role of governance for companies that are providing the services. On some level, companies are going to be both consumers and providers of cloud services.
  • Provisioning Management. Companies and their IT departments will be the cloud providers internally, and there is a level of ... design-time governance issues that we've been wrestling with in SOA all these years that come into play as providers. They will want to manage how much of their resources are devoted to delivery of services, and to manage the costs of supplying those services.
  • SLA Management. Companies will have to tackle SLA management, which is assuring the availability of the applications they're receiving from some outside third party. So, the whole topic of governance splits in two here, because there is going to be all this activity going on outside the firewall that needs to be discussed.
  • Service Ecology Management. A lot of companies are taking on the role of a broker or brokerage. They're picking up services from partners, distributors, and aggregators, and providing those services to specific markets. They need the ability to know what services are available in order to be able to discover and identify the assets to build the application or complete a business process. How will we go about knowing what's out there and knowing what's been embedded and tested for the organization?
  • Return on Investment (ROI). ROI is another hot button, and we need to be able to determine what services and processes are delivering the best ROI. How do we measure that? How do we capture those metrics?
  • Business Involvement. How do we get the business involved [in shaping and refining the use of services in the context of business processes]? How do we move it beyond something that IT is implementing and move it to the business domain? How do we ensure that business people are intimately involved with the process and are identifying their needs? Ultimately, it's all about governing services.
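The SLA management point above ultimately reduces to arithmetic that governance tooling has to automate: measure availability of the third-party service and compare it against the contracted target. A minimal sketch, with illustrative probe counts and an assumed 99.9 percent target:

```python
# A hypothetical sketch of SLA monitoring for a consumed cloud service:
# compare measured availability against the contracted target percentage.

def availability(successful_checks, total_checks):
    """Availability as a percentage of successful health probes."""
    return successful_checks / total_checks * 100.0

def sla_breached(successful_checks, total_checks, target_pct=99.9):
    """True when measured availability falls below the SLA target."""
    return availability(successful_checks, total_checks) < target_pct

# Suppose 8,637 of 8,640 five-minute probes succeeded this month.
print(sla_breached(8637, 8640))  # False: ~99.97%, within the 99.9% target
print(sla_breached(8600, 8640))  # True: ~99.54%, target missed
```

In practice this is exactly the "outside the firewall" activity McKendrick flags: the consumer, not the provider, needs an independent record of availability to hold the provider to its agreement.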
Gardner's top five governance rationales:

The road to cloud computing is increasingly paved with, or perhaps is going to be held up by a lack of, governance.
  • Managing Scale. We're going to need to scale beyond what we do with business to employee (B2E). For cloud computing, we're going to need to see a greater scale for business to business (B2B) cloud ecologies, and then ultimately business to consumer (B2C) with potentially very massive scale. New business models will demand a high scale and low margin, so the scale becomes important. In order to manage scale, you need to have governance in place. ... We're going to need to see governance on API usage, but also in what you're willing to let your APIs be used for and at what scale.
  • Federated Cloud Ecologies. We need to make this work within a large cloud ecology. With people coming and going in and out of an ecology of process delivered via cloud services, we need federation. That means open and shared governance mechanisms of some type. Standards and neutrality at some level are going to be essential for this to happen at that scale across a larger group of participants and consumers.
  • Keep IT Happy. My third reason is that IT is going to need to buy into this. We've heard some talk recently about doing away with IT, going around IT, or doing all of these cloud mechanisms vis-à-vis the line of business folks. I think there is a role for that, and I think it's exploratory at that level. Ultimately, for an enterprise to be successful with cloud models as a business, they're going to have to take advantage of what they already have in place in IT. They need to make it IT ready and acceptable, and that means compliance. IT should have a checklist of what needs to take place in order for their resources and assets to be used vis-à-vis outside resources or even within the organization across a shared-services environment.
  • Collect the Money. The business models that we're just starting to see well up in the marketplace around cloud are also going to require governance in order to do billing, to satisfy whether the transaction has occurred, to provision people on and off based on whether they've paid properly or they're using it properly under the conditions of a license or a SLA of some kind. This needs to be done at a very granular level. Governance is going to be essential for making money at cloud types of activities.
  • Data Access Management. Lastly, cloud-based data is going to be important. We talk about transactions, services, APIs, and applications, but data needs to be shared, not just at a batch level, but at a granular level across multiple partners. To govern the security, provisioning, and protection of data at a granular level falls back once again to governance. So, I come down on the side that governance is monumental and important to advancing cloud, and that we are still quite a ways away from [controlling access] around data.
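The metering behind the "collect the money" point can be sketched in a few lines: record each use at a granular level, bill against a rate, and suspend provisioning once the license terms are exhausted. The rate and quota figures below are invented purely for illustration.

```python
# A sketch of the granular metering that pay-per-use billing requires.
# Rates and quotas are illustrative, not any provider's actual pricing.
from collections import defaultdict

RATE_PER_CALL = 0.001      # dollars per API call (hypothetical)
MONTHLY_QUOTA = 100_000    # calls allowed under the subscription

usage = defaultdict(int)

def record_call(consumer):
    """Meter one API call; refuse service once the quota is exhausted."""
    if usage[consumer] >= MONTHLY_QUOTA:
        raise RuntimeError(f"{consumer} exceeded quota; suspend provisioning")
    usage[consumer] += 1

def invoice(consumer):
    """Amount owed for the consumer's metered usage this period."""
    return usage[consumer] * RATE_PER_CALL

for _ in range(2500):
    record_call("acme")
print(f"acme owes ${invoice('acme'):.2f}")   # acme owes $2.50
```

The same per-consumer ledger that drives billing also answers the provisioning question: whether a consumer is "using it properly under the conditions of a license or a SLA."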
Read a full transcript of the discussion. Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Tuesday, June 9, 2009

Greenplum speeds creation of 'self-service' data warehouses with Enterprise Data Cloud release

Greenplum has charged headlong into cloud computing with this week's announcement of its Enterprise Data Cloud (EDC) Initiative, which aims to bring "self-service" provisioning to data warehousing and business analytics.

The San Mateo, Calif. company, which provides large-scale data processing and data analytics, says its new initiative, as well as the general availability of Greenplum Database 3.3, improves on costly and inflexible solutions that have dominated the market for decades. [Disclosure: Greenplum is a sponsor of BriefingsDirect podcasts.]

Greenplum's goal: To foster speedy creation of vast data warehouses by non-IT personnel in either public or private cloud configurations. The value of data warehouses and the business intelligence (BI) payoffs they provide are clear. And Greenplum is correct in identifying that creating warehouses from disparate data sources has been difficult, expensive and labor-intensive.

At the heart of the EDC initiative is a software-based platform that enables enterprises to create and manage any number of data warehouses and data marts that can be deployed across a common pool of physical, virtual, or public cloud infrastructures.

The key building blocks of the platform include:
  • Self-service provisioning: providing analysts and database administrators (DBAs) the ability to provision new data warehouses and data marts in minutes with a single click.

  • Massive scale and elastic expansion: the ability to load, store, and manage data at petabyte scale, and dynamically expand the size of the system without system downtime.

  • Highly optimized parallel database core: a parallel database that is optimized for business intelligence (BI) and analytics and that is linearly scalable.
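Conceptually, self-service provisioning against a common resource pool might look like the following sketch. The API shown is hypothetical, meant only to illustrate the "single click" model, and is not Greenplum's actual interface.

```python
# A conceptual sketch of self-service provisioning: carve new data marts
# out of a shared pool of segment servers, if capacity allows.
# The pool size and function names here are hypothetical.

POOL_SEGMENTS = 64          # segment servers in the shared pool
allocations = {}

def provision_warehouse(name, segments):
    """Allocate a new data mart from the common pool."""
    used = sum(allocations.values())
    if used + segments > POOL_SEGMENTS:
        raise RuntimeError("pool exhausted; expand the cluster first")
    allocations[name] = segments
    return f"{name}: {segments} segments online"

print(provision_warehouse("marketing_mart", 8))
print(provision_warehouse("finance_mart", 16))
```

The design point is that the analyst or DBA requests capacity, and the platform, not IT, does the allocation — which is what moves warehouse creation from a labor-intensive project to a minutes-long operation.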
Greenplum Database 3.3 is the latest version of the company's flagship database software, which adds a wide range of capabilities to streamline management and enhance performance. Among the enhancements aimed at DBAs and IT professionals:
  • Online system expansion: the ability to add servers to a database system and expand across the new servers while the system is online and responding to queries. Each additional server adds additional storage capacity, query performance and loading performance to the system.

  • pgAdmin III administration console: an enhanced version of pgAdmin III, which is the most popular and feature-rich open-source administration and development platform for PostgreSQL.

  • Scalability-optimized management commands: a range of enhancements to management commands, including starting and stopping the database, analyzing tables, and reintegrating failed nodes into the system. These are designed to improve performance and scalability on very large systems.
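The linear-scalability claim behind online expansion can be illustrated with a back-of-the-envelope model: with data evenly distributed across servers, scan time for a fixed dataset falls in proportion to the server count. The per-server throughput figure below is an arbitrary assumption, and the model ignores coordination overhead.

```python
# A back-of-the-envelope model of linear scale-out: a fixed dataset,
# evenly distributed, is scanned by all servers in parallel.

def scan_seconds(dataset_gb, servers, gb_per_sec_per_server=0.5):
    """Idealized parallel scan time, ignoring coordination overhead."""
    return dataset_gb / (servers * gb_per_sec_per_server)

for n in (4, 8, 16):
    print(f"{n} servers: {scan_seconds(1000, n):.0f}s")  # 500s, 250s, 125s
```

Doubling the servers halves the idealized scan time, which is why each added server contributes storage capacity, query performance and loading performance at once.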
Database 3.3 is supported on server hardware from a range of vendors including HP, Dell, Sun and IBM. The software is also supported for such non-production uses as development and evaluation on Mac OS X 10.5, Red Hat Enterprise Linux 5.2 or higher (32-bit) and CentOS Linux 5.2 or higher (32-bit).

As part of the EDC initiative, Greenplum is assembling an ecosystem of customers and partners who embrace this new approach and are collaborating with Greenplum to create new technologies and standards that leverage the capabilities of the EDC platform. Early participants deploying EDC platforms on Greenplum Database include Fox Interactive Media/MySpace, Zions Bancorporation and Future Group.

I think that BI vendors will want to join in allowing Greenplum, among others, to refine and advance the notion of data warehouse "middleware" layers. This takes a burden off of IT, which can focus on providing virtualized resource pools in which to deploy solutions such as Greenplum's.

As commodity hardware is used to undergird these virtualized on-premises clouds, the total costs contract. And, as we've seen with Amazon, Rackspace and others, moving data to a third-party cloud offers other potentially compelling cost advantages, even as scale issues about moving data around and security concerns are being addressed.

The automated warehouse layer approach benefits the BI vendors, as their tools and analytics engines can leverage the coalesced cloud-based data that Greenplum provides. The more and better the data, the better the BI. Cloud providers, too, may examine Greenplum with an eye to providing data warehouse instances "as a service," a value-added data service opportunity to expand general cloud services.

And, of course, the biggest winners are the business analysts and business managers -- at enterprises as well as SMBs -- who can finally get the insights from massive data pools that they long for, at a price they can realistically consider.

There will be a building symbiotic relationship between cloud computing and such data warehousing solutions as Greenplum's Enterprise Data Cloud. The more data that can become housed in accessible clouds, the more need to access, manage and provision additional data for analysis pay-offs.

And the more tools there are for leveraging cloud-based data, the more value there will be to moving data to clouds ... and so on. The chicken-and-egg relationship is clearly under way, with solutions providers like Greenplum offering a needed catalyst to the ramp-up process.

Monday, June 8, 2009

In need of a trigger: Report from Rational Software Conference 2009

This guest post comes courtesy of Tony Baer's OnStrategies blog. Tony is a senior analyst at Ovum. His profile is here. You can reach him here.

Rational Software Conference 2009 last week was supposed to be “as real as it gets,” but in the light of day proved a bit anticlimactic. A year after ushering in Jazz, a major new generation of products, Rational has not yet made the compelling business case for it. The hole at the middle of the doughnut remains not the “what” but the “why.” Rational uses the calling cry of Collaborative ALM to promote Jazz, but that is more like a call for repairing your software process as opposed to improving your core business. Collaborative might be a good term to trot out in front of the CxO, but not without a business case justifying why software development should become more collaborative.

The crux of the problem is that although Rational has omitted the term Development from its annual confab, it still speaks the language of a development tools company.

With Jazz products barely a year old, if that, you wouldn’t expect there to be much of a Jazz installed base yet. But in isolated conversations (our sample was hardly scientific), we heard most customers telling us that Jazz to them was just another new technology requiring new server applications, which at $25,000 - $35,000 and up are not an insignificant expense; they couldn’t understand the need for adding something like Requirements Composer, which makes it easier for business users to describe their requirements, if they already had RequisitePro for requirements management. They hear that future versions of Rational’s legacy products are going to be Jazz-based (their data stores will be migrated to the Jazz repository), but that is about as exciting to them as the prospect of another SAP version upgrade. All pain for little understood gain.

There are clear advantages to the new Jazz products, but Rational has not yet made the business case. Rational Insight, built on Cognos BI technology, provides KPIs that in many cases are over the heads of development managers. A Jazz product such as Requirements Composer could theoretically stand on its own for lightweight software development processes if IBM sprinkled in the traceability that still requires RequisitePro. The new Measured Capability Improvement Framework (MCIF) productizes the gap analysis assessments that Rational has performed over the years for its clients regarding software processes, with the addition of prescriptive measures that could make such assessments actionable.

But IBM Rational still has lots of pieces to put together first, like for starters figuring out how to charge. In our Ovum research we found that core precepts of SaaS including multi-tenancy and subscription pricing may not always apply to ALM.

But who in Rational is going to sell it? There is a small program management consulting group that could make a credible push, but the vast majority of Rational’s sales teams are still geared towards shorter-fuse tactical tools sales. Yet beyond the tendency of sales teams to focus on products like Build Forge (one of its better acquisitions), the company has not developed the national consulting organization it needs to do solution sells. That should have cleared the way for IBM’s Global Business Services to create a focused Jazz practice, but so far GBS’s Jazz activity is mostly ad hoc, engagement-driven. In some cases, Rational has been its own worst enemy as it talks strategic solutions at the top, while having mindlessly culled some of its most experienced process expertise for software development during last winter’s IBM Resource Action.

Besides telling Rational to do selective rehires, we’d suggest a cross-industry effort to raise the consciousness of this profession. It needs a precursor to MCIF because the market is just not ready for it yet, outside of the development shops that have awareness of frameworks like CMMi. This is missionary stuff, as organizations (and potential partners) like the International Association of Business Analysts (IIBA) are barely established (a precedent might be organizations like Catalyze that has heavy sponsorship from iRise). A logical partner might be the program management profession, which is tasked with helping CIOs effectively target their limited software development resources.

Other highlights of the conference included Rational’s long-awaited disclosure of its cloud strategy, and plans for leveraging the Telelogic acquisition to drive its push into “Smarter Products.” According to our recent research for Ovum, the cloud is transforming the software development tools business, with dozens of vendors already having made plays for offering various ALM tools as services. Before this, IBM Rational made some baby steps, such as offering hosted versions of its AppScan web security tests. It is opening technology previews of private cloud instances that could be hosted inside the firewall or virtually using preconfigured Amazon Machine Images of Rational tooling on Amazon’s EC2 raw cloud. Next year Rational will unveil public cloud offerings.

Rational’s cloud strategy is part of a broader-based strategy for IBM Software Group, which in the long run could use the cloud as the chance to, in effect, “mash up” various tools across brands to respond to specific customer pain points, such as application quality throughout the entire lifecycle including production (e.g., Requirements Composer, Quality Manager, some automated testing tools, and Tivoli ITCAM, for instance). Ironically, the use case “mashups” that are offered by Rational as cloud-based services might provide the very business use cases that are currently missing from its Jazz rollout.

Finally there’s the “Smarter Products” push, which is Rational’s Telelogic-based rationale to IBM’s Smarter Planet campaign. It reflects the fact that the software content in durable goods is increasing to the point where it is no longer just a control module that is bolted on; increasingly, software is defining the product. Rational’s foot in the door is that many engineered-product companies (like in aerospace) are already heavy users of Telelogic DOORS, which is well set up for tracking requirements of very complex systems, and potentially, “systems of systems” where you have a meta-control layer that governs multiple smart products or processes performed by smart products.

The devil is in the details, as Rational/Telelogic has not yet established the kinds of strategic partnerships with PLM companies like Siemens, PTC or Dassault for joint product integration and go-to-market initiatives for converging application lifecycle management with its counterpart for managing the lifecycle of engineered products (Dassault would be a likely place to start, as IBM has had a longstanding reselling arrangement). Roles, responsibilities, and workflows have yet to be developed or templated, leaving the whole initiative with the reality that, for now, every solution is a one-off. The organizations that Rational and the PLM companies are targeting are heavily siloed. Smarter Products as a strategy offers inviting long-term growth possibilities for IBM Rational, but at the same time requires lots of spadework first.
