Tuesday, June 9, 2009

Greenplum speeds creation of 'self-service' data warehouses with Enterprise Data Cloud release

Greenplum has charged headlong into cloud computing with this week's announcement of its Enterprise Data Cloud (EDC) Initiative, which aims to bring "self-service" provisioning to data warehousing and business analytics.

The San Mateo, Calif., company, which provides large-scale data processing and data analytics, says its new initiative, as well as the general availability of Greenplum Database 3.3, improves on the costly and inflexible solutions that have dominated the market for decades. [Disclosure: Greenplum is a sponsor of BriefingsDirect podcasts.]

Greenplum's goal: To foster speedy creation of vast data warehouses by non-IT personnel in either public or private cloud configurations. The value of data warehouses and the business intelligence (BI) payoffs they provide are clear. And Greenplum is correct in identifying that creating warehouses from disparate data sources has been difficult, expensive and labor-intensive.

At the heart of the EDC initiative is a software-based platform that enables enterprises to create and manage any number of data warehouses and data marts that can be deployed across a common pool of physical, virtual, or public cloud infrastructures.

The key building blocks of the platform include:
  • Self-service provisioning: letting analysts and database administrators (DBAs) provision new data warehouses and data marts in minutes with a single click (see the sketch after this list).

  • Massive scale and elastic expansion: the ability to load, store, and manage data at petabyte scale, and dynamically expand the size of the system without system downtime.

  • Highly optimized parallel database core: a parallel database that is optimized for business intelligence (BI) and analytics and that is linearly scalable.
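
To make the "single click" idea concrete, here is a minimal sketch of what a self-service provisioning request might look like, assuming a hypothetical REST endpoint and JSON payload. Greenplum has not published a public API for this, so every name and field below is illustrative:

    # Hypothetical illustration only: Greenplum has not published this API.
    # Assumes an EDC control service that accepts a JSON provisioning request.
    import json
    import urllib2

    EDC_ENDPOINT = "http://edc.example.com/api/warehouses"  # placeholder URL

    def provision_warehouse(name, size_tb, segments):
        """Ask the (hypothetical) EDC platform to carve out a new data mart."""
        body = json.dumps({
            "name": name,          # logical name for the new warehouse
            "size_tb": size_tb,    # initial storage allocation
            "segments": segments,  # number of parallel database segments
        })
        req = urllib2.Request(EDC_ENDPOINT, body,
                              {"Content-Type": "application/json"})
        return json.loads(urllib2.urlopen(req).read())

    # One call stands in for the "single click" an analyst would make.
    print(provision_warehouse("marketing_mart", size_tb=2, segments=8))
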
Greenplum Database 3.3 is the latest version of the company's flagship database software, which adds a wide range of capabilities to streamline management and enhance performance. Among the enhancements aimed at DBAs and IT professionals:
  • Online system expansion: the ability to add servers to a database system and expand across the new servers while the system is online and responding to queries. Each additional server adds storage capacity, query performance, and loading performance to the system (see the sketch after this list).

  • pgAdmin III administration console: an enhanced version of pgAdmin III, which is the most popular and feature-rich open-source administration and development platform for PostgreSQL.

  • Scalability-optimized management commands: a range of enhancements to management commands, including starting and stopping the database, analyzing tables, and reintegrating failed nodes into the system. These are designed to improve performance and scalability on very large systems.
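
For a rough sense of what "linearly scalable" means in practice, here is a back-of-the-envelope sketch in Python with assumed per-server figures; these are illustrative numbers, not Greenplum benchmarks:

    # Toy model of linear scale-out; the per-server figures are assumptions.
    def estimate_capacity(servers, tb_per_server=4.0, scan_gbps_per_server=0.5):
        """Under ideal linear scaling, storage and scan rate grow with servers."""
        return {"storage_tb": servers * tb_per_server,
                "scan_gbps": servers * scan_gbps_per_server}

    before = estimate_capacity(16)   # existing system
    after = estimate_capacity(24)    # after an online expansion of 8 servers
    print(before)   # {'storage_tb': 64.0, 'scan_gbps': 8.0}
    print(after)    # {'storage_tb': 96.0, 'scan_gbps': 12.0}
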
Database 3.3 is supported on server hardware from a range of vendors, including HP, Dell, Sun and IBM. The software is also supported for such non-production uses as development and evaluation on Mac OS X 10.5, Red Hat Enterprise Linux 5.2 or higher (32-bit), and CentOS Linux 5.2 or higher (32-bit).

As part of the EDC initiative, Greenplum is assembling an ecosystem of customers and partners who embrace this new approach and are collaborating with Greenplum to create new technologies and standards that leverage the capabilities of the EDC platform. Early participants deploying EDC platforms on Greenplum Database include Fox Interactive Media/MySpace, Zions Bancorporation and Future Group.

I think BI vendors will want to join in, allowing Greenplum, among others, to refine and advance the notion of a data warehouse "middleware" layer. This takes a burden off of IT, which can focus on providing virtualized resource pools in which to deploy solutions such as Greenplum's.

As commodity hardware is used to undergird these virtualized on-premises clouds, total costs contract. And, as we've seen with Amazon, Rackspace and others, moving data to third-party clouds offers other potentially compelling cost advantages, even as the scale issues of moving data around and the attendant security concerns are being addressed.

The automated warehouse layer approach benefits the BI vendors, as their tools and analytics engines can leverage the coalesced cloud-based data that Greenplum provides. The more and better the data, the better the BI. Cloud providers, too, may examine Greenplum with an eye to providing data warehouse instances "as a service," a value-added data service opportunity to expand general cloud services.

And, of course, the biggest winners are the business analysts and business managers -- at enterprises as well as SMBs -- who can finally get the insights from massive data pools that they long for, at a price they can realistically consider.

A symbiotic relationship is building between cloud computing and data warehousing solutions such as Greenplum's Enterprise Data Cloud. The more data that is housed in accessible clouds, the greater the need to access, manage and provision additional data for analysis payoffs.

And the more tools there are for leveraging cloud-based data, the more value there will be in moving data to clouds ... and so on. The chicken-and-egg relationship is clearly under way, with solutions providers like Greenplum offering a needed catalyst to the ramp-up process.

Monday, June 8, 2009

In need of a trigger: Report from Rational Software Conference 2009

This guest post comes courtesy of Tony Baer's OnStrategies blog. Tony is a senior analyst at Ovum. His profile is here. You can reach him here.

Rational Software Conference 2009 last week was supposed to be “as real as it gets,” but in the light of day proved a bit anticlimactic. A year after ushering in Jazz, a major new generation of products, Rational has not yet made the compelling business case for it. The hole at the middle of the doughnut remains not the “what” but the “why.” Rational uses the rallying cry of Collaborative ALM to promote Jazz, but that is more like a call for repairing your software process as opposed to improving your core business. Collaborative might be a good term to trot out in front of the CxO, but not without a business case justifying why software development should become more collaborative.

The crux of the problem is that, although Rational has dropped the term Development from the name of its annual confab, it still speaks the language of a development tools company.

With Jazz products barely a year old, if that, you wouldn’t expect much of a Jazz installed base yet. But in isolated conversations (our sample was hardly scientific), we heard most customers telling us that Jazz to them was just another new technology requiring new server applications, which at $25,000 - $35,000 and up are not an insignificant expense; they couldn’t understand the need to add something like Requirements Composer, which makes it easier for business users to describe their requirements, if they already had RequisitePro for requirements management. They hear that future versions of Rational’s legacy products are going to be Jazz-based (their data stores will be migrated to the Jazz repository), but that is about as exciting to them as the prospect of another SAP version upgrade. All pain for little understood gain.

There are clear advantages to the new Jazz products, but Rational has not yet made the business case. Rational Insight, built on Cognos BI technology, provides KPIs that in many cases are over the heads of development managers. Jazz products such as Requirements Composer could theoretically stand on their own for lightweight software development processes if IBM sprinkled in the traceability that still requires RequisitePro. The new Measured Capability Improvement Framework (MCIF) productizes the gap-analysis assessments that Rational has performed over the years for its clients regarding software processes, with the addition of prescriptive measures that could make such assessments actionable.

But who in Rational is going to sell it? There is a small program management consulting group that could make a credible push, but the vast majority of Rational’s sales teams are still geared towards shorter-fuse tactical tools sales. Yet beyond the tendency of sales teams to focus on products like Build Forge (one of its better acquisitions), the company has not developed the national consulting organization it needs to do solution sells. That should have cleared the way for IBM’s Global Business Services to create a focused Jazz practice, but so far GBS’s Jazz activity is mostly ad hoc, engagement-driven. In some cases, Rational has been its own worst enemy as it talks strategic solutions at the top, while having mindlessly culled some of its most experienced process expertise for software development during last winter’s IBM Resource Action.

Besides telling Rational to do selective rehires, we’d suggest a cross-industry effort to raise the consciousness of this profession. It needs a precursor to MCIF, because the market is just not ready for it yet outside of the development shops that have awareness of frameworks like CMMi. This is missionary stuff, as organizations (and potential partners) like the International Institute of Business Analysis (IIBA) are barely established (a precedent might be an organization like Catalyze, which has heavy sponsorship from iRise). A logical partner might be the program management profession, which is tasked with helping CIOs effectively target their limited software development resources.

Other highlights of the conference included Rational’s long-awaited disclosure of its cloud strategy, and plans for leveraging the Telelogic acquisition to drive its push into “Smarter Products.” According to our recent research for Ovum, the cloud is transforming the software development tools business, with dozens of vendors already having made plays to offer various ALM tools as services. Before this, IBM Rational made some baby steps, such as offering hosted versions of its AppScan web security tests. It is opening technology previews of private cloud instances that can be hosted inside the firewall, or run virtually using preconfigured Amazon Machine Images of Rational tooling on Amazon’s EC2 raw cloud. Next year Rational will unveil public cloud offerings.
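
For readers curious about the mechanics of the EC2 route, here is a minimal sketch using the boto library (the standard Python interface to AWS at the time). The AMI ID, keypair, and credentials are placeholders, since Rational’s preview images were only available through its technology preview program:

    # Sketch: launching a preconfigured tooling AMI on Amazon EC2 with boto.
    # The image ID below is a placeholder, not a published Rational AMI.
    import boto

    conn = boto.connect_ec2("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY")
    reservation = conn.run_instances(
        "ami-00000000",            # placeholder preconfigured tooling AMI
        instance_type="m1.large",  # sized for a tooling server
        key_name="my-keypair",     # assumed existing EC2 keypair
    )
    instance = reservation.instances[0]
    print(instance.id, instance.state)  # e.g. an instance ID and "pending"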

Rational’s cloud strategy is part of a broader-based strategy for IBM Software Group, which in the long run could use the cloud as the chance to, in effect, “mash up” various tools across brands to respond to specific customer pain points, such as application quality throughout the entire lifecycle, including production (e.g., Requirements Composer, Quality Manager, some automated testing tools, and Tivoli ITCAM). Ironically, the use case “mashups” that are offered by Rational as cloud-based services might provide the very business use cases that are currently missing from its Jazz rollout.

But IBM Rational still has lots of pieces to put together first, like for starters figuring out how to charge. In our Ovum research we found that core precepts of SaaS including multi-tenancy and subscription pricing may not always apply to ALM.

Finally there’s the “Smarter Products” push, which is Rational’s Telelogic-based answer to IBM’s Smarter Planet campaign. It reflects the fact that the software content in durable goods is increasing to the point where it is no longer just a control module that is bolted on; increasingly, software is defining the product. Rational’s foot in the door is that many engineered-product companies (like in aerospace) are already heavy users of Telelogic DOORS, which is well set up for tracking requirements of very complex systems and, potentially, “systems of systems,” where you have a meta-control layer that governs multiple smart products or processes performed by smart products.

The devil is in the details, as Rational/Telelogic has not yet established the kinds of strategic partnerships with PLM companies like Siemens, PTC or Dassault for joint product integration and go-to-market initiatives for converging application lifecycle management with its counterpart for managing the lifecycle of engineered products (Dassault would be a likely place to start, as IBM has had a longstanding reselling arrangement with it). Roles, responsibilities, and workflows have yet to be developed or templated, consigning the whole initiative to the reality that, for now, every solution is a one-off. The organizations that Rational and the PLM companies are targeting are heavily siloed. Smarter Products as a strategy offers inviting long-term growth possibilities for IBM Rational but, at the same time, requires lots of spadework first.

This guest post comes courtesy of Tony Baer's OnStrategies blog. Tony is a senior analyst at Ovum. His profile is here. You can reach him here.

Wednesday, June 3, 2009

TIBCO takes PaaS plus integration capabilities to global enterprises via their choice of clouds

Combining platform as a service (PaaS) with a built-in ability to integrate and manage hybrid cloud deployments, TIBCO Software today threw its hat in the cloud computing ring by taking its middleware and Java development and deployment platforms to new heights.

Coinciding with the JavaOne conference and coming on the heels of other PaaS announcements this week, TIBCO debuted TIBCO Silver via an online virtual conference. While general availability of Silver is not due until 2010, private beta launches begin this month, as the start of a rolling series of expanding beta launches this year. [Disclosure: TIBCO is a sponsor of BriefingsDirect Analyst Insights Edition podcasts.]

TIBCO's take on PaaS is notable for its emphasis on global 2000 enterprises, its targeting of custom business applications, and its largely automated means of integrating cloud applications and processes with on-premises IT resources and data. Silver is also working to appeal to corporate developers via initial support of Java, Spring, POJO, and Ruby. Later, Silver will add support for Python, Perl, C, C++, and .NET. That covers a lot of development territory.

As for deployment, TIBCO is starting out on Amazon EC2, but will provide portability for applications and services built with Silver to other popular cloud providers' offerings. The goal is to provide a universal middleware and runtime platform layer -- as a service or as a private install -- that can accommodate mainstream corporate developers with the choice of any major cloud and any major tool and framework, said Rourke McNamara, product marketing director at TIBCO.

"ISVs are welcome, but we're focusing on global IT with Silver," said McNamara.

Relying heavily on its ActiveMatrix platform, TIBCO is making Silver "self-aware" for cloud use by largely automating the provisioning and elastic accommodation of the required application support infrastructure. With the emphasis on enterprises, TIBCO is also building in governance, security, and the ability to meter services based on policies and service-level agreements, the company said.

"Because it is self-aware, TIBCO Silver manages the performance of the application and dynamically deploys additional resources to ensure the SLA is met," said TIBCO, in a release. TIBCO Silver manages the performance of the application and dynamically deploys additional resources to ensure the SLA is met, said the Palo Alto, CA-based supplier.

Pricing and the business model around TIBCO Silver have not been finalized.

What's more, Silver leverages TIBCO's heritage in complex event processing (CEP), SCA composition, BPEL-based orchestration, and SOA governance to enhance the automation of application performance and extensibility, even while running in third-party clouds, said McNamara. TIBCO's CEP engine is embedded in Silver to allow policy-based rules to manage how cloud-based applications can be accessed, used, metered, and scaled, based on a variety of use cases and business process variables.
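
To make the policy idea concrete, here is a toy sketch of a CEP-style rule in Python. It is illustrative only -- TIBCO has not published Silver's policy API, and all names here are invented:

    # Illustrative only: a toy event rule, not TIBCO Silver's actual API.
    SLA_LATENCY_MS = 250   # assumed service-level latency target
    SCALE_OUT_STEP = 2     # assumed number of instances to add per breach

    def on_metric_event(event, deploy_fn):
        """React to a stream of latency measurements, CEP-style."""
        if event["latency_ms"] > SLA_LATENCY_MS:
            deploy_fn(SCALE_OUT_STEP)  # dynamically deploy more resources

    def add_instances(n):
        print("scaling out by %d instance(s)" % n)

    # The second sample breaches the SLA and triggers a scale-out.
    for sample in [{"latency_ms": 180}, {"latency_ms": 420}]:
        on_metric_event(sample, add_instances)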

[UPDATE: At today's live online conference, Matt Quinn, vice president of product management and strategy at TIBCO, demonstrated using Silver with Ruby on Rails and automating cloud deployments.

Subject-based addressing in Silver allows for information flows, like mashups, but extends publishing into app dev. Cool.

Silver has three parts: design studio, "intelligent" deployment, and administrator.

Studio takes an app and/or services from model to configuration. It allows for a highly visual approach, or for tools and coding choices based on developer preferences. Composition comes from outside tools, but linking of components can be automated via Silver.

Governance gets baked in via Silver Center Console and Administrator, says Quinn. The process seems pretty seamless through the steps of development, deployment and configuration.

Quinn wraps up by showing how SOA middleware, when applied to the cloud, provides the best of SOA with the economics and RAD that customers want. SOA plus cloud gets the job done.

Werner Vogels, CTO, Amazon.com, appearing at TIBCO's live event (follow on Twitter at #TIBCOSilver).]

HP tackles a variety of current IT challenges and advances in free three-day 'virtual conference'

Given the current economic downturn and tightening budgets, frugal is good and free is better. Hewlett-Packard (HP) is addressing the new IT value era with a series of complimentary international virtual events that will give IT professionals online access to briefings on business and technology trends from HP executives and outside experts.

Next week, June 8-10, HP will kick off the series with the HP Solutions Virtual Event for Europe, the Middle East, and Africa (EMEA). The three-day session will feature 30 breakout sessions, seminars, presentations and demo theater presentations. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Registration for the event is free and, because it's presented entirely online, there are no travel expenses or out-of-office time involved. Also, the full conference will be available on replay until September. Future virtual events will be offered for Asia Pacific and Japan, June 16-18, and the Americas, July 28-30.

Next week's breakouts will include four main IT themes -- Data Center Transformation, Service Management, Information Management, and Applications and SOA Transformation -- as well as two leadership themes -- green IT and cloud computing. The virtual presentation will also include chat sessions with the many prominent speakers.

Among the topics to be covered are such current concerns as:
  • Data center initiatives

  • Rethinking virtualization

  • IT energy efficiency

  • The new era in service management

  • ITIL v3

  • Surviving in a world of hyper disruption

  • Cloud and your business technology ecosystem

  • Platforms for tomorrow's business technology

The full list of sessions (posted in Central European Time) is available on the virtual event Web site. Participants are free to attend one session, one day, or the entire three-day event. These presentations and knowledge resources are not just for HP users; they make sense for the vast HP partner community and the full ecology of related providers.

The speakers include a who's who of HP technology thought leaders, including many who are familiar to BriefingsDirect readers and listeners. These include John Bennett, Bob Meyer, Rebecca Lawson, Russ Daniels, Lance Knowlton, and Paul Evans, all of whom have appeared in BriefingsDirect podcasts.

For those interested, HP is providing an online demo that can be accessed prior to the event. The demo is available at: http://tsgdemo.veplatform.com/uc/registration-short-form.php

Registration for the event itself is at: http://hpsolutionsforneweconomyvirtualevent.veplatform.com/?mcc=EMEA.

Tuesday, June 2, 2009

Mainframes provide fast-track access to private cloud benefits for enterprises, process ecosystems

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: CA.

Read a full transcript of the interview.

Enterprises are seeking cloud computing efficiency benefits, subsequent lower total costs, and a highly valued ability to better deliver flexible services that support agile business processes.

Turns out so-called private clouds, or those cloud computing models that enterprises deploy and/or control on-premises, have a lot in common with longstanding mainframe computing models and techniques. Back to the future, you might say.

New developments in mainframe automation and other technologies increasingly support the use of mainframes for delivering cloud-computing advantages -- and help accelerate the ability to solve recession-era computing challenges around cost, power, energy use and reliability.

More evidence of the alignment between mainframes, mainframe automation and management, and cloud computing comes with today's announcement that CA has purchased key assets of Cassatt Corp., maker of service level automation and service level agreement (SLA) management software.

I recently had the pleasure of learning more about how the mainframe is, in many respects, the cloud, in a sponsored podcast interview with Chris O'Malley, executive vice president and general manager for CA's Mainframe Business Unit.

Here are some excerpts:
Gardner: What makes cloud so appealing and feasible right now?

O'Malley: Cloud as a concept is, in its most basic sense, virtualizing resources within the data center to gain that scale of efficiency and optimization. ... Physically there are many, many servers that support the ongoing operations of a business. CFOs and CEOs are starting to ask simple, but insightful, questions about why we need all these servers and to what degree these servers are being utilized.

When they get answers back and it's something like 15, 10, or 5 percent utilization, it begs for a solution to the problem to start bringing a scale of virtualization to optimize the overall data center to what has been done on the mainframe for years and years.

... It's about both the need from a business standpoint of trying to respond to reduced cost of computing and increased efficiency at a time when the technologies are becoming increasingly available to customers to manage distributed environments or open systems in a way similar to the mainframe.

Larger customers are using their mainframe in a highly virtualized way. They've been doing it for 30 years. It was the genesis of the platform. ... They try to get as much out of it as they possibly can. So, from its beginning, it was virtualized.

The viability of things like salesforce.com, CRM, and the need to coordinate that data with what for most customers is 80 percent of their mission-critical information residing on the mainframe is making people figure out how to fix those problems. It's making this cloud slowly, but pragmatically, come true and become a reality in helping to better support their businesses.

The distributed environment and the open-system environment, in terms of its genesis, was the reverse of what I described in the mainframe. The mainframe, at some point, I think in the early '90s, was considered to be too slow to evolve to meet the needs of business. You heard things like mounting backlog and that innovation wasn't coming to play.

In that frustration, departments wanted their server with their application to serve their needs. It created a significant base of islands, if you will, within the enterprise that led to these scenarios where people are running servers at 15, 10, or 5 percent utilization. That genesis has been the basic fiber of the way people think in most of these organizations.

This 15 or 10 percent utilization is what we consistently see, customer after customer after customer. ... You're seeing the pendulum come back. This is just getting too expensive, too complex, and too hard to keep up with business demands, which sounds a lot like what people's objections were about the mainframe 20 years ago. We're now seeing that maybe a centralized model is a better way to serve our needs.

Gardner: How does that relate to where the modern mainframe is?

O'Malley: The modern mainframe is effectively an on-demand engine. IBM has created now an infrastructure that, as your needs grow, turns on additional engines that are already housed in the box. With the z10, IBM has a platform that is effectively an in-house utility ... With the z10 and the ability to expand capacity on demand, it's very attractive for customers to handle these peaks, but not pay for it all year long.

... The mainframe has always been very good at resilience from a security standpoint. The attributes that make up that which is required for a mission-critical application are basically what make your brand. So, the mainframe has always been the home for those kinds of things. It will continue to be.

We're just making the economics better over time. The attributes that are professed or promised for the cloud on the distributed side are being realized today by many mainframe customers and are doing great work. It's not just a hope or a promise.

Gardner: There is some disconnect, though, cultural and even generational. A lot of the younger folks, brought up with the Web, think of cloud applications as being Web applications.

O'Malley: Despite all these good things that I've said about the mainframe, there are still some nagging issues. The people who tend to work on them tend to be the same ones who worked on them 30 years ago. The technology that wraps it hasn't been updated to the more intuitive interfaces that you're talking about.

CA is taking a lead in re-engineering our toolset to look more like a Mac than it does like a green screen. We have a brand new strategy called Mainframe 2.0, which we introduced at CA World last year. We're showing initial deliverables of that technology here in May.

... Our first technology within Mainframe 2.0, is called the Mainframe Software Manager. It's effectively InstallShield for the mainframe. We developed that with 20-somethings. In our Prague data center, we recruited 120 students out of school and they developed that in Java on a mainframe. ... We have 25-year-old people in Prague that have written lines of code that, within the next 12 months, we'll be running at the top 1,000 companies on the face of the earth. There aren't a lot of jobs in life that present you that kind of opportunity.

... The mainframe technologically can do a lot, if not everything you can do on the distributed side, especially with what z/Linux offers. But, we've got to take what is a trillion dollars of investment that runs in the legacy virtual operating system environment and bring that up to 2009 and beyond.

... An open system has its virtues and has its limits. We're raising the abstraction to the point where, in a collective cloud, you're just going to use what's best and right for the nature of work you're doing without really even knowing whether this is a mainframe application -- either in z/OS, or z/Linux -- or it's Linux on the open system side or HP-UX. That's where things are going. At that point, the cloud becomes true to the promise being touted at the moment.

To be very honest, it's very important that we bring a cool factor to the mainframe to make it a platform that's equally compelling to any other. When you do that, you create some interesting dynamics to getting the next generation excited about it.
Read a full transcript of the interview.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: CA.