Monday, June 29, 2009

Oracle adds zest to SQL Developer with standalone data modeling tool, stirs the SQL market pot

Oracle has paved the way for developers to more easily build data models and use them to create new databases or update existing ones. The Oracle SQL Developer Data Modeler, which integrates with Oracle SQL Developer, arrived today as a standalone tool that supports logical, relational, multi-dimensional and data type modeling.

Oracle, Redwood Shores, Calif., had originally released a free version of the tool as an "early adopter" release. The full version is now available for $3,000 per named user. The new tool features multi-layered design and generation capabilities to produce conceptual entity relationship diagrams (ERDs) and transform them to relational models. Users can build, extend and modify a model as well as compare with existing designs.

The whole ecosystem of SQL databases, associated tools, and modeling is ripe for tumult. My best guess is that Oracle's pending Sun Microsystems purchase will provide offense via MySQL, and the associated community, to target the Microsoft SQL Server franchise.

Oracle can keep tabs on the MySQL evolution while undercutting Microsoft. Good work, if you can get it. Oh, and they can attract more middleware sales as they seduce the developers and deeply snare the operations folks.

On the other big future direction, to the cloud, modeling and managing data become the points of the arrow for attracting more sticky data into your cloud. We're already seeing this in business process modeling as IBM gives away such tools via BlueWorks. The enticement? To bring more process metadata and rules execution to Big Blue's cloud.

My expectation is that Oracle, HP, IBM, Red Hat, Amazon, Google, and Microsoft will begin to offer more "free" cloud-based enticements to enterprise developers and architects that 1) hurt their competition whenever possible, and 2) solidify their respective advantages to create long-term cloud customers. Then repeat, extend, and solidify. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Remember when free and open source software began to disrupt the status quo, and the large enterprise vendors could no longer ignore it? They played the same way. IBM, for example, embraced Linux (to hurt Microsoft and also sell more commodity hardware) and Apache web servers (ditto). But IBM did not open source DB2 or WebSphere.

We'll see the same picking and choosing -- tactical and strategic -- of what is "free" or not, cloud-based or not, rationalized on a similar pattern of combined offense and defense. The good news is that the enterprise architects and developers will have more good choices, lower costs, and the ability to play the behemoths off of one another -- just like with open source.

Perhaps we need to call the cloud thing ... Any Source.

Back to Oracle and its maneuvers in the SQL space ... The capabilities of the new data modeler include:
  • Visual entity relationship modeling, which supports both Barker and Bachman notations so developers can switch between models to suit the audience’s needs or create and save different visual displays

  • Forward engineering of ERDs to relational models, transforming all rules and decisions made at the conceptual level to the relational model, where details are further refined and updated

  • Separate relational and physical models that enable users to develop a single relational model for different database versions or different databases

  • A full spectrum of physical database definitions, supporting physical definitions such as partitions, roles, and tablespaces for specific database versions for multi-database, multi-vendor support
Oracle SQL Developer Data Modeler is generally available today and can be downloaded from the Oracle Technology Network (OTN).
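Conceptually, the forward-engineering step the tool automates turns a logical entity definition into relational DDL. The toy sketch below (Python, emphatically not Oracle's tool; the entity and column names are hypothetical) shows the shape of that transform:

```python
# Toy illustration of forward engineering: render a logical entity
# (attributes plus a primary key) as a CREATE TABLE statement.
from dataclasses import dataclass


@dataclass
class Entity:
    name: str
    attributes: dict  # attribute name -> SQL type
    primary_key: str


def forward_engineer(entity: Entity) -> str:
    """Transform a logical entity into relational DDL."""
    cols = [f"  {col} {sqltype}" for col, sqltype in entity.attributes.items()]
    cols.append(f"  PRIMARY KEY ({entity.primary_key})")
    return f"CREATE TABLE {entity.name} (\n" + ",\n".join(cols) + "\n);"


customer = Entity(
    name="customer",
    attributes={"customer_id": "NUMBER(10)", "name": "VARCHAR2(100)"},
    primary_key="customer_id",
)
print(forward_engineer(customer))
```

A real modeling tool layers much more on top of this, of course: notation choices, rule propagation, and per-database physical definitions.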

Friday, June 26, 2009

IT Financial Management solutions provide visibility into total operations to reduce IT costs

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion.

The global economic downturn has accelerated the need to reduce total IT costs through better identification and elimination of wasteful operations and practices. At the same time, IT departments need to better create and implement streamlined processes for delivering new levels of productivity, along with reduced time to business value.

But you can't fix what you can't measure. And so the true cost -- and benefits -- of complex and often sprawling IT portfolios too often remain a mystery, shrouded by outdated and often manual IT tracking and inventory tasks.

New solutions have emerged, however, to quickly improve the financial performance of IT operations through automated measuring and monitoring of what goes on, and converting the information into standardized financial metrics. This helps IT organizations move toward an IT shared services approach, with more efficient charge-back and payment mechanisms.

Gaining real-time visibility into dynamic IT cost structures provides a powerful tool for reducing cost, while also maintaining and improving overall performance -- and perception of worth. Holistic visibility across an entire IT portfolio also develops the visual analytics that can help better probe for cost improvements and uncover waste -- and then easily share the analysis and decisions rationale with business leaders.

To better understand how improved financial management capabilities can help enterprise IT departments, I recently interviewed two executives from Hewlett-Packard Software and Solutions: Ken Cheney, director of product marketing for IT Financial Management, and John Wills, practice leader for the Business Intelligence Solutions Group.

Here are some excerpts:
Cheney: The landscape has changed in such a way that IT executives are being asked to be much more accountable about how they’re operating their business to drive down the cost of IT significantly. As such, they're having to put in place new processes and tools in order to effectively make those types of decisions. ... We can automate processes. We can drive the data that they need for effective decision-making. Then, there is also the will there in terms of the pressure to better control cost. IT spend these days accounts for about 2 to 12 percent of most organizations’ total revenue, a sizable component.

Wills: If all of your information is scattered around the IT organization and IT functions, it’s difficult to get your arms around. You certainly can’t do a good job managing going forward. A lot of that has to do with being able to look back and to have historical data. Historical data is a prerequisite for knowing how to go forward and to look at a project’s cost and where you can optimize cost or take cost down and where you have risk in the organization. So, visibility is absolutely the key.

IT has spent probably the last 15 years taking tools and technologies out into the lines of business, helping people integrate their data, helping lines of business integrate their data, and answering business questions to help optimize, to capture more customers, reduce churn in certain industries, and to optimize cost. Now, it’s time for them to look inward and do that for themselves.

Cheney: IT operates in a very siloed manner, where the organization does not have a holistic view across all the activities. ... The reporting methods are growing up through these silos and, as such, the data tends to be worked within a manual process and tends to be error-prone. There's a tremendous amount of latency there.

The challenge for IT is how to develop a common set of processes that are driving data in a consistent manner that allows for effective control over the execution of the work going on in IT as well as the decision control, meaning the right kind of information that the executives can take action on.

Wills: When you look at any IT organization, you really see a lot of the cost is around people and around labor. But, then there is a set of physical assets -- servers, routers, all the physical assets that's involved in what IT does for the business. There is a financial component that cuts across both of those two major areas of spend. ... You have a functional part of the organization that manages the physical assets, a functional part that manages the people, manages the projects, and manages the operation. Each one of those has been maturing its capability operationally in terms of capturing their data over time.

Industry standards like the Information Technology Infrastructure Library (ITIL) have been driving IT organizations to mature. They have an opportunity, as they mature, to take advantage and take it to the next level of extracting that information, and then synthesizing it to make it more useful to drive and manage IT on an ongoing basis.

Cheney: IT traditionally has done a very good job communicating with the business in the language of IT. It can tell the business how much a server costs or how much a particular desktop costs. But it has a very difficult time putting the cost of IT in the language of the business -- being able to explain to the business the cost of a particular service that the business unit is consuming. ... In order to effectively assess the value of a particular business initiative, it’s important to know the actual cost of that particular initiative or process that they are supporting. IT needs to step up in order for you to be able to provide that information, so that the business as a whole can make better investment decisions.

Wills: One of the things that business intelligence (BI) can help with at this point is to identify the gaps in the data that’s being captured at an operational level and then tie that to the business decision that you want to make. ... BI comes along and says, "Well, gee, maybe you’re not capturing enough detailed information about business justification on future projects, on future maintenance activity, or on asset acquisition or the depreciation of assets." BI is going to help you collect that and then aggregate that into the answers to the central question that a CIO or senior IT management may ask.

Cheney: By doing so, IT organizations will, in effect, cut through a lot of the silo mentality, the manual error-prone processes, and they'll begin operating much more as a business that will get actionable cost information. They can directly look at how they can contribute better to driving better business outcomes. So, the end goal is to provide that capability to let IT partner better with the business.

... The HP Financial Planning and Analysis offering allows organizations to understand costs from a service-based perspective. We're providing a common extract, transform and load (ETL) capability, so that we can pull information from data sources. We can pull from our project portfolio management (PPM) product, our asset management product, but we also understand the customers are going to have other data sources out there.

They may have other PPM products they’ve deployed. They may have ERP tools that they're using. They may have Excel spreadsheets that they need to pull information from. We'll use the ETL capabilities to pull that information into a common data warehouse where we can then go through this process of allocating cost and doing the analytics.
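The pipeline Cheney describes, pulling cost records from disparate tools into a common warehouse and then allocating them, can be sketched in miniature. This is a hedged illustration in Python, not HP's product; the row shapes, field names, and services are all hypothetical:

```python
# Minimal ETL sketch: normalize cost rows from differently shaped sources
# into one (service, cost) schema, then roll costs up per IT service.
from collections import defaultdict

# "Extract": rows as they arrive from different tools (shapes differ).
ppm_rows = [{"project": "CRM upgrade", "svc": "CRM", "labor_usd": 12000}]
asset_rows = [
    {"asset": "db-server-01", "service": "CRM", "cost": 3000},
    {"asset": "mail-01", "service": "Email", "cost": 1500},
]


def transform(rows, service_key, cost_key):
    """Map a source-specific row shape onto the common schema."""
    return [{"service": r[service_key], "cost": float(r[cost_key])} for r in rows]


# "Load": a common warehouse table, here just a list.
warehouse = transform(ppm_rows, "svc", "labor_usd") + transform(
    asset_rows, "service", "cost"
)

# Allocation/analytics step: total cost per business-facing service.
cost_per_service = defaultdict(float)
for row in warehouse:
    cost_per_service[row["service"]] += row["cost"]

print(dict(cost_per_service))  # {'CRM': 15000.0, 'Email': 1500.0}
```

The point of the common schema is exactly what the interview stresses: once every spreadsheet and tool feeds one warehouse shape, the cost of a business-facing service becomes a simple aggregation rather than a manual hunt.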

Wills: We really want to formalize the way they're bringing cost data in from all of these Excel spreadsheets and Access databases that sit under somebody’s desk. Somebody keeps the monthly numbers in their own spreadsheets in a different department and they are spread around in all of these different systems. We really want to formalize that.

... Part of Financial Planning and Analysis is Cost Explorer, a very traditional BI capability in terms of visualizing data that’s applied to IT cost: you search through the data and look at it from many different dimensions, color coding, looking at variances, and having this information pop out at you.

Cheney: [Looking to the future], in many respects, cloud computing, software as a service (SaaS), and virtualization all present great opportunities to effectively leverage capital. IT organizations really need to look at it through the lens of what the intended business objectives are and how they can best leverage the capital that they have available to invest.

Wills: Virtual computing, cloud computing, and some of these trends that we see really point towards the time being now for IT organizations to get their hands around cost at a detailed level and to have a process in place for capturing those costs. The world, going forward, obviously doesn’t get simpler. It only gets more complex. IT organizations are really looked at for using capital wisely. They're really looked at as the decision makers for where to allocate that capital, and some of it’s going to be outside the four walls.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Wednesday, June 24, 2009

In 'Everything as a Service' era, quality of services and processes grows paramount, says HP's Purohit

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion.

As services pervade how and what IT delivers, quality assurance early and often becomes the gatekeeper of success -- or the point of failure.

IT's job is evolving to make sure all services really work deep inside of business processes -- regardless of their origins and sourcing. Quality of component services therefore assures quality processes, and so forms the foundation of general business conduct and productivity.

Pervasive quality is no longer an option, especially as more uses of cloud-enabled services and so-called "fluid sourcing" approaches become the norm.

A large part of making quality endemic becomes organizational: asserting quality in everything IT does, and enforcing quality in everything IT's internal and external partners do. Success even now means quality in how the IT department itself is run and managed.

To better learn how service-enabled testing and quality-enabling methods of running IT differently become critical mainstays of IT success, last week at HP Software Universe in Las Vegas I interviewed Robin Purohit, vice president of Software Products at HP Software and Solutions.

Here are some excerpts:
Severe restrictions on IT budgets force you to rethink things. ... What are you really good at, and do you have the skills to do it? Where can you best leverage others outside, whether it’s for a particular service you want them to run for you or for help on doing a certain project for you? How do you make sure that you can do your job really well, and still support the needs of the business while you go and use those partners?

We believe flexible outsourcing is going to really take off, just like it did back in 2001, but this time you’ll have a variety of ways. We can procure those services over the wire on a rateable basis from whatever you want to call them -- cloud providers, software-as-a-service (SaaS) providers, whatever. IT's job will be to make sure all that stuff works inside the business process and services they’re responsible for.

If you think of it as a marketplace of services that you're running internally with maybe many outsource providers, making sure every one of those folks is doing their job well and that it comes together some way means that you have to have quality in everything you do, quality in everything your partners do, and quality in the end process. Things like service-enabled testing, rather than just service-oriented architecture (SOA), are going to become a critical mainstream attribute of quality assurance.

... What IT governance or cloud governance is going to be about is to make sure that you have a clear view of what your expectations are on both sides. Then, you have an automatic way of measuring it and tracking against it, so you can course correct or make a decision to either bring it back internally or go to another cloud provider. That’s going to be the great thing about the cloud paradigm -- you’ll have a choice of moving from one outsource provider to another.

The most important things to get right are the organizational dynamics. As you put in governance, you bring in outside parties -- maybe you’re doing things like cloud capabilities -- you're going to get resistance. You’ve got to train your team how to embrace those things in the right way.

What we’re trying to do at HP is step up and bring advisory services to the table across everything that we do to help people think about how they should approach this in their organization, and where they can leverage potentially industry-best practices on the process side, to accelerate the ability for them to get the value out of some of these new initiatives that they are partaking in.

For the last 20 years, IT organizations have been building enterprise resource planning (ERP) systems and business intelligence (BI) systems that help you run the business. Now, wouldn’t it be great if there were a suite of software to run the business of IT?

It’s all about allowing the CEO and their staffs to plan and strategize, construct and deliver, and operate services for the business in a co-ordinated fashion, and link all the decisions to business needs and checkpoints. They make sure that what they do is actually what the business wanted them to do, and, by the way, that they are spending the right money on the right business priorities. We call that the service life cycle.

... There are things that we're doing with Accenture, for example, in helping on the strategy planning side, whether it’s for IT financial management or data-center transformation. We're doing things with VMware to provide the enabling glue for this data center of the future, where things are going to be very dynamically moving around to provide the best quality of service at the best cost.

... But, users want one plan. They don’t want seven plans. If there’s one thing they’re asking us to do more, faster, and better, with all of those ecosystem providers, it is to show them how they can get from their current state to that ideal future state, and do it in a coherent way.

There's no margin for error anymore.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: Hewlett-Packard.

HP adds new consulting services to smooth the enterprise path to cloud adoption

Hewlett-Packard (HP), which already has an array of services to help customers chart their way through the challenges and opportunities of cloud computing, loaded two more arrows into its quiver yesterday with the announcement of HP Cloud Discovery Workshop and HP Cloud Roadmap Service.

The Palo Alto, Calif-based global IT giant is aiming the services at enterprise customers who are looking to efficiently drive business benefits from cloud. The goal, according to HP, is to help customers source, secure, and govern cloud services as an integral part of their IT strategy. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

The HP Cloud Discovery Workshop is designed to help enterprise IT organizations learn about the cloud as a strategic service delivery option and how to leverage it as part of a broader IT service delivery strategy. Available in July, the workshop is designed to:
  • Educate customers on the cloud and multi-sourcing service delivery strategies

  • Outline benefits, risks and implications of the cloud within the business

  • Provide recommendations on people, process and technology for using the cloud as part of an IT and business strategy.
The HP Cloud Roadmap Service, a follow-on to the HP Cloud Discovery Workshop, offers customers a service for planning and adopting cloud as part of their service delivery strategy. The service will:
  • Provide specific recommendations for how customers should use cloud delivery and deployment models as part of their service delivery strategy

  • Recommend the right service strategy, governance and program model

  • Deliver a roadmap for cloud adoption, including a set of recommendations and benefits for what customers can achieve in practical and incremental steps
These join the array of cloud services already offered to HP customers. One of HP's newest offerings prior to today's announcement was Cloud Assure, which consists of HP services and software, including HP Application Security Center, HP Performance Center and HP Business Availability Center, and is delivered to customers via HP Software as a Service.

Cloud and upgraded computing future brightens despite overcast economy, Microsoft-sponsored survey finds

Even in this global recession, one-third of IT organizations plan to increase virtualization, including cloud-computing initiatives, in the next two years, according to a survey conducted by Harris Interactive and sponsored by Microsoft.

Seeing the downturn as an opportunity to upgrade, nearly two-thirds of 1,200 IT professionals surveyed in the U.S., U.K., Germany, and Japan plan to invest in new infrastructure technology, according to results released this week.

While news in the past year focused on budget cuts for IT, 98 percent of those surveyed by Harris are planning either to maintain or increase investment in infrastructure technologies. Top priorities for those taking the bold approach of investing and innovating despite tight budgetary times include:
  • 42% plan increased investment in virtualization.

  • 36% plan increased investment in security.

  • 24% plan increased investment in systems management.

  • 16% plan increased investment in cloud computing.
Not surprisingly, keeping the lights on in the glass house is a bigger priority than innovation.

When the question comes down to investing in innovation versus “keeping the lights on,” the IT pros in the four countries surveyed responded that 37 percent of their budget is going to innovation while 63 percent is going to keep the lights on.

Innovation is less of a priority in the U.S. than it is in the other countries. When the percentages are broken out in the Harris report:
  • US: 29% innovation, 71% “keeping the lights on.”

  • UK: 41% innovation, 59% “keeping the lights on.”

  • Japan: 41% innovation, 59% “keeping the lights on.”

  • Germany: 35% innovation, 65% “keeping the lights on.”
These percentages may reflect the fact that U.S. IT has been hit harder by the recession than IT in the other countries, said Bob Kelly, corporate vice president of infrastructure server marketing at Microsoft. But the overall percentages show a slight trend toward innovation, he said during a teleconference highlighting the survey results.

“The ratio of 63 to 37 percent is actually slightly changed,” Kelly said. “About two years ago when we did similar research we saw that it was 70/30. So really, in this downturn we’re seeing an increased focus on innovation.”

Further, he said, the current research indicates that companies are falling into two categories when it comes to dealing with the recession. One group is retrenching and just holding on to their existing IT infrastructure while waiting for the recovery. The second group actually views IT innovation strategically as a way to pull out of the recession.

“They’re seeing this as their strategic opportunity to really make hay and move their business forward to accelerate out the other side of this economic downturn,” Kelly said.

The survey confirmed Microsoft’s in-house belief that IT budgets still have room for investment in infrastructure innovations, he said. The Redmond folks hope that will include convincing corporate IT departments, which pretty much skipped the Vista era, to finally move from Windows XP to Windows 7.

More survey highlights are available at the Microsoft Core Infrastructure Optimization site.

BriefingsDirect contributor Rich Seeley provided research and editorial assistance on this post. He can be reached at RichSeeley@aol.com.

Tuesday, June 23, 2009

Web data gains some due respect as Kapow eases it into mission critical enterprise uses

In this Web 2.0 world, enterprises increasingly need data from public websites, including news sources such as CNN and even social networking sites such as Facebook, for integration into business intelligence (BI) and service-oriented and web-oriented architecture (SOA/WOA) applications.

Kapow Technologies, which provides tools designed to speed finding, downloading, cleaning, and integrating data and content from the web, is releasing a new version of Kapow Web Data Server (formerly the Kapow Mashup Server) today. The new version includes a handy new “URL Blocking” feature that screens out web junk, such as banner ads, ensuring that only the data needed for the application is being downloaded. [Disclosure: Kapow Technologies is a sponsor of BriefingsDirect podcasts.]
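The "URL Blocking" idea, screening requests so only the data an application actually needs is fetched, can be sketched generically. This is not Kapow's implementation or API, just a minimal illustration; the block patterns and URLs below are hypothetical:

```python
# Minimal sketch of a URL-blocking filter: requests whose URLs match a
# junk pattern (ad servers, trackers, banners) are skipped before fetching.
import re

BLOCK_PATTERNS = [re.compile(p) for p in (r"ads?\.", r"doubleclick", r"banner")]


def allowed(url: str) -> bool:
    """Return False for URLs matching any junk pattern."""
    return not any(p.search(url) for p in BLOCK_PATTERNS)


requests_seen = [
    "http://example.com/article.html",
    "http://ads.example.net/banner.gif",
    "http://data.example.com/feed.json",
]
to_fetch = [u for u in requests_seen if allowed(u)]
print(to_fetch)  # the ad-server request is dropped; content URLs remain
```

A production tool would apply such a filter inside the browser engine as pages load, so blocked resources are never requested at all, which is where the bandwidth and speed savings come from.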

Recently Stefan Andreasen, founder and CTO of Palo Alto, Calif.-based Kapow, demonstrated his company's value around managing data services quickly, without hand coding. At the Web 2.0 Expo in April, he demonstrated an iPhone mashup application created using Kapow tools and IBM Rational EGL as an example of the conference's "Power of Less" theme.

“Traditionally, it would have taken at least three months and significant IT resources to create and integrate a web data source and serve it to a mobile device," Andreasen explained prior to the demo, "but today, through rapid application development technology from Kapow Technologies and IBM, two developers spent a total of three hours creating a dynamic personalized web application for the iPhone."

Kapow boasts that the Web Data Server 7.0 is “the industry’s only platform that can access, enrich and serve web data with complete assurance ― 100 percent of data, 100 percent of the time.”

The value goes beyond convenience. More than ever, web-based content plays an essential role in many business processes and analytical presentations. Doing operational and business-ecology business intelligence (BI) requires fast and easy integration of web-based content and data assets.

With Kapow's patented visual development and Web data automation platform, customers can gain data access to any intranet or extranet business application, as well as any website or application on the web, the company says. This cuts out manual approaches, now quite common.

Rapid data access is vital for today's agile application development, like mobile, WOA and other types of agile business applications, Andreasen says. Regardless of whether or not developers have programmatic access via an application programming interface (API), Kapow provides easy access to enterprise and public web data, then extracts and transforms it into a standard web service or data feed, he explains.

A key element in the data server are the Kapow robots that the company says “use standard web protocols and security mechanisms to automate the navigation and interaction with any web application or website, providing secure and reliable access to the underlying data and business logic.”

Offering an example of an application built with its technologies, the company points to a hypothetical sales app providing “a full 360-degree view of prospects and customers by automatically extracting data from internal customer relationship management (CRM) systems, subscription data feeds such as Edgar Online, corporate sites, blogs and social media sites including LinkedIn, Technorati and Facebook.”

New features in the Kapow Web Data Server 7.0 version include:
  • "100 Percent Browser Engine Compliance," which handles complex web data sources, including JavaScript- and AJAX-intensive websites.

  • Intuitive point-and-click integrated development environment (IDE) for “surgical data extraction accuracy with no coding.”

  • Scalability improvements offering real-time performance optimization and the ability to write large file downloads directly to disk for enterprise-scale projects

  • Browser-Based Scheduler, which provides automation of data refresh and synchronization schedules.

  • Authentication for RoboServer, which provides “seamless integration with existing enterprise security and authentication systems.”
Availability and Pricing

Further information and pricing is available at http://kapowtech.com/index.php/products/overview.

BriefingsDirect contributor Rich Seeley provided research and editorial assistance on this post. He can be reached at RichSeeley@aol.com.

SaaS delivery of IT lifecycle and quality management functions evolves toward an IT service-delivery solutions approach

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion.

When people think of Software as a Service (SaaS) and web services delivery, they often envision business applications like salesforce automation, email, and human resources management.

But Hewlett-Packard has been delivering quality assurance and applications performance management functions via SaaS for years. Its Business Technology Optimization (BTO) services, part of its Mercury acquisition, made the leap to SaaS delivery long before web-based business applications became popular. You could say SaaS for developers and testers -- code warriors -- paved the way for SaaS for salesmen -- road warriors.

Now, as interest in cloud computing ramps up, the ability to deliver more aspects of IT lifecycle and quality management, along with broader project and portfolio oversight values, is also ramping up. Yet a missing ingredient for IT innovators has been how to begin and how to organize these sourcing changes effectively.

Such a SaaS whole greater than the sum of its web services parts will better help IT managers do more with less, and provide better applications faster, as well.

To better understand the expanding role of SaaS within IT, and how professional services can newly help in the transition to holistic SaaS use by IT departments, I interviewed two executives last week at HP's Software Universe conference in Las Vegas: Scott Kupor, vice president and general manager of Software-as-a-Service, and Anand Eswaran, vice president of professional services, both in HP's Software and Solutions group.

Here are some excerpts:
Kupor: At HP for the last nine years, we've been selling IT management applications as a service delivery option. If you think about things like testing, performance management, or project and portfolio management (PPM), for example, those are traditional IT applications that we’ve been selling with this similar delivery model.

What we’ve been hearing from customers today at the conference are two key things. Number one, the cost benefits that initially drove them to SaaS are ever present and incredibly more important in this financial environment. The benefits are really coming to fruition. The second is that we’re starting to see a migration of SaaS from what was traditionally testing services toward other more complex and more customizable IT management applications.

We’re hearing a lot of interest from customers around IT service management (ITSM), service desk applications, and service management applications. These are things that have traditionally been the domain of inside-the-firewall deployments. Customers are now getting so comfortable with the SaaS model that they’re looking at those applications as well for deployment in a SaaS environment.

Eswaran: We’ve made a very conscious shift away from what was inherently deployment of products. The approach now has been transformed into what business outcomes we can achieve for the customer, which is something we would have been unable to do some time back.

We have changed focus now from deploying a single product set to achieving outcomes like reduction of outages by 40 percent, increasing quality, getting service-level agreements to a certain point, and guaranteeing that level of service. That’s been hugely helpful.

All of what we do at the back end, whether it’s how we leverage SaaS, what products we use, what software we use, what consulting and professional services we use, all of that is going to be transparent to the customer. What they care about is a service, which we will deliver to the customer. SaaS enables us to get to that service, get to that time-to-market, much faster.

This all gets us to the point of what customers refer to as "killing the game," getting to a point of being able to offer outcome-based pricing and guaranteeing that outcome, as opposed to the traditional consulting model of billing rates and hours.

Kupor: Remember, all these are complex IT management applications; they have third-party integrations.


They have custom code that customers are building on top. Those are all domains of expertise for the services organization. Through the work that [Anand's team] and we are doing together, we can deliver a cost-effective delivery option for customers without having to sacrifice the complexity, integration, and customization opportunities that they demand for these applications.

We’ve heard this a lot from our customers today, that they’re actually interested in looking at how, as an IT department, they can deploy their own applications in a third-party cloud environment. You hear a lot of people talking about infrastructure on demand or computing power on demand.

People are looking toward these third-party products as a way to take an application they’ve built in-house and deploy it externally in, perhaps, an Amazon environment or a Microsoft environment. The interesting opportunity for us, as a management vendor, is that customers will still need the same level of performance, availability, security, and data integrity for applications that live in a cloud environment as they have come to expect for applications that live inside their corporate firewall.

We’ve been talking to customers a lot about something called Cloud Assure, which is the first service offering that HP has brought to market to help customers solve those management problems for applications they choose to deploy in a cloud-based environment.
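To make that management requirement concrete, here is a minimal Python sketch of the kind of availability-and-latency evaluation a service like Cloud Assure performs against a cloud-hosted application. The function names and the two-second latency target are illustrative assumptions, not HP's actual implementation:

```python
# Minimal sketch of SLA monitoring for an application deployed in a public cloud.
# The 2.0-second latency target is an illustrative assumption, not an HP figure.

def evaluate_sample(status_code, latency_secs, max_latency_secs=2.0):
    """Classify one monitoring probe against availability and latency targets."""
    up = status_code == 200
    return {"up": up, "meets_sla": up and latency_secs <= max_latency_secs}

def availability(samples):
    """Percentage of probes in which the application was reachable."""
    if not samples:
        return 0.0
    return 100.0 * sum(1 for s in samples if s["up"]) / len(samples)
```

A real monitoring service would issue these probes over HTTP from multiple locations and alert on trends; the point is simply that the same governance metrics apply whether the application lives behind the firewall or in Amazon's or Microsoft's cloud.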

That’s really what IT’s job is -- to help deploy business applications and govern the integrity, security, the authenticity, and the performance of those applications.

Eswaran: Everything is eventually going to get transformed into a service for the customer, so that they can actually focus on the core business they are in. When you have things transformed into a service, everything we do to offer that service should be transparent to the customer.

It becomes a services-led engagement, but that’s where we clearly differentiate "services" from "service," the singular, which is the eventual outcome the customer needs to create for themselves. That’s why we really partner well between SaaS and Professional Services. We believe that we are on a path of convergence to eventually get to offering business value and a service to a customer.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: Hewlett-Packard.

Platform applies HPC lessons to 'private' cloud creation, operations, efficiency

More enterprises are looking to the cloud compute model -- both public and private -- to efficiently support myriad applications and data workloads. Platform Computing, a pioneer in high-performance computing (HPC), is now jumping into the fray with a private cloud management platform: Platform ISF.

Platform ISF, which becomes the centerpiece of the company's cloud computing strategy, creates a shared IT infrastructure from physical and virtual resource pools, to deliver application hosting environments, according to automated workload and resource scheduling policies. The Markham, Ont. company said its new offering will be released in beta this week, with general availability planned for the fall.

Platform ISF combines Platform’s resource-sharing technology, EGO, with its virtual machine orchestrator (VMO) to deliver an infrastructure-sharing platform. Platform has also built in additional capabilities for self-service, reporting and billing -- helping to make clouds a bill-as-you-go affair (a fringe benefit of IT shared services). This is also expected to drastically reduce the costs of IT, as resource utilization levels increase thanks to resource sharing.

Platform ISF is a technology-agnostic cloud computing management platform that supports any collection of hardware, operating systems and virtual machines, said Songnian Zhou, CEO, chairman and co-founder. This allows organizations to leverage existing resources and corporate standards, as they build and deploy private clouds.

Platform's private cloud software, an elevation of its grid capabilities, allows implementers to access IT infrastructure via portals using visual interfaces, or programmatically via Java, web services, .NET and other popular frameworks, Zhou told me last week in a briefing. Platform ISF offers a "meta template" of workload support environments, allowing for flexible requests for resources, all of which can be charged back in granular fashion to the actual consumers of the IT resource services.
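To illustrate the granular chargeback idea, here is a small Python sketch of metered, bill-as-you-go accounting. The rate card, metric names, and record format are invented for illustration and bear no relation to Platform's actual pricing or APIs:

```python
from collections import defaultdict

# Illustrative rate card -- invented numbers, not Platform's pricing model.
RATES = {"cpu_core_hours": 0.10, "gb_ram_hours": 0.02}

def chargeback(usage_records):
    """Aggregate metered usage into a per-consumer bill-as-you-go statement.

    Each record is (consumer, metric, quantity),
    e.g. ("marketing-bi", "cpu_core_hours", 120).
    """
    bills = defaultdict(float)
    for consumer, metric, quantity in usage_records:
        bills[consumer] += RATES[metric] * quantity
    return dict(bills)
```

The design point is simply that metering happens at the resource-pool layer, so costs land on the actual consuming group rather than being buried in a shared IT budget.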

While third-party "public" clouds can offer raw infrastructure and computer resources on a pay-per-use basis, most enterprises will probably use a combination, or hybrid, of both internal and public cloud resources. Platform ISF acts as the management layer for pulling such disparate resources into a unified environment and is independent of location or ownership of resources.

And Platform ISF is governance-agnostic, allowing third-party governance tools to additionally manage how such cloud services are used, provisioned and automated -- to an IT department's requirements.

While Platform has been around for a long time, it has hardly become a household name. This may not be a bad thing, according to Derrick Harris at GigaOm:
Of course, Platform is no IT behemoth, which also could work in its favor. While they might consist of useful pieces, cloud offerings from companies like IBM, Microsoft and HP can be difficult to grasp. They can involve an array of systems management tools, servers and other products that leave customers dizzy — and potentially locked in.
Jon Brodkin, writing at The Industry Standard, quotes Forrester analyst James Staten, who enumerates other players in the cloud management field -- 3tera, Elastra, Enomaly, Zimory, and the open source Eucalyptus -- but says that all of them, unlike Platform, lack at least one of the elements necessary to build a cloud.

I was impressed with Platform's heritage of providing HPC grid services for 15 years as a precursor to cloud street cred. Platform's approach can be used by enterprise IT departments to move to cloud benefits, on their terms, rather than the fantasy notion of cloud being best approached without IT.

As Zhou says, "Cloud is built, not bought." I couldn't agree more.

Expect Platform ISF to be used on business intelligence workloads early on, with J2EE and PaaS workloads to follow close behind. Oh, and we ought to expect more HPC loads and requirements to be met via public-private tag-team clouds too.

Compuware spruces up IT portfolio management with Changepoint refresh

By David A. Kelly and Heather Ashton

This guest post comes courtesy of David A. Kelly, principal analyst and Heather Ashton, senior analyst, at Upside Research. You can reach them here.


It’s hard to improve if you don’t have a way to measure how you’re doing. That’s one of the reasons why IT portfolio management solutions have started to generate a lot of interest over the past few years. IT portfolio management solutions help organizations manage IT costs, make better IT funding decisions and align business and IT objectives.

At the Project Portfolio Management Summit in California on June 15, Compuware unveiled a juiced-up version of its IT portfolio management solution, Changepoint, identifying agile development and delivery as key components of increasing value to customers over the next 12 likely-recessionary months. [Disclosure: Compuware is a sponsor of BriefingsDirect podcasts.]

As a business-centric IT management solution, Changepoint (get a free Upside Research report) is designed to help IT and business managers gain better visibility into the enterprise IT environment. As most enterprise IT departments have experienced, the investment lifecycle decision-making process for IT has historically been a pain point. In most instances, IT departments have been plagued with either over-allocating or under-allocating their funds. Changepoint is designed to provide executive-level visibility into IT spending, building trust between IT and management. The reality of shrinking IT budgets makes this visibility a necessity as organizations seek to optimize operational demands for IT resources.

In the face of the “new economy,” IT portfolio management solutions are becoming a necessary tool for IT to meet today’s economic challenges. Compuware is hoping that the new features it has added to Changepoint will increase usability and end-user adoption. Among the deliverables that Compuware announced in its year-long-roadmap for Changepoint are added managed services to assist with optimizing Changepoint ROI; bundling in Vantage (Compuware’s IT service management solution) to monitor usage and ensure adoption of Changepoint; and leveraging industry-standard middleware to facilitate integration to financial, HR and help desk applications.

The first deliverable is the Agile Accelerator, designed to deliver best practices for managing agile software development projects. Compuware is tapping into the movement by IT departments to use agile development and delivery to improve responsiveness to the business.

Not to rock the boat too much, and to reassure those IT development groups that prefer to stick to more traditional waterfall-type projects, Changepoint will continue to support existing methodologies while also encouraging new approaches such as agile delivery to speed time-to-market, a critical component of achieving ROI on IT projects.

The end result is the ability for an IT department running some agile projects to manage those projects within the broader scope of the overall project portfolio.

While recent economic conditions make it difficult for some IT organizations to invest in new technologies at this point, it’s always worth it to step back and evaluate the decision-making process around IT investments and the potential value that IT portfolio management solutions might bring.

Organizations with existing and effective application and IT metrics or a limited number of projects or applications may not find enough value to warrant IT portfolio management solutions. But any organization managing numerous projects, dynamic business environments, limited investment resources or the need for more effective and efficient decision-making processes may find significant value in portfolio management.

This guest post comes courtesy of David A. Kelly and Heather Ashton at Upside Research. You can reach them here.

Monday, June 22, 2009

Eclipse plug-in puts TOGAF 9 into IDE collaboration mode for architects

The Open Group, a technology-neutral consortium, today released an Eclipse plug-in that puts TOGAF 9 capabilities literally at your fingertips. The TOGAF Customizer was donated to The Open Group by Capgemini.

Based on the Eclipse Process Framework (EPF), an open-source project managed by the Eclipse Foundation, the TOGAF Customizer can be used to implement TOGAF 9 more easily. TOGAF is an industry-consensus framework and method for enterprise architecture (EA) developed by The Open Group, and released in February. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

The new customizer contains all the content of TOGAF 9 in a structured and editable form, including guidelines, concepts, and checklists, as well as detailed work breakdown structures for the framework’s new and improved architecture development method (ADM).

In a nutshell, moving TOGAF into an industry-standard IDE brings a Web 2.0 flavor to the document, making it akin to a wiki. What's more, collaborating via an IDE's built-in communications and sharing attributes -- as well as version management -- can make TOGAF more into a "living" document, and eases innovation and ongoing improvement.

With the new tool, users can align their EA practices with TOGAF 9 and create organization-specific versions of the standard that represent the concerns of their unique business and technology environments. All goes into and out of a common repository. In addition, the new tool makes it much easier for enterprise architects to integrate TOGAF with other common EA frameworks, such as Zachman, FEAF and DoDAF.

Key features and benefits of the TOGAF Customizer include:
  • Specific constructs for tasks and steps enable processes to be formally defined with related content, such as inputs, outputs, roles and responsibilities

  • Supporting editor allows users to make changes to the standard TOGAF framework content and tailor it to their specific organizational context

  • Underlying content management system supports group collaboration, editing and versioning

  • Plug-in architecture allows new content packages, including document templates, to be created and linked to TOGAF
The new plug-in is available for download from: www.opengroup.org/togaf/.
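To give a flavor of what "structured and editable" method content looks like, here is a small Python sketch of a task construct with inputs, outputs, roles, and steps, and of how an organization might tailor it to its own context. The field names and the sample task are illustrative only; the real plug-in uses the Eclipse Process Framework's own schema:

```python
from dataclasses import dataclass, field, replace

@dataclass
class Task:
    """One formally defined process task, in the spirit of the customizer's
    task/step constructs. Field names are illustrative, not EPF's schema."""
    name: str
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    roles: list = field(default_factory=list)
    steps: list = field(default_factory=list)

# A baseline task loosely modeled on TOGAF's ADM content (sample data only).
baseline = Task(
    name="Develop Baseline Business Architecture",
    inputs=["Architecture Vision"],
    outputs=["Baseline Business Architecture"],
    roles=["Enterprise Architect"],
    steps=["Select reference models", "Develop baseline description"],
)

# Tailoring: an organization copies the standard task and adds its own step,
# leaving the baseline content untouched.
tailored = replace(baseline, steps=baseline.steps + ["Map to internal capability model"])
```

The tailoring pattern is the point: the standard content stays intact in the repository, while organization-specific versions extend it, which is what makes the framework a "living" document.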

Many architects are familiar with the development lifecycle, and many developers have designs on becoming architects, so the melding of two essential IT functions on a common palette, so to speak, makes a great deal of sense.

I can hardly wait for what we've seen so far with Google Wave to come into prime time. Combining what Google Wave, the Eclipse IDE and TOGAF 9 do will make for a powerfully productive future.

And, of course, we should never underestimate the power of the community effect. I expect we'll see quite a bit of novel innovation from how users leverage and expand on the framework-in-an-IDE value, which is only a beginning.

HP's Andy Isherwood on running IT like a business, with an eye to transforming IT's role

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion.

In many companies, IT departments remain in an isolated functional silo, often not reporting to the CEO, and often unfortunately disconnected from the main business imperatives.

Now, the combination of the down economy, tight IT budgets, and the advent of more cloud sourcing and data center architecture options offers two paths to IT leaders: Remain on the alienated edge, or move to center-stage in how businesses adapt to their changing markets.

HP at its Software Universe conference last week offered a path that helps unify people, process and product into a roadmap for how to transform IT, and therefore to better help transform the business -- while keeping costs down.

To more deeply understand the transformative challenges facing IT and business leaders alike, I interviewed Andy Isherwood, vice president and general manager of HP Software and Solutions.

Here are some excerpts:
All the conversations I've had with CIOs are that the capital expenditure is typically being reduced by anything between 0 and 40 percent, and operating expenditures being decreased by up to 10 percent. It's less, but still pretty significant.

So you’ve ended up with a significantly smaller budget to do stuff, which can cause big problems for organizations. They have a certain amount of infrastructure in day-to-day activities to maintain. This means that they have to spend all their budget on existing projects and keeping the lights on, rather than any innovation. If you can’t innovate, then you can’t deliver value back to the business and you become just an IT function delivering the core value.

So, how do we innovate and how do we use the budget more effectively than we do today to allow us not just to keep the lights on, but to do this huge amount of innovation?

If we don’t do it now, we won’t be able to do it in the future, because, as demand picks up, it’s just going to be "all hands to the pump" to be able to deliver just the demand that picks up, as we come out of the recession.

The financial situation at the moment is driving a more intense look at those sourcing options and what it does from a financial point of view for that particular organization. ... SaaS is a great offering. We’ve been in that business for nine years and we have 700 customers. So, we know that business well. We know that in times, in which capital expenditure is being restrained, they can move to a more operating expense-oriented budget, but still be able to innovate, which is a pretty compelling proposition. As we move through, and capital expenditure is freed up, that might change, but at least people have the option.

Whether it’s insourced, outsourced, a partner activity, whether it's on premise or off premise, all of these options give people choices. From an HP standpoint, we have the ability to give people the choice. Our recent acquisition of EDS clearly adds the last pillar of choice, given that we have now an outsourcing business, which is significant.

People have a lot of choice, but they quite often find it difficult to make a decision on the best choice. Other people feel that the choice gives them a lot more scope to do things differently, to manage budgets in a different way, and do things more effectively.

The management of all of these sourcing options is a key consideration. Take the example of an organization putting things onto a public cloud.

What I'm hearing from customers is that they want advice on what should they insource, what should they outsource, what should they put in the cloud, and what should they have as a SaaS offering.


They’re still going to have the same requirements from a governance and management standpoint, but it might be a lot harder than having it in-house.

Management requirements on governance around what data is out there, what performance is like, and what scalability is like, are all considerations and discussions that we help with. It can make the whole world a lot more complex for CIOs. Therefore, the management capability that we have around all of those options becomes even more important.

We’re finding that people want advice around the choices. ... What I'm hearing from customers is that they want advice on what should they insource, what should they outsource, what should they put in the cloud, and what should they have as a SaaS offering.

That’s a really important job and an important role for someone like an HP, which actually doesn’t have a bias, because we've got all the options. If we were only a cloud computing or any outsourcing company, we’d be giving customers one option. Our role as a consultant to not only evaluate what is best for those organizations, but what is good for them financially, is a very important part of the role HP can play and should play.

[The solution] becomes more of a management of the service than management of the infrastructure that develops or delivers the service. So, our role is about governance, management, and control of the services that are delivered to an organization, rather than the product, power, or the storage that’s delivered to a company.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: Hewlett-Packard.

Friday, June 19, 2009

EDS's David Gee on the spectrum of cloud and outsourcing options unfolding before IT architects

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: HP.

Read a full transcript of the discussion.

HP's purchase last year of EDS came just as talk of cloud computing options ramped up. So how does long-time outsourcing pioneer EDS fit into a new cloud ecology?

Is EDS, in fact, a cloud provider? And how will IT departments properly factor their decisions on what to keep on-premises in data centers versus placing assets and workloads on someone else's cloud infrastructure?

We pose these and other "fluid sourcing" future questions to David Gee, Vice President of Marketing at EDS, in an interview by me, BriefingsDirect's Dana Gardner. It comes as part of a special BriefingsDirect podcast series from the Hewlett-Packard Software Universe 2009 Conference in Las Vegas this week.

Here are some excerpts:
One of the fastest ways to ... free up more of your IT spend and spend less on maintenance to drive a transformation or innovation ... is to flip the knob between capital expenditure and operating expenditure and to look at a third party or an outsourcer for some help and guidance.
[For IT spending] 'flat' is the new 'up,' in terms of what the opportunities are. We're also seeing a recognition that six months is the new 12. How do you get to a faster return on investment (ROI)? Don’t show up with a project that has a 12-, 24-, or 36-month timeframe.
One of the things we hear people at Software Universe talking about is performance and quality testing, and do you need all the resources in-house to be able to do that? Or, if you have peak load, why don’t you use a third party to help you do performance, quality, and security testing and, from a software standpoint, maybe even do that in the cloud. You can either use a third party or have it delivered as a service to you inside of your infrastructure.

In my mind we’re a cloud provider. EDS created the outsourcing industry over 40 years ago. Think about everything that we do today in delivering services to our client base. If you then extend that, those services are effectively cloud-based services, depending on what your definition is. In my mind, we’re absolutely a cloud company.

We’re at the forefront of delivering that in multiple countries, across multiple industries and in some cases, highly mission-critical services for airlines and financial institutions. Do they have a consumer orientation to them? Probably not. In fact, you may not even realize that we're doing that behind the scenes for some of the most well-known brands on the planet.

Cloud means a lot of things to different people. Right now, the objective, particularly for large enterprises, is to experiment to understand what the implications are.

Architecturally, it’s very different, particularly as enterprises want to offer services to their end customers. Equally, how does an enterprise deal with or adopt private cloud infrastructure to be able to offer Web services in an architecturally sound, distributed, and scalable way?

First, we can help in a number of different ways from a consulting standpoint, in terms of how to architect around those things. Second, we can build them for our clients and we do that already today in terms of private cloud infrastructure. And, third is to provide maybe just core infrastructure to third parties, and they then build their clouds to offer to the marketplace overall.

My experience thus far has been that clients are looking for leadership, some direction, and flexibility. Certain things I absolutely want to control and retain within my own firewall. Certain things I'm going to want EDS to help me manage, host, drive down operational cost, and provide some level of innovation -- and to deliver those services as effectively private cloud services to my client base and ultimately to their customers as well.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: HP.

Winning the quality war: HP customers offer case studies on managing application performance

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: HP.

Read a full transcript of the discussion.

Quality early in application development sounds nice, but actually making it happen brings significant cost savings, repeatable quality assurance processes, higher user satisfaction, and shorter development cycles. The results reward developers, end users, and IT operators alike.

To better understand the journey to quality assurance for new applications -- and the processes that work best -- BriefingsDirect interviewed IT executives at FICO, Gevity and JetBlue in a podcast discussion moderated by me, Dana Gardner. It comes as part of a special BriefingsDirect podcast series from the Hewlett-Packard Software Universe 2009 Conference in Las Vegas this week.

Listen as we hear from Matt Dixon, senior manager of tools and processes at FICO; Vito Melfi, vice president of IT operations at Gevity, a part of TriNet, and HP Award of Excellence winner Sagi Varghese, manager of quality assurance at JetBlue.

Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: HP.

HP Software marketing head Anton Knolmar delves into creating new IT economies of performance

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: HP.

Read a full transcript of the discussion.

IT departments nowadays have to do more with less, gaining additional productivity while spending less money. It sounds simple, but making it happen is very complex.

How do IT departments and companies approach this problem? How will cloud computing and "fluid sourcing" options help or hinder the process? And how can IT budgets slide while expectations rise that new architectural approaches can be adopted with low risk?

To probe deeper into how the harsh new IT economies of performance can be managed, BriefingsDirect sat down with Anton Knolmar, Vice President of Marketing for HP Software & Solutions, for a discussion moderated by me, Dana Gardner. It comes as part of a special BriefingsDirect podcast series from the Hewlett-Packard Software Universe 2009 Conference in Las Vegas this week.

Here are some excerpts:
We've just come out of an executive track. We had about 70 people gathered for the discussion. What is at the top of their minds is all about linking IT with the business. This is a story that we've been telling now for more than 10 or 15 years, and the storyline is not over.
They’re still trying to bridge the gap and talk business language, instead of IT language. On the other hand, they're trying as well to look at the emerging trends. What the heck does this cloud mean for them? How can you do cloud computing here? Does this bring added value to them? What’s the business outcome they can drive out of those activities?

What companies are facing at the moment is that a lot of these activities that were going on in the past -- utility computing, Adaptive Enterprise, eServices -- failed because they couldn’t be managed, but it was out there on the Web, on the Internet.

Our offerings around the cloud at the moment are governance tools along with the cloud. You can really manage the cloud. You can really secure the cloud. And, you can get the right performance out of the cloud. That’s our offering at the moment to our customers. They can take the first step, get this one right, and move into the cloud environment.

Mitigation of risk will never go away. At the moment, everyone is talking about reduction of costs, but there is always a risk factor attached to it. Hopefully, the outcome will be that a lot of companies can talk about their revenue growth again, moving from 2009 into 2010.

We are ready to drive those three angles. How can we help customers drive revenue growth? How can we help them mitigate the risk? And, on the other side, how can we help them get their costs under control? These are the three angles that will be on the table for quite some time.

The developer community, as you said, has different concerns in terms of developing the applications and developing things for the cloud as well. Our approach at this time is that we enable them to have the appropriate developing and testing tools in terms of quality, performance, and security. These are essentially for those people who have to develop applications well for the cloud. Those are blocked in immediately, are ready to go out there, and can be managed across the lifecycle.

Getting the right information at the right place and making the appropriate decisions are still on top of the agenda for lot of our customers at the moment. It’s been the number one issue for quite some time, and I think it will be the number one issue for quite some time.

We have an offering in these four lines of business in HP Software and Solutions. One is gathered around the Business Intelligence piece. What we are investigating at the moment is really how we can bring those offerings as more of a direct offering to our customers in terms of purchasing and licensing. How can we bring those offerings into a kind of cloud offering?

But, that still needs some further negotiations inside the company, as well, about development products. But that’s definitely an interesting angle.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: HP.

Who's Architecting the Cloud?

By Ron Schmelzer

This guest post comes courtesy of ZapThink. Ron Schmelzer is a senior analyst at ZapThink. You can reach him here.

As the hype cycle for cloud computing continues to gather steam, an increasing number of end users are starting to see the silver lining, while others are simply lost in the fog. It is clear that the debate over the definition, business model, and benefits of cloud will continue for some time, but it is also clear that the sluggish economic environment is increasing the appeal of having someone else pay for the robust infrastructure needed to run one’s applications. Yet, all this talk of leveraging cloud capabilities, or perhaps even building one’s own cloud, whether for public or private consumption, introduces thorny problems. How can we make sure that the cloud will bring us closer to the heavenly vision of IT we search for rather than a fog that hides a complex mess? Who will make sure that the cloud vision isn’t just another reinterpretation of the Software-as-a-Service (SaaS), Application Service Provider (ASP), grid and utility computing model that provided some technical answers but didn’t simplify anything for the internal organization? Who is architecting this mess?

Architecture and the Utility Services Cloud

Most of the time, when people point to practical, in-production examples of cloud computing efforts, they are talking about the sorts of utility services offered by Amazon.com, Google, Salesforce.com, and others. The Services offered in these clouds are not built with any particular application in mind, but rather with whole categories of applications in mind. For obvious reasons, these cloud providers seek to leverage economies of scale by serving the largest possible audience using a handful of highly reusable Services, where reuse is defined by usage in multiple contexts. For these cloud providers, the utility Services simultaneously provide a source of revenue and a platform their customers use to replace proprietary, in-house infrastructure and middleware.

Given that the emphasis of these Services is to meet the needs of a large and continuously growing audience with diverse requirements, the utility cloud provider’s primary focus is on infrastructural concerns. As a result, it’s the infrastructure technologists who are in charge of this cloud. When the “architecture team” meets at these cloud providers, what problems are they aiming to solve? Business problems? Certainly not. In most cases, the architecture teams for these providers (and we’ve been privy to a number of their conversations) focus almost exclusively on technology and infrastructural concerns. Key conversations revolve around performance optimization, implementation change management, optimizing the balance between efficiency and cost, meeting reliability and uptime targets, and addressing privacy, security, and governance issues.

Where’s the business in all this? The answer: nowhere. Where should the business be in all this? That’s a tough question to answer, because without Service consumers the cloud wouldn’t exist at all. However, it is not the goal of the cloud provider to meet any specific business requirements. Rather, the requirements are aggregated to create a business “persona” that is the focus of continual Service releases. In this manner, one could argue that there are no enterprise architects providing any value in this environment. The most pervasive form of architecture done in these environments is more akin to Information Technology Infrastructure Library (ITIL) practices than to any form of enterprise architecture (EA). Utility clouds are the domain of infrastructure experts, not business-IT gap bridgers or process modelers, and one could argue that this status quo will probably never change.

Architecture and the Application (Process) Cloud

However, the utility Service vision of the cloud is not the only one. Indeed, we’re starting to see the emergence of application and process clouds that provide the same infrastructural and economic benefits as utility clouds, but applied to process-specific concerns. These cloud providers enable the outsourcing of entire processes that run in a virtualized cloud environment as a way of handling variability in scale. For example, an insurance company can use a cloud provider's claims processing Service when its internal capacity is not sufficient to meet demand. As long as the process is Service-oriented, this approach works well and leverages the strength of the cloud's abstract infrastructure capability while staying focused on the process. This way, an organization can have its internal processes augmented by third-party cloud processes. For example, insurance clouds provide elastic capabilities for insurance applications as demand ebbs and flows. Likewise, banking, supply chain, retail, and other process-specific clouds provide cloud computing benefits to specific groups of business users.
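The overflow pattern described above can be sketched as a simple dispatcher: prefer in-house capacity, and spill excess work to the cloud Service when saturated. This is a minimal illustration only; the class names and the capacity threshold are invented for the sketch, not drawn from any real provider's API.

```python
# Minimal sketch of the overflow ("cloudburst") pattern: route work to
# internal capacity first, and spill the excess to a cloud Service.
# All names (InternalProcessor, CloudProcessor, capacity=2) are illustrative.

class InternalProcessor:
    def __init__(self, capacity):
        self.capacity = capacity
        self.in_flight = 0

    def has_capacity(self):
        return self.in_flight < self.capacity

    def process(self, claim):
        self.in_flight += 1
        return f"internal:{claim}"

class CloudProcessor:
    """Stands in for a third-party claims-processing Service."""
    def process(self, claim):
        return f"cloud:{claim}"

def dispatch(claim, internal, cloud):
    # Prefer in-house processing; burst to the cloud when saturated.
    if internal.has_capacity():
        return internal.process(claim)
    return cloud.process(claim)

internal = InternalProcessor(capacity=2)
cloud = CloudProcessor()
results = [dispatch(c, internal, cloud) for c in ["c1", "c2", "c3"]]
print(results)  # ['internal:c1', 'internal:c2', 'cloud:c3']
```

The point of the sketch is the author's caveat: this only works cleanly when the process is already Service-oriented, so that the dispatcher can treat internal and cloud implementations interchangeably.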

In this environment, the cloud provider needs to balance two different but equally weighty concerns: the infrastructural issues of the sort described above, and the challenge of meeting continuously changing business requirements. When the architect groups of application-specific cloud providers meet, their conversations look very different from those of utility Service cloud providers. Rather than focusing on infrastructural issues as they try to meet the common denominator of needs (“speeds and feeds”), the conversation usually revolves around how the team will meet new business process requirements given the existing set of Services and infrastructure. In many ways, these teams have a true EA conversation: the continuously changing and diverse business requirements on the one hand, and the technical capabilities on the other. These EA conversations invoke aspects of Agile methodologies and EA frameworks more than ITIL. Rather than trying to minimize the set of business processes handled by the cloud, these providers seek to continuously expand the universe of processes they address.

As we often discuss in our Licensed ZapThink Architect (LZA) SOA training courses, the job of the enterprise architecture team is to optimize the conceptual equation of producing the smallest set of Services that meet the largest number of business processes. Produce too many Services and there’s waste; produce too few and you constrain the number of business processes you can address. As new Services are introduced, the universe of business processes addressed likewise expands. Application- and process-specific cloud providers are businesses that must justify their existence, so they stay focused on the business without disrupting existing operations. Sounds like something all enterprise architecture teams should do, no?
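That conceptual equation is essentially a set-cover trade-off, and a greedy heuristic makes it concrete: repeatedly pick the Service that satisfies the most still-unmet processes. This is a toy sketch under that framing, with all Service and process names invented for illustration.

```python
# Toy illustration of the EA trade-off described above: pick a small set
# of Services that together cover the required business processes.
# Greedy set cover; all Service and process names are hypothetical.

def choose_services(candidates, required):
    """candidates maps each Service to the set of processes it supports."""
    chosen, covered = [], set()
    while covered < required:
        # Pick the Service covering the most still-unmet processes.
        best = max(candidates, key=lambda s: len(candidates[s] - covered))
        if not candidates[best] - covered:
            break  # the remaining processes cannot be covered
        chosen.append(best)
        covered |= candidates[best]
    return chosen

candidates = {
    "claims": {"file-claim", "adjust-claim"},
    "billing": {"invoice", "collect"},
    "portal": {"file-claim", "invoice"},
}
required = {"file-claim", "adjust-claim", "invoice", "collect"}
print(choose_services(candidates, required))  # ['claims', 'billing']
```

Note how "portal" never gets chosen: its processes are already covered by the other two Services, which is exactly the waste the author warns about when too many overlapping Services are produced.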

The ZapThink Take

In many ways, the discussion of architecture has been given short shrift in cloud computing conversations. In much the same way that the Service-Oriented Architecture (SOA) conversation degenerated into one about the (often unnecessary) Enterprise Service Bus (ESB), the cloud conversation is degenerating into one about the infrastructure needed to handle Service volume at scale. And where is the conversation about the business process? Unless you are planning to build a general-purpose Service provider cloud to compete with the likes of Amazon.com and others, you should be focused on where the opportunity is: in the process. And focusing on the process while keeping an eye on the technology requires an enterprise architecture perspective.

The mistake that many cloud-consuming companies are making is that the cloud is giving them an excuse not to think about enterprise architecture at all.

The thought going through the head of many a supposed architect is: “whew, thank goodness we’re putting this in the cloud so that I don’t have to invest in architecture.” Wow, what a mistake. These companies will be in for a rude awakening when they realize that all they’ve done is shift their internal mess, over which they at least had some control and visibility, to an external mess over which they have less control. Enterprise architecture doesn’t go away simply because someone else is hosting or providing your Services. Organizations that want any chance of improving their agility, flexibility, reliability, and performance need to be in charge of their own architecture. There is no other option.

Given that few cloud computing providers have your business in mind when they architect their solutions, and that even the ones with a process-specific business model and approach aren’t concerned with your specific business, it falls to the enterprise architects within the organization to plan, manage, and govern their own architecture. Once again, the refrain is that SOA is not something you buy, but something you do. Perhaps we can start hearing the same mantra about cloud computing? Or will the cloud succumb to the same short-sighted market pressure that doomed the ASP model and still plagues SaaS approaches? It’s not up to vendors to answer this question. It’s up to you … the enterprise architect. There are no short-cuts to EA.

This guest post comes courtesy of ZapThink. Ron Schmelzer, a senior analyst at ZapThink, can be reached here.


SPECIAL PARTNER OFFER

SOA and EA Training, Certification, and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.