Wednesday, June 24, 2009

In the 'Everything as a Service' era, quality of services and processes grows paramount, says HP's Purohit

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion.

As services pervade how and what IT delivers, quality assurance, done early and often, becomes the gatekeeper of success -- or the point of failure.

IT's job is evolving to make sure all services really work deep inside business processes -- regardless of their origins and sourcing. Quality in component services thus assures quality in processes, and so forms the foundation of general business conduct and productivity.

Pervasive quality is no longer an option, especially as more uses of cloud-enabled services and so-called "fluid sourcing" approaches become the norm.

A large part of making quality endemic is organizational: asserting quality in everything IT does, and enforcing quality in everything IT's internal and external partners do. Success now also means quality in how the IT department itself is run and managed.

To better learn how service-enabled testing and quality-enabling methods of running IT differently become critical mainstays of IT success, last week at HP Software Universe in Las Vegas I interviewed Robin Purohit, vice president of Software Products at HP Software and Solutions.

Here are some excerpts:
Severe restrictions on IT budgets force you to rethink things. ... What are you really good at, and do you have the skills to do it? Where can you best leverage others outside, whether it’s for a particular service you want them to run for you or for help on doing a certain project for you? How do you make sure that you can do your job really well, and still support the needs of the business while you go and use those partners?

We believe flexible outsourcing is going to really take off, just like it did back in 2001, but this time you’ll have a variety of ways to procure those services over the wire on a rateable basis from whatever you want to call them -- cloud providers, software-as-a-service (SaaS) providers, whatever. IT's job will be to make sure all that stuff works inside the business process and services they’re responsible for.

If you think of it as a marketplace of services that you're doing internally with maybe many outsource providers, making sure every one of those folks is doing their job well, and that it all comes together in some way, means that you have to have quality in everything you do, quality in everything your partners do, and quality in the end process. Things like service-enabled testing, rather than service-oriented architecture (SOA), are going to become a critical mainstream attribute of quality assurance.

... What IT governance or cloud governance is going to be about is to make sure that you have a clear view of what your expectations are on both sides. Then, you have an automatic way of measuring it and tracking against it, so you can course correct or make a decision to either bring it back internally or go to another cloud provider. That’s going to be the great thing about the cloud paradigm -- you’ll have a choice of moving from one outsource provider to another.

The most important things to get right are the organizational dynamics. As you put in governance, you bring in outside parties -- maybe you’re doing things like cloud capabilities -- you're going to get resistance. You’ve got to train your team how to embrace those things in the right way.

What we’re trying to do at HP is step up and bring advisory services to the table across everything that we do, to help people think about how they should approach this in their organization, and where they can leverage potentially industry-best practices on the process side, to accelerate their ability to get the value out of some of these new initiatives that they are partaking in.

For the last 20 years, IT organizations have been building enterprise resource planning (ERP) systems and business intelligence (BI) systems that help you run the business. Now, wouldn’t it be great if there were a suite of software to run the business of IT?

It’s all about allowing the CEO and their staffs to plan and strategize, construct and deliver, and operate services for the business in a co-ordinated fashion, and link all the decisions to business needs and checkpoints. They make sure that what they do is actually what the business wanted them to do, and, by the way, that they are spending the right money on the right business priorities. We call that the service life cycle.

... There are things that we're doing with Accenture, for example, in helping on the strategy planning side, whether it’s for IT financial management or data-center transformation. We're doing things with VMware to provide the enabling glue for this data center of the future, where things are going to be very dynamically moving around to provide the best quality of service at the best cost.

... But, users want one plan. They don’t want seven plans. If there’s one thing they’re asking us to do more, faster, and better with all of those ecosystem providers, it is to show them how they can get from their current state to that ideal future state, and do it in a coherent way.

There's no margin for error anymore.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: Hewlett-Packard.

HP adds new consulting services to smooth the enterprise path to cloud adoption

Hewlett-Packard (HP), which already has an array of services to help customers chart their way through the challenges and opportunities of cloud computing, loaded two more arrows into its quiver yesterday with the announcement of HP Cloud Discovery Workshop and HP Cloud Roadmap Service.

The Palo Alto, Calif.-based global IT giant is aiming the services at enterprise customers who are looking to efficiently drive business benefits from the cloud. The goal, according to HP, is to help customers source, secure, and govern cloud services as an integral part of their IT strategy. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

The HP Cloud Discovery Workshop is designed to help enterprise IT organizations learn about the cloud as a strategic service delivery option and how to leverage it as part of a broader IT service delivery strategy. Available in July, the workshop is designed to:
  • Educate customers on the cloud and multi-sourcing service delivery strategies

  • Outline benefits, risks and implications of the cloud within the business

  • Provide recommendations on people, process and technology for using the cloud as part of an IT and business strategy.
The HP Cloud Roadmap Service, a follow-on to the HP Cloud Discovery Workshop, offers customers a service for planning and adopting cloud as part of their service delivery strategy. The service will:
  • Provide specific recommendations for how customers should use cloud delivery and deployment models as part of their service delivery strategy

  • Recommend the right service strategy, governance and program model

  • Deliver a roadmap for cloud adoption, including a set of recommendations and benefits for what customers can achieve in practical and incremental steps
HP already offers an array of cloud services to enterprise customers. One of its newest offerings, prior to today's announcement, was Cloud Assure, which consists of HP services and software -- including HP Application Security Center, HP Performance Center, and HP Business Availability Center -- delivered to customers via HP Software as a Service.

Cloud and upgraded computing future brightens despite overcast economy, Microsoft-sponsored survey finds

Even in this global recession, one-third of IT organizations plan to increase virtualization, including cloud-computing initiatives, in the next two years, according to a survey conducted by Harris Interactive and sponsored by Microsoft.

Seeing the downturn as an opportunity to upgrade, nearly two-thirds of 1,200 IT professionals surveyed in the U.S., U.K., Germany, and Japan plan to invest in new infrastructure technology, according to results released this week.

While news in the past year focused on budget cuts for IT, 98 percent of those surveyed by Harris are planning either to maintain or increase investment in infrastructure technologies. Top priorities for those taking the bold approach of investing and innovating despite tight budgetary times include:
  • 42% plan increased investment in virtualization.

  • 36% plan increased investment in security.

  • 24% plan increased investment in systems management.

  • 16% plan increased investment in cloud computing.
Not surprisingly, keeping the lights on in the glass house is a bigger priority than innovation.

When the question comes down to investing in innovation versus “keeping the lights on,” the IT pros in the four countries surveyed responded that 37 percent of their budget goes to innovation while 63 percent goes to keeping the lights on.

Innovation is less of a priority in the U.S. than it is in the other countries. When the percentages are broken out in the Harris report:
  • US: 29% innovation, 71% “keeping the lights on.”

  • UK: 41% innovation, 59% “keeping the lights on.”

  • Japan: 41% innovation, 59% “keeping the lights on.”

  • Germany: 35% innovation, 65% “keeping the lights on.”
These percentages may reflect the fact that U.S. IT has been hit harder by the recession than IT in the other countries, said Bob Kelly, corporate vice president of infrastructure server marketing at Microsoft. But the overall percentages show a slight trend toward innovation, he said during a teleconference highlighting the survey results.

“The ratio of 63 to 37 percent is actually slightly changed,” Kelly said. “About two years ago when we did similar research we saw that it was 70/30. So really, in this downturn we’re seeing an increased focus on innovation.”

Further, he said, the current research indicates that companies fall into two categories when it comes to dealing with the recession. One group is retrenching and just holding on to its existing IT infrastructure while waiting for the recovery. The second group actually views IT innovation strategically as a way to pull out of the recession.

“They’re seeing this as their strategic opportunity to really make hay and move their business forward to accelerate out the other side of this economic downturn,” Kelly said.

The survey confirmed Microsoft’s in-house belief that IT budgets still have room for investment in infrastructure innovations, he said. The Redmond folks hope that will include convincing corporate IT departments, which pretty much skipped the Vista era, to finally move from Windows XP to Windows 7.

More survey highlights are available at the Microsoft Core Infrastructure Optimization site.

BriefingsDirect contributor Rich Seeley provided research and editorial assistance on this post. He can be reached at RichSeeley@aol.com.

Tuesday, June 23, 2009

Web data gains some due respect as Kapow eases it into mission-critical enterprise uses

In this Web 2.0 world, enterprises increasingly need data from public websites, including news sources such as CNN and even social networking sites such as Facebook, for integration into business intelligence (BI) and service-oriented and web-oriented architecture (SOA/WOA) applications.

Kapow Technologies, which provides tools designed to speed finding, downloading, cleaning, and integrating data and content from the web, is releasing a new version of Kapow Web Data Server (formerly the Kapow Mashup Server) today. The new version includes a handy new “URL Blocking” feature that screens out web junk, such as banner ads, ensuring that only the data needed for the application is being downloaded. [Disclosure: Kapow Technologies is a sponsor of BriefingsDirect podcasts.]

Recently Stefan Andreasen, founder and CTO of Palo Alto, Calif.-based Kapow, demonstrated his company's value around managing data services quickly, without hand coding. At the Web 2.0 Expo in April, he demonstrated an iPhone mashup application created using Kapow tools and IBM Rational EGL as an example of the conference's "Power of Less" theme.

“Traditionally, it would have taken at least three months and significant IT resources to create and integrate a web data source and serve it to a mobile device," Andreasen explained prior to the demo, "but today, through rapid application development technology from Kapow Technologies and IBM, two developers spent a total of three hours creating a dynamic personalized web application for the iPhone."

Kapow boasts that the Web Data Server 7.0 is “the industry’s only platform that can access, enrich and serve web data with complete assurance ― 100 percent of data, 100 percent of the time.”

The value goes beyond convenience. More than ever, web-based content plays an essential role in many business processes and analytical presentations. Operational and business-ecology BI requires fast and easy integration of web-based content and data assets.

With Kapow's patented visual development and web data automation platform, customers can gain data access to any intranet or extranet business application, as well as any website or application on the web, the company says. This cuts out manual approaches, which are now quite common.

Rapid data access is vital for today's agile application development -- mobile, WOA, and other types of agile business applications, Andreasen says. Regardless of whether or not developers have programmatic access via an application programming interface (API), Kapow provides easy access to enterprise and public web data, then extracts and transforms it into a standard web service or data feed, he explains.
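The underlying pattern such tools automate is the one many teams still hand-code today: fetch a page, screen out junk URLs such as ad content, extract the fields of interest, and republish them as a structured feed. As a rough, hypothetical illustration of that pattern only -- the URLs, CSS selectors, and field names below are invented, and this is not Kapow's API -- a hand-rolled version might look like this:

    # Hypothetical sketch of the hand-coded pattern that web data automation
    # tools aim to replace: fetch, filter out junk URLs, extract fields, and
    # serve the result as JSON. This is NOT Kapow's API; the URL patterns,
    # CSS selectors, and field names are invented for illustration.
    import json
    import re

    import requests
    from bs4 import BeautifulSoup

    JUNK_URL_PATTERNS = [re.compile(p) for p in (r"/ads?/", r"doubleclick", r"banner")]

    def is_junk(url):
        # Crude stand-in for a "URL blocking" filter that skips ad content.
        return any(p.search(url) for p in JUNK_URL_PATTERNS)

    def extract_headlines(page_url):
        # Pull headline text and links from a news page into plain records.
        html = requests.get(page_url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        records = []
        for link in soup.select("h2 a[href]"):  # selector is illustrative only
            href = link["href"]
            if is_junk(href):
                continue
            records.append({"title": link.get_text(strip=True), "url": href})
        return records

    if __name__ == "__main__":
        # Emit the extracted records as a JSON feed that a downstream BI or
        # mashup application could consume.
        print(json.dumps(extract_headlines("https://example.com/news"), indent=2))

The point, as Andreasen argues, is that a product like the Web Data Server aims to replace this kind of brittle, hand-maintained scraping code with visually built, centrally managed robots that expose the same data as a governed service.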

A key element of the data server is the Kapow robots, which the company says “use standard web protocols and security mechanisms to automate the navigation and interaction with any web application or website, providing secure and reliable access to the underlying data and business logic.”

Offering an example of an application built with its technologies, the company points to a hypothetical sales app providing “a full 360-degree view of prospects and customers by automatically extracting data from internal customer relationship management (CRM) systems, subscription data feeds such as Edgar Online, corporate sites, blogs and social media sites including LinkedIn, Technorati and Facebook.”

New features in the Kapow Web Data Server 7.0 version include:
  • "100 Percent Browser Engine Compliance," which handles complex web data sources, including JavaScript and AJAX intensive Websites.

  • Intuitive point-and-click integrated development environment (IDE) for “surgical data extraction accuracy with no coding.”

  • Scalability improvements offering real-time performance optimization and the ability to download large files directly to disk for enterprise-scale projects.

  • Browser-Based Scheduler, which provides automation of data refresh and synchronization schedules.

  • Authentication for RoboServer, which provides “seamless integration with existing enterprise security and authentication systems.”
Availability and Pricing

Further information and pricing are available at http://kapowtech.com/index.php/products/overview.

BriefingsDirect contributor Rich Seeley provided research and editorial assistance on this post. He can be reached at RichSeeley@aol.com.

SaaS delivery of IT lifecycle and quality management functions evolves toward an IT service-delivery solutions approach

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion.

When people think of Software as a Service (SaaS) and web services delivery, they often envision business applications like salesforce automation, email, and human resources management.

But Hewlett-Packard has been delivering quality assurance and applications performance management functions via SaaS for years. Its Business Technology Optimization (BTO) services, part of its Mercury acquisition, made the leap to SaaS delivery long before web-based business applications became popular. You could say SaaS for developers and testers -- code warriors -- paved the way for SaaS for salesmen -- road warriors.

Now, as interest in cloud computing ramps up, the ability to deliver more aspects of IT lifecycle and quality management, along with broader project and portfolio oversight values, is also ramping up. Yet a missing ingredient for IT innovators has been how to begin and how to organize these sourcing changes effectively.

Such a SaaS whole, greater than the sum of its web services parts, will better help IT managers do more with less, and provide better applications faster as well.

To better understand the expanding role of SaaS within IT, and how professional services can newly help in the transition to holistic SaaS use by IT departments, I interviewed two executives last week at HP's Software Universe conference in Las Vegas: Scott Kupor, vice president and general manager of Software-as-a-Service, and Anand Eswaran, vice president of professional services, both in HP's Software and Solutions group.

Here are some excerpts:
Kupor: At HP for the last nine years, we've been selling IT management applications as a service delivery option. If you think about things like testing, performance management, or project and portfolio management (PPM), for example, those are traditional IT applications that we’ve been selling with this similar delivery model.

What we’ve been hearing from customers today at the conference are two key things. Number one, the cost benefits that initially drove them to SaaS are ever present and incredibly more important in this financial environment. The benefits are really coming to fruition. The second is that we’re starting to see a migration of SaaS from what was traditionally testing services toward other more complex and more customizable IT management applications.

We’re hearing a lot of interest from customers around IT service management (ITSM), service desk applications, and service management applications. These are things that have traditionally been the domain of inside-the-firewall deployments. Customers are now getting comfortable with the SaaS model so much so that they’re looking at those applications as well for deployment in a SaaS environment.

Eswaran: We’ve made a very conscious shift from what was inherently deployment of products. The approach right now is transformed into what business outcomes can we achieve for the customer, which is something which we would have been unable to do some time back.

We have changed focus now from deploying a single product set to achieving outcomes like reduction of outages by 40 percent, increasing quality, getting service-level agreements to a certain point, and guaranteeing that level of service. That’s been hugely helpful.

All of what we do at the back end, whether it’s how we leverage SaaS, what products we use, what software we use, what consulting and professional services we use, all of that is going to be transparent to the customer. What they care about is a service, which we will deliver to the customer. SaaS enables us to get to that service, get to that time-to-market, much faster.

This all gets us to the point of what customers refer to as "killing the game," getting to a point of being able to offer outcome-based pricing and guaranteeing that outcome, as opposed to the traditional consulting model of billing rates and hours.

Kupor: Remember, all these are complex IT management applications; they have third-party integrations. They have custom code that customers are building on top. Those are all domains of expertise for the services organization. Through the work that [Anand's team] and we are doing together, we can deliver a cost-effective delivery option for customers, but without having to sacrifice the complexity, integration, and customization opportunities that they demand for these applications.

We’ve heard this a lot from our customers today, that they’re actually interested in looking at how, as an IT department, they can deploy their own applications in a third-party cloud environment. You hear a lot of people talking about infrastructure on demand or computing power on demand.

People are looking toward these third-party products as a way to basically take an application they’ve built in-house and deploy it externally in, perhaps, an Amazon environment or a Microsoft environment. The interesting opportunity for us, as a management vendor, is that customers will still need the same level of performance, availability, security, and data integrity associated with applications that live in a cloud environment as they have come to expect for applications that live inside their corporate firewall.

We’ve been talking to customers a lot about something called Cloud Assure, which is the first service offering that HP has brought to market to help customers solve those management problems for applications they choose to deploy in a cloud-based environment.

That’s really what IT’s job is -- to help deploy business applications and govern the integrity, security, the authenticity, and the performance of those applications.

Eswaran: Everything is eventually going to get transformed into a service for the customer, so that they can actually focus on the core business they are in. When you have things transformed into a service, everything we do to offer that service should be transparent to the customer.

It becomes a services-led engagement, but that’s where we clearly differentiate "services" from "service," the singular, which is the eventual outcome the customer needs to create for themselves. That’s why we really partner well between SaaS and Professional Services. We believe that we are on a path of convergence to eventually get to offering business value and a service to a customer.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: Hewlett-Packard.

Platform applies HPC lessons to 'private' cloud creation, operations, efficiency

More enterprises are looking to the cloud compute model -- both public and private -- to efficiently support myriad applications and data workloads. Platform Computing, a pioneer in high-performance computing (HPC), is now jumping into the fray with a private cloud management platform: Platform ISF.

Platform ISF, which becomes the centerpiece of the company's cloud computing strategy, creates a shared IT infrastructure from physical and virtual resource pools, to deliver application hosting environments, according to automated workload and resource scheduling policies. The Markham, Ont. company said its new offering will be released in beta this week, with general availability planned for the fall.

Platform ISF combines Platform's resource-sharing technology, EGO, with its virtual machine orchestrator (VMO) to deliver an infrastructure-sharing platform. Platform has also built in additional capabilities for self-service, reporting, and billing -- helping to make clouds a bill-as-you-go affair (a fringe benefit of IT shared services). This is also expected to drastically reduce the costs of IT, as resource utilization levels increase thanks to resource sharing.

Platform ISF is a technology-agnostic cloud computing management platform that supports any collection of hardware, operating systems and virtual machines, said Songnian Zhou, CEO, chairman and co-founder. This allows organizations to leverage existing resources and corporate standards, as they build and deploy private clouds.

Platform's private cloud software, an elevation of its grid capabilities, allows implementers to access IT infrastructure via portals using visual interfaces, or programmatically via Java, web services, .NET, and other popular frameworks, Zhou told me last week in a briefing. Platform ISF offers a "meta template" of workload support environments, allowing for flexible requests for resources, all of which can be charged back in granular fashion to the actual consumers of the IT resource services.
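Platform hasn't detailed its API in this announcement, so purely as a hypothetical sketch of what a programmatic, chargeback-aware request might look like, consider an invented REST-style call; the endpoint, payload fields, and response shape below are placeholders, not Platform ISF's actual interface:

    # Hypothetical sketch only: requesting an application hosting environment
    # from a resource-sharing management layer and tagging it for chargeback.
    # The endpoint path, payload fields, and response are invented; this is
    # not Platform ISF's actual API.
    import requests

    ISF_API = "https://isf.example.internal/api/v1"  # placeholder address

    def request_environment(template, nodes, cost_center):
        # Ask the management layer for an environment built from a workload
        # "meta template," sized to the requested number of nodes.
        payload = {
            "template": template,
            "nodes": nodes,
            "chargeback": {"cost_center": cost_center},  # who gets billed
        }
        resp = requests.post(ISF_API + "/environments", json=payload, timeout=30)
        resp.raise_for_status()
        return resp.json()  # expected to carry an environment id and status

    if __name__ == "__main__":
        env = request_environment("bi-analytics-cluster", nodes=4,
                                  cost_center="FINANCE-BI")
        print(env)

The specifics will differ, but the shape of the interaction is the point: self-service requests against shared physical and virtual pools, with usage attributed back to the consuming group.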

While third-party "public" clouds can offer raw infrastructure and computer resources on a pay-per-use basis, most enterprises will probably use a combination, or hybrid, of both internal and public cloud resources. Platform ISF acts as the management layer for pulling such disparate resources into a unified environment and is independent of location or ownership of resources.

And Platform ISF is governance-agnostic, allowing third-party governance to additionally manage how such cloud services are used, provisioned, and automated -- to an IT department's requirements.

While Platform has been around for a long time, it has hardly become a household word. This may not be a bad thing, according to Derrick Harris at GigaOm:
Of course, Platform is no IT behemoth, which also could work in its favor. While they might consist of useful pieces, cloud offerings from companies like IBM, Microsoft and HP can be difficult to grasp. They can involve an array of systems management tools, servers and other products that leave customers dizzy — and potentially locked in.
Jon Brodkin, writing at The Industry Standard, quotes Forrester analyst James Staten, who enumerates other players in the cloud management field -- 3tera, Elastra, Enomaly, and Zimory, and the open source Eucalyptus -- but says that all of them, unlike Platform, lack at least one of the elements necessary to build a cloud.

I was impressed with Platform's heritage of providing HPC grid services for 15 years as a precursor to cloud street cred. Platform's approach can be used by enterprise IT departments to move to cloud benefits, on their terms, rather than the fantasy notion of cloud being best approached without IT.

As Zhou says, "Cloud is built, not bought." I couldn't agree more.

Expect Platform ISF to be used on business intelligence workloads early on, with J2EE and PaaS to follow close behind. Oh, and we ought to expect more HPC loads and requirements to be met via public-private tag-team clouds, too.

Compuware spruces up IT portfolio management with Changepoint refresh

By David A. Kelly and Heather Ashton

This guest post comes courtesy of David A. Kelly, principal analyst, and Heather Ashton, senior analyst, at Upside Research. You can reach them here.


It’s hard to improve if you don’t have a way to measure how you’re doing. That’s one of the reasons why IT portfolio management solutions have started to generate a lot of interest over the past few years. IT portfolio management solutions help organizations manage IT costs, make better IT funding decisions, and align business and IT objectives.

At the Project Portfolio Management Summit in California on June 15, Compuware unveiled a juiced-up version of its IT portfolio management solution, Changepoint, identifying agile development and delivery as key components of increasing value to customers over the next 12 likely-recessionary months. [Disclosure: Compuware is a sponsor of BriefingsDirect podcasts.]

As a business-centric IT management solution, Changepoint (get a free Upside Research report) is designed to help IT and business managers gain better visibility into the enterprise IT environment. As most enterprise IT departments have experienced, the investment lifecycle decision-making process for IT has historically been a pain point. In most instances, IT departments have been plagued with either over-allocating or under-allocating their funds. Changepoint is designed to provide executive-level visibility into IT spending, building trust between IT and management. The reality of shrinking IT budgets makes this visibility a necessity as organizations seek to optimize operational demands for IT resources.

In the face of the “new economy,” IT portfolio management solutions are becoming a necessary tool for IT to meet today’s economic challenges. Compuware is hoping that the new features it has added to Changepoint will increase usability and end-user adoption. Among the deliverables that Compuware announced in its year-long roadmap for Changepoint are added managed services to assist with optimizing Changepoint ROI; bundling in Vantage (Compuware’s IT service management solution) to monitor usage and ensure adoption of Changepoint; and leveraging industry-standard middleware to facilitate integration with financial, HR, and help desk applications.

The first deliverable is the Agile Accelerator, designed to deliver best practices for managing agile software development projects. Compuware is tapping into the movement by IT departments to use agile development and delivery to improve responsiveness to the business.

Not to rock the boat too much, and to reassure those IT development groups that prefer to stick to more traditional waterfall-type projects, Changepoint will continue to support existing methodologies while also encouraging new approaches such as agile delivery to speed time-to-market, a critical component of achieving ROI on IT projects.

The end result is the ability for an IT department running some agile projects to manage those projects within the broader scope of the overall project portfolio.

While recent economic conditions make it difficult for some IT organizations to invest in new technologies at this point, it’s always worth it to step back and evaluate the decision-making process around IT investments and the potential value that IT portfolio management solutions might bring.

Organizations with existing and effective application and IT metrics or a limited number of projects or applications may not find enough value to warrant IT portfolio management solutions. But any organization managing numerous projects, dynamic business environments, limited investment resources or the need for more effective and efficient decision-making processes may find significant value in portfolio management.

This guest post comes courtesy of David A. Kelly and Heather Ashton at Upside Research. You can reach them here.

Monday, June 22, 2009

Eclipse plug-in puts TOGAF 9 into IDE collaboration mode for architects

The Open Group, a technology-neutral consortium, today released an Eclipse plug-in that puts TOGAF 9 capabilities literally at your fingertips. The TOGAF Customizer was donated to The Open Group by Capgemini.

Based on the Eclipse Process Framework (EPF), an open-source project managed by the Eclipse Foundation, the TOGAF Customizer can be used to implement TOGAF 9 more easily. TOGAF is an industry-consensus framework and method for enterprise architecture (EA) developed by The Open Group, and released in February. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

The new customizer contains all the content of TOGAF 9 in a structured and editable form, including guidelines, concepts, and checklists, as well as detailed work breakdown structures for the framework’s new and improved architecture development method (ADM).

In a nutshell, moving TOGAF into an industry-standard IDE brings a Web 2.0 flavor to the document, making it akin to a wiki. What's more, collaborating via an IDE's built-in communications and sharing attributes -- as well as version management -- can make TOGAF more into a "living" document, and eases innovation and ongoing improvement.

With the new tool, users can align their EA practices with TOGAF 9 and create organization-specific versions of the standard that represent the concerns of their unique business and technology environments. It all goes into and out of a common repository. In addition, the new tool makes it much easier for enterprise architects to integrate TOGAF with other common EA frameworks, such as Zachman, FEAF and DoDAF.

Key features and benefits of the TOGAF Customizer include:
  • Specific constructs for tasks and steps enable processes to be formally defined with related content, such as inputs, outputs, roles and responsibilities

  • Supporting editor allows users to make changes to the standard TOGAF framework content and tailor it to their specific organizational context

  • Underlying content management system supports group collaboration, editing and versioning

  • Plug-in architecture allows new content packages, including document templates, to be created and linked to TOGAF
The new plug-in is available for download from: www.opengroup.org/togaf/.

Many architects are familiar with the development lifecycle, and many developers have designs on becoming architects, so the melding of two essential IT functions on a common palette, so to speak, makes a great deal of sense.

I can hardly wait for what we've seen so far with Google Wave to come into prime time. Combining what Google Wave, the Eclipse IDE, and TOGAF 9 do will make for a powerfully productive future.

And, of course, we should never underestimate the power of the community effect. I expect we'll see quite a bit of novel innovation from how users leverage and expand on the value that putting the framework in an IDE only begins to deliver.