Friday, June 27, 2008

Progress Software continues SOA buying spree with Mindreef acquisition

Progress Software has followed its acquisition of IONA Technologies earlier this week with another acquisition, this time picking up privately held Mindreef, Inc., the Hollis, N.H.-based maker of the SOAPscope product family.

SOAPscope's quality and validation tools should dovetail quite well with Progress' quality of service (QoS) emphasis. There's no need to poke around for overlaps or redundancies on this one; it seems a clear addition to the burgeoning Progress solution set.

Included in the SOAPscope family are SOAPscope Server, SOAPscope Architect, and SOAPscope Developer, all of which will join the Progress Actional SOA Management product family. This combination of Actional and Mindreef service-oriented architecture (SOA) governance products provides visibility, control, and validation both across the entire lifecycle of an SOA initiative and at each stage of an SOA deployment, say the companies.

Financial details of the acquisition were not announced. In fact, as of this writing, the Progress Web site is silent on the acquisition, while the Mindreef site has been updated with its status as a unit of Progress and a FAQ for customers.

According to the FAQ, full details of the acquisition strategy will be announced in mid-July, when Progress, of Bedford, Mass., hosts a webinar for all Mindreef customers. What we do know, however, is that the current Mindreef products will keep their own names, at least for the time being, while Mindreef will adopt the Progress company name.

The FAQ also addresses the question of how the Mindreef acquisition fits in with the Progress strategy:

In January 2008, Progress introduced the concept of real-world SOA, which embodies three key challenges: distribution, quality of service (QoS) and heterogeneity. Mindreef’s core competency in quality and validation tools directly aligns with QoS, where QoS helps organizations strive for a SOA that is fast, reliable, scalable and secure, and thus strengthens our overall go-to-market strategy moving into 2009.

Earlier this week, as I reported on Wednesday, Progress bought IONA Technologies for a little over $100 million, which broadened Progress' position in the SOA marketplace.

Last August, I had a podcast discussion with Colleen Smith, managing director of Software as a Service for Progress, in which we discussed the company's acquisition strategies, among other things.

I took my first briefing with Mindreef, given their neighborly proximity, about three years ago. The seasoned team had a hit on their hands with SOAPscope, and their timing in the SOA market was great. But I'm not sure the company grew as was hoped, and perhaps the fast evolution of SOA beyond a WS-* emphasis played a role. SOAP hasn't blossomed to quite the degree some people had forecast.

In any event, I expect this was a happy transition.

Eclipse Foundation delivers Ganymede train with 23 cars, but where are the cloud on-ramps?

Not all trains run on time, but the Eclipse Foundation has kept to its schedule with its annual release train, this year named Ganymede.

For the third year in a row, the Eclipse community has delivered, on the same day as in previous years, numerous software updates across a wide range of projects.

This year's iteration includes software that spans 23 projects and represents over 18 million lines of code. Highlights of the release include the new p2 provisioning platform, new Equinox security features, new Ecore modeling tools, and support for service-oriented architecture (SOA).

Now that the Eclipse Foundation has proven its mettle with consistent and complete download packages, it's time to take this puppy to the cloud. I'd like to see more integration between Eclipse products and cloud-based development, integration and deployment services. And I'm not alone on these wants, no siree.

Amazon Web Services has proven the demand and acceptance. A modern IDE needs the cloud hand-offs and the test and real-world performance proofing that cloud and platform as a service (PaaS) offerings now provide. How about a hybrid model where the IDE remains local but more application lifecycle management, test, and debug features come as services?

How about pairing such a hybrid model with an easy way to choose among a variety of cloud deployment partners and models? Build, test, and deploy across many providers and models, all close in the bosom of Eclipse. All supported by the community. We could call it Eclipse Cloud Services (ECS). I'm in.

Well, until IBM figures that out, here's the latest and greatest on earth-bound Eclipse. Key features of the release for SOA support include:
  • SCA Designer, which provides a graphical interface for developers who wish to create composite applications using the SCA 1.0 standard.

  • Policy Editor, a collection of editors and validators that makes it easy for developers to construct and manipulate XML expressions that conform to the WS-Policy W3C standard.

  • A Business Process Modeling Notation (BPMN) Editor that allows users to construct and extend the BPMN 1.1 standard notation to illustrate business processes.
For Equinox and runtime projects:
  • A new provisioning system, called p2, makes it easier for Eclipse users to install and update Eclipse.

  • New security features, including a preferences-like storage for sensitive data such as passwords and login credentials and the ability to easily use the Java authentication service (JAAS) in Equinox.

  • Rich Ajax Platform (RAP) 1.1, with new features, including the ability to customize the look and feel with Presentation Factories and CSS and the ability to store application state information on a per user basis.

  • The Eclipse Communication Framework (ECF), with real-time shared editing and other communications features to allow developers to communicate and collaborate from within Eclipse.
Developer tools include:
  • A new JavaScript IDE, called JSDT, provides the same level of support for JavaScript as the JDT provides for Java. New features include code completion, quick fix, formatting and validation.

  • The Business Intelligence and Reporting Tools (BIRT) project now provides an improved JavaScript editor and a new JavaScript debugger for debugging report event handlers.

  • DTP has added a new graphical SQL query editor, called the SQL Query Builder, and improved usability of connection profile creation and management for users and adopters/extenders.
More information on all of the new features can be found at the Ganymede Web site.
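To make the Policy Editor's domain concrete, here is a minimal sketch of a WS-Policy expression in the W3C normal form (an ExactlyOne choice over All alternatives) of the kind such an editor helps developers construct and validate. The wsp namespace is the W3C WS-Policy 1.5 namespace; the sp:TransportBinding assertion and the helper function are illustrative assumptions, not Eclipse API:

```python
# Minimal sketch: a WS-Policy 1.5 expression in normal form.
# The sp:TransportBinding security assertion is illustrative only.
import xml.etree.ElementTree as ET

WSP = "http://www.w3.org/ns/ws-policy"  # W3C WS-Policy 1.5 namespace

policy_xml = f"""
<wsp:Policy xmlns:wsp="{WSP}"
            xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
  <wsp:ExactlyOne>
    <wsp:All>
      <sp:TransportBinding/>
    </wsp:All>
  </wsp:ExactlyOne>
</wsp:Policy>
"""

def alternatives(policy_doc):
    """Return the policy alternatives (wsp:All groups), each as a
    list of namespace-qualified assertion tag names."""
    root = ET.fromstring(policy_doc)
    exactly_one = root.find(f"{{{WSP}}}ExactlyOne")
    return [[child.tag for child in all_elem]
            for all_elem in exactly_one.findall(f"{{{WSP}}}All")]

print(alternatives(policy_xml))
```

A validator along these lines is what keeps hand-edited policy XML honest; the Ganymede Policy Editor wraps that kind of checking in editors and validators so developers need not do it by hand.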

The idea behind the yearly release train, according to the Eclipse Foundation, is to provide predictability and reliability for developers in an effort to promote commercial adoption of the Eclipse community's projects.

ZDNet blogger Ed Burnette details how Eclipse maneuvered pieces of the new release out to mirror sites in an attempt to avoid the type of logjam created when the new Firefox went live recently. Apparently, it was only partly successful, although, according to Ed, things have since smoothed out.

Ganymede, named after one of the moons of Jupiter, eclipsed the previous releases, Callisto and Europa, also named for moons of Jupiter. Last year's Europa release train encompassed 21 projects, while Callisto, the 2006 release, included only 10.

Ganymede is available for download, in one of seven packages, on the Eclipse Web site.

Coveo G2B for CRM provides search-based single customer view from disparate content sources

As companies search for the holy grail of a "single view" of the customer, Coveo Solutions, Inc., which provides search-powered enterprise information access, has unveiled its Coveo G2B for CRM, a way to provide a view of all relevant customer data from a wide variety of sources.

G2B for CRM brings together data from such sources as salesforce.com, Siebel Systems, corporate intranets, tech support emails, customer support databases, and enterprise resource planning (ERP) systems.

It also provides advanced content analytics, giving workers the ability to present customer data graphically. Presenting customer data as a spreadsheet or a pie chart aids management and workers in planning, forecasting, and resource management. This can eliminate the need for time-consuming database queries and reporting, even when sifting through millions of documents.
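The "single view" idea above boils down to folding partial customer records from disparate systems into one record keyed on a shared customer identifier. Here is a hedged sketch of that unification step; the source names and fields are invented for illustration and are not Coveo's API:

```python
# Hedged sketch of a "single customer view": merge partial records from
# disparate systems (CRM, support database, ERP) keyed on customer ID.

def unify(records):
    """Fold (source, record) pairs into one view per customer ID.
    Later sources fill in fields that earlier ones lacked."""
    view = {}
    for source, rec in records:
        merged = view.setdefault(rec["customer_id"], {"sources": []})
        merged["sources"].append(source)
        for key, value in rec.items():
            if key != "customer_id":
                merged.setdefault(key, value)
    return view

records = [
    ("salesforce", {"customer_id": "C42", "name": "Acme Corp", "owner": "jdoe"}),
    ("support_db", {"customer_id": "C42", "open_tickets": 3}),
    ("erp",        {"customer_id": "C42", "balance_due": 1250.00}),
]

single_view = unify(records)
print(single_view["C42"])
```

In practice a search-driven product layers entity extraction and relevance ranking on top of this kind of merge, but the payoff is the same: one record per customer instead of three queries against three systems.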

Coveo's approach shows that the productivity benefits of enterprise search are still being explored. Google (with its appliances), Microsoft (with FAST Search technology) and Autonomy certainly think so.

Coveo, based in Newton, Mass., and Quebec, built G2B for CRM on its Coveo Enterprise Search platform technology. The product is part of the company's G2B Information Access Suite, which allows knowledge workers to obtain a unified view of enterprise information.

I'm interested in seeing more mashups of search from across many enterprise and web-based providers (including social networks) to give even more complete and vetted views of customers, suppliers, partners, employees and any others that relate to business activities or ecologies. The information is out there, just waiting to be harvested and managed.

And when are we going to get a single view of IT assets in association with business processes? Increasingly, searching IT devices and resources is playing a role in enterprise search too. How about not only getting a single view of the customer but also instant views of the right systems through which to reach them, or the right integration avenues?

Let's search people and systems and gather insights to the systems context of business along with the social aspects. People, process, systems and search. That's the ticket.

Wednesday, June 25, 2008

Progress to buy IONA in another SOA infrastructure vendor mashup

Call it a Route 128 SOA date. Progress Software is buying IONA Technologies for a little over $100 million in actual value, broadening Progress's service oriented architecture (SOA) portfolio significantly and catapulting Progress into the open source software infrastructure arena.

Progress said Wednesday it is offering $4.05 per share in cash for IONA, a total equity value of $162 million, or $106 million net of cash and marketable securities. Both companies are publicly traded. The transaction is expected to be completed in September.

Both companies come from a long but quite distinct lineage, and both have their U.S. headquarters within a 20-minute drive around Boston's Route 128 technology corridor, from Bedford to Waltham. Progress has its roots in tightly coupled, client-server tools (Progress 4GL) and runtime platforms, while IONA, based in Dublin, Ireland, hails from the CORBA and middleware messaging and integration space.

Only a SOA mashup could make good bedfellows of these. That's because one company's lineage reaches back to the origins of client-server computing (Progress was founded in 1981), while the other reaches back to the emergence of the mainframe world into distributed computing (IONA was formed in 1991). SOA, of course, aims to make these worlds play well together and then build new services on top of the service-enabled older assets to offer business process advantages and efficiencies.

And yet despite their disparate origins, the companies match up quite well, on product capabilities, locations, direction and client verticals. Progress has already been acquiring in a SOA direction with the January 2006 acquisition of Actional Corp., which became the Progress unit focused on SOA management, security, and governance. IONA made a bold move to embrace open source for its SOA portfolio, supporting an open version of its Artix products, while also buying LogicBlaze in April 2007.

Combined with IONA's ESB and middleware products, Progress will emerge as a full-feature SOA infrastructure provider, but with a large installed base in deployed client-server and web applications and a strong presence in IONA's stronghold of finance and telecoms. [Disclosure: IONA is a sponsor of BriefingsDirect podcasts.]

While there is overlap between the two companies' registry/repository capabilities, separate yet interoperable registry/repositories can operate well side by side, and any consolidation is fairly straightforward. In other words, these products could work well together and then combine. The fact that both companies support SOA governance capabilities indicates more of an overlap than a conflict.

UPDATE: And I'm reminded too that the Sonic purchase brought an early ESB function set to Progress. This means the combined companies will be supporting and offering several flavors of ESB. Given that IONA already offers several ESB approaches -- both commercial and open source -- this may cause confusion for customers of both companies. A clear and logical ESB story will need to come from the companies.

Given we've seen more federated approaches to ESB in recent memory, there may well be a Progress Sonic-Artix-FUSE ecology play in the works, for a more complete ESB solution comprised of several actual products and open source options.

UPDATE: More input from bloggers Tony Baer and Joe McKendrick.

The larger question proffered by the merger comes in the relationship between commercial products and open source models. Progress has not shown as vigorous an interest in open source as IONA, which became practically a benefactor to the Apache Foundation on several notable SOA projects. Progress is very much a licensed-software company at a time when the software industry is shifting to subscription- and services-based approaches.

It's no surprise that IONA has been sold. IONA made it clear last February that it would enter into acquisition talks. A rumored suitor was Software AG, which had recently absorbed WebMethods/Infravio. There were questions, too, on whether IONA's open source strategy would survive any such acquisition.

I have to believe that the Progress-IONA merger means that Progress will welcome the diversification of business models that the IONA open source strategy entails, meaning a segue from per-server and per-seat licensing to more of a services, support, maintenance and training revenue model. The two companies can enjoy the commercial maturity of their current products while benefiting from the lower R&D and development costs of community-based projects for newer products.

We'll have to wait to see how aggressively the soon-to-be expanded Progress Software ramps up on the open source SOA strategy. What's nice about open source SOA is that it plays well on offense and defense, meaning the supplier can offer the market products and services that build on its strengths while attacking its competitors on the revenue sources they hold most dear.

The partnering implications of the Progress-IONA merger will be important. A well-integrated Progress-IONA may be of significant interest to global systems integrators, as they seek options among infrastructure suppliers and certainly appreciate a support and services model. Progress may find itself more in an ecology play with other open source providers, from HP to Novell to Ingres to, gasp, Sun (not too far away campus-wise in Burlington). Maybe Microsoft is serious about its newly forged openness and focus on supporting rather than subverting enterprise heterogeneity, and so may find Progress a ramping partner.

The merger also shifts Progress's competitive landscape, putting it more up against IBM, Oracle, and Red Hat. I think the open source database play for Progress therefore has some interesting implications. Perhaps Ingres may become more than a partner.

In the meantime, Progress can help its applications clients move to more modern computing paradigms, while IONA can help on the back end with integration and high-performance transactions while broadening Progress's share of wallet in more enterprises and verticals. And now, voilà, Progress is an open source company. The best part of the deal, therefore, is how these two companies' installed bases give the combined firm a steady yet diversified revenue stream that should build on their legacies -- and their customers' legacies -- well.

Tuesday, June 24, 2008

Sonoa Systems appeals to industry powerhouses with ServiceNet appliance

Sonoa Systems, Inc., which provides software and appliances to ensure data protection and access control in customer-facing applications, has added some world-class enterprises to its customer list.

The Santa Clara, Calif.-based company today announced that Pfizer, the world's largest research-based pharmaceutical company, Warner Music, and Insights OnDemand, a provider of on-demand business intelligence solutions, will be taking advantage of Sonoa's technology backbone. These companies join such other powerhouses as JP Morgan Chase, Level 3, Landslide, Net One and Aizu University.

Sonoa's ServiceNet appliance lets enterprises and software-as-a-service (SaaS) providers apply secure, flexible access control to manage a large number of clients without the risk of data exposure. The appliance can also monitor service-level agreements (SLAs) to ensure regulatory compliance.

Another key feature is interoperability, designed to reduce the time and effort required to add new clients, even those with different protocol or security requirements. It accomplishes this by mediating across versions, which can be done without generating new code.
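The "mediating across versions without generating new code" claim is easiest to picture as a declarative field map: onboarding a client on an older message schema means adding a mapping entry, not writing integration code. Here is a hedged sketch of that idea; the field names and versions are invented for illustration and do not reflect ServiceNet's actual configuration:

```python
# Hedged sketch of protocol-version mediation: a declarative field map
# converts a v1 client message to the v2 schema on the wire.
# Field names and versions are invented for illustration.

V1_TO_V2 = {
    "acct": "account_id",
    "amt":  "amount",
}

def mediate(message, field_map=V1_TO_V2):
    """Rename known v1 fields to their v2 names; pass others through."""
    return {field_map.get(key, key): value for key, value in message.items()}

v1_msg = {"acct": "A-100", "amt": 99.95, "currency": "USD"}
print(mediate(v1_msg))
```

An appliance doing this in the network path means neither the old client nor the new service has to change, which is where the time and cost savings come from.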

Currently, the challenges of providing security for customer-facing applications, especially with diverse clients, are solved with point products or hand-coded software, which adds time and cost, making the solution economically unattractive.

Sonoa ServiceNet employs a network router-like architecture that non-intrusively enforces security, interoperability and performance policies in real-time 'on the wire'. By handling the access control, interoperability and performance issues that occur at cloud scale, teams can ensure maximum performance and scalability in the environment, making it easy to create new products and add new users and services.

Sonoa ServiceNet is available as a network or virtual appliance.

ITIL's influence extends beyond IT operations to enhance SOA, portfolio management and change management

Listen to the podcast. Download the podcast. Listen on iTunes/iPod. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion.

Information Technology Infrastructure Library (ITIL) advances have helped IT departments recast themselves as mature and process-oriented. But the role and influence of ITIL, especially version 3, is extending well beyond IT organization and operations improvements to impact such essential endeavors as service oriented architecture (SOA), portfolio management and low-risk change management.

ITIL, in effect, is fostering cultural and behavioral change inside of IT departments, which also has a direct bearing on general business transformation and the ability of enterprises to innovate and compete writ large.

To help better understand the role and impact of ITIL on actual IT departments in a variety of use-case scenarios, I interviewed two ITIL practitioners, Sean McClean, principal at KatalystNow, and Hwee Ming Ng, solutions architect in HP's Consulting and Integration group. The discussion was recorded June 18 at HP's Software Universe event in Las Vegas.

This ITIL impact podcast comes as part of a series of discussions with HP executives from the HP Software Universe conference. See the full list here.

Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Listen on iTunes/iPod. Sponsor: Hewlett-Packard.

Monday, June 23, 2008

Interview: HP SOA Center director Tim Hall on new business drivers and efficiency benefits from SOA

Listen to the podcast. Download the podcast. Listen on iTunes/iPod. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion.

Service oriented architecture (SOA) is at a crossroads, moving from pilot to enterprise status for many companies. As the trend and economic landscapes shift, SOA's benefits and pay-offs are accelerating.

Green and energy-conscious companies are seeing SOA through the context of data center and applications modernization. Toss in a surge of interest in virtualization, ongoing methodological on-ramps to SOA and a budding fascination with cloud computing methods, and we're looking at the means to accommodate (at lower TCO) all the old and new of IT systems, platforms, frameworks, applications and delivery services.

That takes data center transformation, and not just adding more servers. We also need the means to manage the complexity, fragility, scale and cost. HP seems to see this clearly. The goal of data center transformation that goes hand in hand with SOA efficiency is clear, but how to get there is another matter.

To probe deeper into SOA's impact on enterprise IT and business transformation, we sat down with Tim Hall, director of HP's SOA Center products, for a discussion moderated by me, recorded June 18 at HP's Software Universe event in Las Vegas.

Listen to this SOA impact podcast, part of a series of discussions with HP executives from the HP Software Universe conference. See the full list here.

Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Listen on iTunes/iPod. Sponsor: Hewlett-Packard.

Saturday, June 21, 2008

Will ebizQ post my comment on proper blog link etiquette?

I wonder if this comment (below) that I left on Joe McKendrick's SOA blog on ebizQ will show up. I'll let you all know.

[UPDATE: They did, they did publish my comment, and did the requested customary link to my blog ... all's well with the web world. No harm done.]

"Thanks for the blog on my storage and SOA observations, Joe. And thanks for the link back to my SearchSOA Q&A.

However, it is customary in blogs to link back to an individual's blog when you reference them by name, though I notice that ebizQ is often stingy on this point. I've had to ask them several times now to do baseline linking.

So for ebizQ's edification, here are the links to use when referencing my name and analysis for the benefit of their readers:

--http://blogs.zdnet.com/Gardner/
--http://briefingsdirectblog.blogspot.com/
--http://www.briefingsdirect.com/
--http://briefingsdirect.blogspot.com/
--http://www.findtechblogs.com/soa/
--http://www.interarbor-solutions.com/home.html

Thanks!"

Friday, June 20, 2008

Interview: HP information management maven Rod Walker describes how BI empowers business leaders to innovate

Listen to the podcast. Download the podcast. Listen on iTunes/iPod. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion.

Business intelligence (BI) has been a top investment for corporations in the past several years, but BI's ability to generate value and guide strategic direction is merely in its adolescence.

In health care, customer retention, energy and oil management, and for global risk reduction and compliance, BI is offering some of the best payoffs from IT and datacenter investments, says Rod Walker, vice president for information management at HP's Consulting and Integration group.

In this podcast discussion, Walker joins me to explore how BI will continue to be one of the most effective ways for business leaders to leverage IT over the next decade. Proper information management -- including all content in all forms, and not just structured data -- provides powerful market analytics and customer and user behavior inferences to enable real-time decisions about core services, product offerings and go-to-market campaigns.

Listen to this BI business opportunity overview podcast recorded at HP's Software Universe event June 18, moderated by yours truly from Las Vegas. The Walker interview comes as part of a series of discussions with HP executives this week from the HP Software Universe conference. See the full list here.

Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Listen on iTunes/iPod. Sponsor: Hewlett-Packard.

Interview: Dan Rueckert of HP consulting digs into ITIL's role in accelerating SOA, IT service management

Listen to the podcast. Download the podcast. Listen on iTunes/iPod. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion.

More enterprise IT departments are working toward Information Technology Infrastructure Library (ITIL) principles and reference models for running their organizations. Yet ITIL can provide more benefits than initially meets the eye, including accelerating service oriented architecture (SOA) adoption, faster mean time to recovery in IT operations, and more effective change management.

Dan Rueckert, worldwide practice director for both the service management and security practices in HP's Consulting and Integration group, explains in an interview the direct and significant ancillary payoffs from ITIL adoption -- from establishing an IT service lifecycle to defining an overall IT service strategy.

Listen to this ITIL overview podcast recorded at HP's Software Universe event June 18, moderated by yours truly from Las Vegas. The Rueckert interview comes as part of a series of discussions with HP this week from the HP Software Universe conference. See the full list here.

Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Listen on iTunes/iPod. Sponsor: Hewlett-Packard.

Thursday, June 19, 2008

Interview: HP Software's David Gee on next generation data center trends and opportunities

Listen to the podcast. Download the podcast. Listen on iTunes/iPod. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion.

Enterprise CIOs face mounting challenges that are hard and getting harder. HP says it has a lifeline for these IT departments and leaders over the next five years by helping them dramatically cut the size of IT budgets relative to the enterprises' total revenue. This allows a shift in IT spending from operations to innovation via next generation data centers.

David Gee, vice president of marketing for HP Software, in a podcast interview from HP's Software Universe event this week, discusses the large global opportunity for enterprises and service providers to cut the relative size of IT budgets by investing in modern data centers that save energy, consolidate applications, leverage virtualization, and rely more on automation than manual upkeep processes.

Listen to this interview podcast, moderated by yours truly from Las Vegas, for more on HP's plans for next generation data centers that focus IT on the businesses' interests.

The Gee interview comes as part of a series of discussions with HP executives I'll be doing this week from the HP Software Universe conference. See the full list here.

Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Listen on iTunes/iPod. Sponsor: Hewlett-Packard.

Interview: HP's BTO chief Ben Horowitz on how application lifecycles and data center operations can find common ground

Listen to the podcast. Download the podcast. Listen on iTunes/iPod. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion.

There may be no greater "silos" in all of IT than the gulf between application development and data center operations. For the sake of enhancing both, however, common ground is needed -- and HP is putting together a path of greater collaboration, visibility, management and automation to engender "application lifecycle optimization" to better bind design time with runtime.

Ben Horowitz, vice president and general manager of HP's BTO software unit, and former CEO of HP's 2007 acquisition, Opsware, explains in a podcast interview from HP's Software Universe event this week how these hitherto distinct orbits of IT can finally coalesce.

Through managed requirements collaboration and the use of "contracts" between the designers, testers, business leaders and IT operators, application lifecycle optimization has arrived, says Horowitz. Bringing more input and visibility into applications design, test and refinement, in a managed fashion, allows applications to better meet business goals, while also providing the data center operators better means to host those applications efficiently with high availability, he says.

Listen to this interview podcast, moderated by yours truly from Las Vegas, for more on HP's plans for and philosophy on how BTO and next generation data centers come together.

The Horowitz interview comes as part of a series of discussions with HP executives I'll be doing this week from the HP Software Universe conference. See the full list here.

Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Listen on iTunes/iPod. Sponsor: Hewlett-Packard.

Wednesday, June 18, 2008

HP burnishes vision on how products support both applications and data center lifecycles

HP opened the second day of its Software Universe event in Las Vegas with "product day," but the presentations seemed more about process -- the processes that usher application definitions and development into real world use.

I've heard of applications lifecycle, sure, but the last few days I've heard more about data center lifecycle. So how do they come together? HP's vision is about finally allowing the operations and development stages of a full application lifecycle to more than co-exist -- to actually reinforce and refine each other.

Ben Horowitz, vice president and general manager of HP's BTO software unit, pointed out on stage at the Sands Expo Center that HP is number one in the global market for applications testing and requirements management for software development. And, of course, HP is strong in operations and systems management.

The desired synergies between these strengths need to begin very early, he said, in the requirements gathering and triage phases. Horowitz, the former CEO of HP's 2007 Opsware acquisition, also explained the fuller roles that business, security, operations and QA people will play in the design time into runtime progression. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

I guess we need to call this the lifecycle of IT because HP is increasingly allowing applications requirements and efficient and automated data center requirements to actually relate to each other. You can't build the best data center without knowing what the applications need and how they will behave. And you can't create the best applications that will perform and be adaptable over time without knowing a lot about the data centers that will support them. Yet that's just what IT is and has been doing.

Next on stage, Jonathan Rende, vice president of products for SOA, application security and quality management at HP, painted the picture of how HP's products and acquisitions over the past few years come together to support the IT lifecycle.

Application owners, project managers, business analysts, and QA, performance, and security teams -- all need to have input into applications requirements, design, test and deployment, said Rende. The HP products have been integrated and aligned to allow these teams to, in effect, do multi-level and simultaneous change management.

Remember the 3D chess on the original Star Trek? That's sort of what such multi-dimensional requirements input and visibility reminds me of. Social networking tools like wikis and micro blogging also come to mind.

Rende then described how change management and process standardization in the requirements, design, develop, test and refinement processes -- in waterfall or agile methods settings -- broadens applications lifecycle management into the business and operations domains.

By allowing lots of changes to occur from many parties and interests in the requirements phase, the IT lifecycle begins with requirements but extends into ongoing refinements for concerns about, for example, security and performance testing. Also, the business people can come in and request (and get!) changes and refinements later, and perhaps (someday) right on through the IT lifecycle.

I really like this vision. It takes what we used to think of as simultaneous design, code and test -- while building advanced test beds -- and extends the concurrency benefits broadly to include more teams, more impacts, more governance and risk reduction. Without the automation from the products, the complexity of managing all these inputs early and often would break down.

HP's products and processes are allowing more business inputs from more business interests into more of the IT lifecycle. The operations folks also get to take a look and offer input on best approaches on how the applications/services will behave in runtime, and throughout the real IT lifecycle.

Because there are also portfolio management benefits applied early in the process, the decision on when to launch an application boils down to a "contract" between those parties affected by the applications in production, said Rende. This allows an acceptance of risk and responsibility, and pushes teams to look on development and deployment as integrated, not sequential.

Horowitz further explained how HP's announcements this week around advanced change management and a tighter alignment with such virtualization environments as VMware will allow better and deeper feedback, refinement and efficiency across the IT lifecycle.

This "IT lifecycle" story is not yet complete, but it's come a long way quite quickly. HP is definitely raising the bar and defining the right vision for how IT in toto needs to mature and advance, to allow enterprises to do more, better, for less.

IONA develops beefed-up capabilities for financial services, STP automation

Financial institutions face competitive pressure to automate processes and move toward straight-through processing (STP), while ensuring compliance with multiple messaging standards. IONA Technologies today released a set of enhancements to Artix Data Services designed to reduce risk exposure and operational costs.

The latest release from the Dublin, Ireland-based IONA includes a comprehensive implementation of SWIFTNet MT Standards Release 2008, a new free online validation service, and new offerings for over-the-counter (OTC) derivatives processing and payments STP. [Disclosure: IONA is a sponsor of BriefingsDirect podcasts.]

Based on Artix Data Services, IONA’s open and standards-based development tool for building model-driven data services, the IONA Artix Data Services Standards Libraries include support for over 100 financial messaging standards implementations across 22 standards bodies, offering customers rich, out-of-the-box support for financial messaging data services requirements.

Enhancements within the latest release include additional SWIFTNet MX standards for proxy voting and cash reporting, additions of payments standards for STEP2 and TARGET2, and the addition of Depository Trust and Clearing Corporation (DTCC) Fund/SERV and MDDL. These standards libraries enable institutions to rapidly implement and incrementally deploy reusable financial messaging data services.

The standards library includes support for over 240 MT message types, 2,000 validation rules and 28,000 test cases. The new free IONA Validation Service for SWIFT provides an easy testing solution to verify compliance with SWIFT SR2008 syntax and semantics, allowing customers to model, test and deploy compliant messages today, well in advance of the November 15, 2008 SWIFT-mandated deadline.
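To give a flavor of what syntax validation against SWIFT MT rules involves, here is a toy sketch. The two field patterns are drastically simplified stand-ins for the real SR2008 rules, and the function and rule-table names are invented, not IONA's API:

```python
import re

# Toy subset of SWIFT MT syntax rules: field tag -> regex for its contents.
# Both patterns are simplified illustrations, not the actual SR2008 spec.
MT_FIELD_RULES = {
    "20": re.compile(r"^[A-Za-z0-9/\-?:().,'+ ]{1,16}$"),   # Transaction Reference
    "32A": re.compile(r"^\d{6}[A-Z]{3}\d{1,15},\d{0,2}$"),  # Value Date/Currency/Amount
}

def validate_mt_field(tag, value):
    """Return True if the field value matches the (simplified) rule for its tag."""
    rule = MT_FIELD_RULES.get(tag)
    if rule is None:
        raise KeyError(f"no rule for tag {tag}")
    return bool(rule.match(value))
```

A real validation service layers thousands of such field, format and cross-field semantic rules, which is why out-of-the-box standards libraries are attractive; for example, `validate_mt_field("32A", "080627EUR1000,00")` passes while a value missing its date component fails.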

For automating OTC derivatives processes, IONA provides financial messaging data services tools with extensive support for FpML, DTCC Deriv/SERV TIW, SwapsWire and SWIFTNet FpML. IONA’s Payments Modernization solution offering provides support for SEPA (Single Euro Payments Area), ISO 20022, SWIFTNet FIN, EBA STEP2 XCT and ECB TARGET2 payments standards.

More information on IONA’s Artix Data Services Financial Standards Libraries is available at the IONA Web site.

VMware and HP align products to bring greater management and insight to virtualized environments

Hewlett-Packard and VMware today announced a deeper product collaboration going forward, offering to enterprises and service providers a single management and control approach to both physical and virtual software infrastructure stacks.

Through the partnership, announced at the HP Software Universe event in Las Vegas, HP's Business Service Management products -- including HP Business Availability Center, HP Operations Center and HP Network Management Center -- will help automate the management of the VMware virtualization platform.

Both the HP Discovery and Dependency Mapping products and the HP Universal CMDB configuration data management suite will aid in discovery of virtualized environments for improved tracking and reporting of changes in VMware virtualized environments.

And HP's Business Service Automation capabilities -- including HP Server Automation Center, HP Client Automation and HP Operations Orchestration -- will assist in the oversight and operational upkeep of services running in VMware-supported virtual infrastructure instances.

HP and VMware did not unveil any financial partnership news, but the two certainly seem chummy these days. HP clearly sees a huge market opportunity for helping to manage the complexity of virtualized platforms, given the need for enterprises to cut total costs through higher hardware utilization and the ability to dynamically and automatically match compute power supply with application and storage demand.

The two companies did outline bundling and packaging of their products, in that new software bundles will combine VMware's Infrastructure 3 software suite with the HP Insight Control Environment for additional automation benefits. The goal, said the companies, is to provide a "comprehensive and seamless physical and virtual platform management" capability set.

“We’re expanding our relationship with VMware to jointly develop solutions that provide customers with comprehensive management of virtualized business applications running on the VMware platform,” said Ben Horowitz, vice president and general manager, of HP's Business Technology Optimization software, in a release.

I was just having breakfast yesterday with two systems architects from Seattle, who said they were exploring virtualization, including both VMware and Xen hypervisors. They liked the potential benefits but were put off by the complexity of setting the stuff up and maintaining it. Their choices, they said, pretty much boiled down to consulting help or more automation in the software.

Yes, says HP, to both. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Tuesday, June 17, 2008

CIOs need an efficiency mastery lifeline, say HP Software Universe keynote speakers

Savvy acquisitions and a perfect storm of trends and economics have given HP's software unit a prominence few would have predicted five years ago. But today HP Software has a big fat data center transformation opportunity staring it in the face.

This isn't your father's ink and PC business. As HP can leverage the open source and Microsoft ecologies, play second source to IBM in many markets (and bigger player in quite a few) and double its services reach with EDS, the enterprise software share of wallet landscape globally is facing disruption and opportunity.

The big question facing HP now is how it makes that disruption work for it more than it works against it. Today, in keynote presentations at the opening of the HP Software Universe conference at the Sands Expo in Las Vegas, part of the answer became clearer. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Tom Hogan, senior vice president of HP Software, described the challenges facing CIOs as very hard and getting harder, with the need to adjust to growing risks as well as new opportunities such as mashups, social networks and Web 2.0.

HP says it has a lifeline for the IT departments and leaders over the next five years. The goal is to cut the relative size of IT budgets to revenue for HP's customers. Some of those savings need to shift from operations to innovation, said Hogan. See news from the conferences.

The IT and business growth winners over the next five years will be those that master efficiency while increasing innovation, he said. HP is spending on R&D and mergers and acquisitions to allow its customers to progress.

The focus on information management is a next large initiative for HP, an area where IBM has been aggressive in acquisitions and market-focused solutions. I guess we can expect R&D and M&A there this year from HP. Maybe an open source database makes sense? Makes sense to me.

Quality, risk, speed, cost, insight, alignment -- these are the areas that HP will provide means for improvement for its customers, said Hogan. The R&D spending at HP is approaching $3 billion per year to help address these improvements.

HP Chairman and CEO Mark Hurd, in taking questions from the audience, said that IT processes must be automated and standardized, and that software and services must come together to foster far greater efficiency.

Big trends to keep an eye on today include virtualization and cloud computing, sure, but the explosion of information needs to be managed. The ability to gain intelligence from all the data has never been more important, said Hurd and Hogan. They re-affirmed the role of information management for HP and its customers.

Data warehousing is too costly and needs innovation, said Hurd. It's time to integrate the islands of analytics, but there is too little enterprise-wide BI. Better real-time analytics from larger data sets is the answer, said Hurd. "The key is to get the processes and models right ... We'll take leadership in this market," said Hurd.

In again addressing the pending EDS merger, Hurd said alliances will be enhanced when the ecologies surrounding both companies seek to find ways to leverage each other.

Users sought to get better software support, and Hogan promised improvement via hiring and a better self-help support and portal capability set.

Hurd wants HP to be the best software company at management, including data, systems, processes, services, and integrations, he said. Uber management that reacts in real time and reaches out to all the assets and devices in the enterprise is the major HP software requirement, he said.

In effect, Hurd is modeling how HP operates and what he wants as a CEO to become the roadmap for what HP brings to its customers.

"We transformed 85 data centers to six ... and we found holes (in our go-to-market strategy)," said Hurd. "We understand the way the consumer uses data ... and it will directly affect how the enterprise has to deal with its infrastructure. We don't think of the consumer market and enterprise market as separate, we see it as a continuum," said Hurd.

HP has its sights set on swift yet cost-reduction-intense data center transformation projects

Here at the opening keynote addresses for the HP Technology Forum event at the Mandalay Bay resort in Las Vegas, it's a bigger crowd than I was expecting. Lots of old Unix shops still supporting legacy and mission-critical apps and platforms, but on increasingly commoditized hardware ... that increasingly needs better management.

Toss in a surge of interest in virtualization, ongoing ramp-ups to Service Oriented Architecture (SOA) and a budding fascination with cloud computing methods, and we're looking at the means to accommodate (at lower TCO) all the old and new of IT systems, platforms, frameworks, applications and delivery services. That takes data center transformation, not just adding more servers. We also need the means to manage the complexity, fragility, scale and cost.

HP seems to see this clearly. The goal of data center transformation is clear, but how to get there is another matter.

Opening up the presentations today, Randy Mott, Executive Vice President and CIO, said there are more IT professionals worldwide than ever, and the number is still growing fast. Unfortunately, the lion's share of these folks are supporting older systems, not working on innovation.

IT spend as a percentage of revenue is too high, he said. And there are so many worthwhile IT objectives that need to be done at the same time. "There's no easy button here," said Mott. You can do anything, but you can't do everything, he told the IT executives and practitioners in the audience. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

HP has gone through an internal IT transformation since 2005, and learned a few lessons, he said. The goal was to cut the percentage of corporate revenue devoted to IT from 4 percent to 2 percent while still delivering better performance. That takes upfront investment, but the investment can be recovered quickly, he said.

To do so you need to cut costs and change the game, said Mott. HP built six massive new data centers over three years that indeed changed the game for HP, with 60 percent less electricity use. And those data centers replace 85 previous facilities to serve 172,000 employees.

To "change the game" also requires better portfolio management, singular data views and marts, and change management.

The effective transition also requires "transformational" change over "incremental" change, said Mott. These projects need to happen in less than three years and have a significant cost-reduction payback.

Next on stage was Ann Livermore, Executive Vice President, HP Technology Solutions Group, saying that CEOs and CIOs are not confident that current data centers will support their businesses past 2010. This is due to information explosion, more demands placed on IT, and reliance on aging infrastructure, she said.

HP now has the number one revenue share of servers spend worldwide, recently beating IBM in that role, said Livermore. One out of every three servers delivered around the world is an HP server, she said.

HP is the sixth largest software company in the world, she added, and the intent to buy EDS will push HP services into a much larger role and set of capabilities.

Transforming the data center builds on HP's core solutions of servers/storage, services, and software, she said. In the future all the IT assets will operate as a single virtual infrastructure -- in effect, the next-generation data center, she said.

Livermore also highlighted two announcements this week, the HP Integrity NonStop NB50000c BladeSystem and the BTO Software for Change Management Automation news.

What about HP-UX, asked some conference goers? Livermore said HP is committed to HP Unix and the Integrity architecture. "We intend to play and play aggressively" on HP-UX, she said.

Won't virtualization cut into hardware sales? asked another. Disruption is good for HP, said Livermore, because the management of a combined virtualized environment is the growth opportunity of the future, even as blades become the hardware staple.

How to compete against IBM? The EDS merger will allow HP to gain the services staff expertise, scale and outsourcing capabilities to meet the building demand for solutions.

Why EDS? Mark Hurd, HP Chairman and CEO, came on stage to answer. EDS will almost double the services market coverage HP can enjoy, as well as bring more services and automate the EDS services portfolio with HP software, said Hurd. He said he's excited about aligning the EDS and HP competencies.

Hurd said, however, that he's still not sure when the EDS deal will close.

Next up, Paul Otellini, Intel president and CEO, came on stage to say how the Intel Itanium architecture momentum is building over Sparc and PowerPC. Itanium is the fastest growing server chip platform globally, says IDC, said Otellini.

Intel and HP also work together on the total cost of servers, of which maintenance, not acquisition, is by far the higher cost. Intel and HP plan to bring even more powerful blades to market, with high-density, low-power and low-cost computing, said Otellini.

Hurd said the partnership between HP and Intel has never been deeper and wider. He specifically endorsed the HP commitment to Itanium family of processors.

HP marries change management and problem isolation functions into an automated data center efficiency partnership

Borrowing heavily from its Opsware, Mercury and Bristol acquisitions, HP on Tuesday at the Software Universe conference announced products and services designed to automate and coordinate two thorny aspects of large-scale IT operations: change lifecycle management, and problem isolation and resolution.

Once mundane and esoteric aspects of running monolithic data centers, managing change across webs of servers and components has been elevated to a top priority by today's scale, complexity and far-flung fragmentation from services orientation. What we're really getting at here is making IT perform like a mature, refined and managed business function, not a near-sighted firefighting brigade.

In many cases, operators and IT executives are forbidding requested changes to services and applications for fear that the changes will stir up hard-to-locate and tough-to-remedy glitches. Such unintended consequences can be scattered across thousands of distributed servers and IT network devices. It's hard to enjoy the fruits of service-oriented architecture (SOA) investments for business agility when the IT infrastructure is too brittle to accept application-level change readily.

So to keep "brittle environments" from further hamstringing agility advances -- particularly as SOA, virtualization and cloud computing come into play -- HP's Business Technology Optimization (BTO) and research teams have assembled what amounts to IT change confidence enhancement tools. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

A recent study by The Economist research unit showed that some 50 percent of service outages in data centers were due to a change being made to a service. These problems were then hard to isolate, given that no one knew how the change impacted the distributed systems. From there, some 68 percent of companies responding said the application issues were being tackled manually.

Obviously HP sees a huge opportunity here for making modern data centers behave more like assets and less like liabilities, at least in the eyes of business managers as they seek process and strategic initiatives changes.

So not only does the confidence to change services and processes freely amid SOA-supported processes need a boost, the ability to manage change requires automated lifecycle depth and breadth. Uncoordinated and manual attempts to manage change amid widespread complexity can actually make the problems and their resolution harder. Next generation data centers require an end-to-end services deployment and change management capability that maps to service workflow orchestration, services management and datacenter automation activities.

So the HP BTO and adaptive infrastructure engineers have designed HP Release Control 4.0, which identifies "change collisions" and creates a managed services approach to change. The solution not only manages technology, it manages the people managing the IT via providing advance visibility into change impacts and organizing teams so changes are coordinated.
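To make the "change collisions" idea concrete, here's a minimal sketch of how a scheduler might flag two requested changes that touch the same configuration item in overlapping maintenance windows. The class and field names are hypothetical illustrations, not Release Control's actual data model:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ChangeRequest:
    change_id: str
    config_item: str        # the server, service or other CI the change touches
    start: datetime         # scheduled window start
    end: datetime           # scheduled window end

def find_collisions(changes):
    """Return pairs of change IDs that hit the same CI in overlapping windows."""
    collisions = []
    for i, a in enumerate(changes):
        for b in changes[i + 1:]:
            same_ci = a.config_item == b.config_item
            overlap = a.start < b.end and b.start < a.end
            if same_ci and overlap:
                collisions.append((a.change_id, b.change_id))
    return collisions
```

Surfacing such pairs in advance is what gives teams the visibility to re-sequence work before the conflicting changes ever reach production.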

The value does not end at the proactive stage of change management, but provides the tools to identify change-related issues over the lifetime of the services, said Sharmila Shahani, chief marketing officer of HP Software. "This provides a proactive, real-time and automated way to manage the change lifecycle," said Shahani, in an interview.

Additionally, HP has announced Business Availability Center (BAC) 7.5 for improved problem management in complex datacenter environments. Using new technology from HP Labs, the product helps isolate runtime problems before disruptions by allowing fast and visual "drilling down" into operations data regardless of scale and complexity.

Other new product releases here at Software Universe this week include: HP Client Automation Center 7.2, HP Storage Essentials 6.0, and HP Service Automation Reporter 7.0.

What's interesting to me is that HP is using the change management-focused Release Control functions in association with the BAC problem resolution functions, getting into a data center dance of efficiency. As the change piece and the problem identification piece are used in unison, a "closed loop" approach to datacenter performance amid constant change becomes possible, said Shahani.

I did a podcast interview just yesterday at the HP Technology Forum event in Las Vegas on these issues with Duncan Campbell, HP's Adaptive Infrastructure program leader. Have a listen. More podcasts from Software Universe are here.

Further burnishing the datacenter efficiency shine, HP has also updated its configuration management database (CMDB) system to embrace federation and ITIL v3 principles. Universal Configuration Management System (CMS) 7.5 allows many versions of configuration data from many sources to be used in unison for improved visibility and access into what's going on in as many of the systems as possible in near real-time.

HP's latest CMDB does not force all the config data into a common CMDB, but rather uses connectors to other CMDBs for true federation on a meta data level, said Shahani, to provide a hub and consolidated view of all components within a large distributed system. Universal CMS 7.5 arrives this month.
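The connector-based federation idea can be sketched in miniature: a hub that leaves configuration data in the source CMDBs and merges it on demand into a consolidated view, tagging each attribute with its source. Class and connector names here are hypothetical, not HP's API:

```python
class DictConnector:
    """Stand-in for a real CMDB connector, backed by a plain dict."""
    def __init__(self, name, data):
        self.name, self.data = name, data

    def get(self, ci_id):
        return self.data.get(ci_id)

class FederatedCMDB:
    """Hub that queries attached connectors rather than copying their data."""
    def __init__(self):
        self.connectors = []

    def attach(self, connector):
        self.connectors.append(connector)

    def find_ci(self, ci_id):
        # Consolidated view of one configuration item: merge attributes from
        # every source; on conflicts, the first-attached source wins.
        merged = {}
        for connector in self.connectors:
            record = connector.get(ci_id)
            if record:
                for key, value in record.items():
                    merged.setdefault(key, (value, connector.name))
        return merged
```

The point of the metadata-level hub is exactly this: one consolidated answer per component without forcing every source to surrender its data to a single store.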

HP is targeting the aggregated view of all systems elements value from the new CMDB at the burgeoning use of virtualization across datacenters. Virtualization promises utilization efficiencies and automated provisioning of services and applications support, but it also adds complexity as support infrastructure and application instances can pop in and out of use (existence?).

What I especially like about these new products is that they can increasingly be used in association with SOA governance and SLAs to begin to get to a true services lifecycle approach, and its value and benefits. Used in association with HP SOA Center (including the Systinet repository), architects can integrate design and governance demands with change management, problem management and federated systems config data for a whole significantly larger than the sum of the parts.

It's just these kinds of complete services management capabilities, increasingly automated, that will make SOA pay big dividends and pave the way for use of private cloud compute environs for enterprises and service providers alike.

Interview: HP's Duncan Campbell on energy efficiency and automation in next generation data centers

Listen to the podcast. Download the podcast. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion.

Enterprises are now energized to save energy, and HP's Adaptive Infrastructure program leader, Duncan Campbell, believes the path to automation and efficiency -- plus the need for modernization and consolidation -- presents a "perfect storm" for next generation data center architecture adoption.

I had a chance to interview Campbell yesterday at the Technology Forum event in Las Vegas after HP's NonStop Blade servers announcement. I asked him how the simultaneous factors of hardware improvements, virtualization, improved change management, and IT service management -- not to mention SOA and cloud computing -- can come together without overwhelming IT leaders and operators.

Listen to the podcast for more on HP's plans and philosophy on what the next generation data center and adaptive infrastructure approaches will mean for lowering costs while also improving scale and response.

Incidentally, this is the first in a series of HP executive interviews as podcasts I'll be doing this week from the HP Software Universe conference. See the full list here.

Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Sponsor: Hewlett-Packard.

Disparate HP user communities unite under Connect banner at HP Technology Forum event

HP is an amalgamation of companies, products and technologies, and its user groups have had a similar legacy. Until today, that is.

Three major HP-focused user groups, with legacies reaching as far back as the Digital Equipment Corp. (DEC) and Tandem Computers days, have banded together to ride the power of social networking to provide a unified and more powerful voice to 50,000 global users managing and maintaining old and new HP products and systems.

The new group, called Connect, will allow its users to share knowledge and contacts while providing a strong customer advocacy voice to HP, said Nina Buik, president of the new non-profit Connect and a prolific blogger. She's also senior vice president at MindIQ, an Atlanta-based technology training company.

By officially banding together today, the former Encompass (once DECUS), HP-Interex EMEA and ITUG communities can gain more power and influence together while still remaining independent of HP.

"There's just more power in numbers; you can get more done," said Buik.

Connect made a splash at the HP Technology Forum event, which began Monday in Las Vegas. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.] Users, members and observers toasted the advent of the group at a food and libations fest at the Mandalay Bay resort.

The Connect community reflects users of all of HP's portfolio, which covers a lot of ground from DEC's PDP apps still running in emulation in surprising numbers to the VMS and OpenVMS of old to the latest NonStop, BTO and SOA Center product suites. The unified community is at the outset strongest in the U.S. and EMEA, but will seek more presence in Asia/Pacific and Japan later this year, said Buik.

Connect will hold its next major user event Nov. 10-12 in Mannheim, Germany.

Hey, while we're at it integrating communities -- just as we're integrating products and technologies -- why not go for some user community federation as well? The HP Software community Vivit, for example, or perhaps some open source communities would make sense to work in tandem with Connect. The large and growing VMware community also has obvious synergies with Connect.

Furthermore, Connect is leveraging the social media and networks trend by creating what amounts to a LinkedIn or Facebook for HP users on its site. Users can create a profile that describes their HP product sets, which then heightens their ability to reach out to other similar users and create their own social user groups and relationships. There are blogs and wikis, too. If it works for social activities, it works for business activities.

HP is hoping to tap the Connect community for its own market research, a massive feedback loop and perpetual focus group on the wants and demands of HP users. The power of the pen, folks -- it's even more powerful when joined with social networking functions and viral community reach.

Monday, June 16, 2008

'Instant replay' helps software developers fast-forward to application problem areas

Fixing software bugs is often easier than finding them. Stepping up to the plate to address this problem is Replay Solutions, which today announced general availability of ReplayDIRECTOR for Java EE, a TiVo-like product that allows instant replays of applications and servers at any stage of the application lifecycle.

ReplayDIRECTOR, which was released in beta by the Redwood City, Calif. company in March, makes deep recordings of applications and servers -- notably non-deterministic inputs and events that affect the application. Engineers can then fast forward directly to the root cause of the problem.

The idea behind the technology is that it allows companies to drill down into source code quickly, eliminating unnecessary IT costs and time spent searching for issues that can't be replicated or easily detected. The software is designed to cut through the complexity that IT departments face with shorter release cycles, multi-tier applications, and dispersed development teams.

According to Replay Solutions, every line of code that an application executes while ReplayDIRECTOR is recording will be re-executed in precisely the same sequence during playback. No source code changes are required and recordings can be played anywhere, without requiring the original environment, inputs, databases, or other servers, all of which are virtualized during replay.
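The general record-and-replay technique -- capture every non-deterministic input once, then feed the exact same values back during re-execution -- can be sketched like this. It's a toy illustration of the principle only, not ReplayDIRECTOR's actual mechanism (which, notably, requires no source changes):

```python
import random
import time

class Recorder:
    """Record non-deterministic inputs on a first run; replay them verbatim later."""
    def __init__(self, log=None):
        self.replaying = log is not None
        self.log = log if self.replaying else []
        self.cursor = 0

    def capture(self, fn):
        if self.replaying:
            value = self.log[self.cursor]   # re-execute with the recorded value
            self.cursor += 1
        else:
            value = fn()                    # live call; remember its result
            self.log.append(value)
        return value

def app_step(rec):
    # Two classic sources of non-determinism: the clock and randomness.
    stamp = rec.capture(time.time)
    roll = rec.capture(lambda: random.randint(1, 6))
    return (stamp, roll)
```

Because every non-deterministic value is funneled through the recorder, a later run fed the same log takes exactly the same path -- which is what lets an engineer fast-forward to the failing step instead of hoping to reproduce it.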

As virtualization becomes more common, these replay approaches may be necessary as instances of apps and runtimes may come and go based on automated demand-response provisioning. These left-over breadcrumbs of what once happened in a virtualization container will be quite valuable for preventing recurrences.

I'm sure innovative developers and testers will come up with other interesting uses, especially as apps and services become supported in more places, inside and outside of enterprises. Got compliance?

Designed to deploy anywhere with minimal effect on its environment, ReplayDIRECTOR allows applications to run at near full speed while recording and faster than full speed during re-execution. With its minimal performance impact, it can run in production environments as an "always on" solution.

ReplayDIRECTOR for Java EE is available now. You can find more information at the company's Web site.

Saturday, June 14, 2008

Kapow takes a jab at challenge of creating mashups from JavaScript and AJAX sites

Kapow Technologies, whose solutions help companies assemble mashups by harvesting and managing data from across the Web, has enhanced its approach to overcome the obstacle many businesses encounter when targeting sources with dynamic JavaScript and AJAX.

The Palo Alto, Calif. company's Kapow Mashup Server 6.4, which it unveiled this week, features extended JavaScript handling, a response to the burgeoning number of AJAX-based Web sites. [Disclosure: Kapow Technologies is a sponsor of BriefingsDirect podcasts.]

The Web 2.0 Edition, one of four editions of the new Mashup Server, now includes support for Web Application Description Language (WADL), making it easier for applications and mashup-building tools to discover and consume REST services. The WADL support also helps developers leverage the Kapow Excel Connector, an Excel plug-in provided by StrikeIron.
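As a rough illustration of why WADL helps tooling, here's a minimal sketch that walks a WADL document and lists the REST operations it describes. The sample document and URL are invented, and real WADL namespaces vary by draft version, so the parse deliberately ignores namespaces:

```python
import xml.etree.ElementTree as ET

# Invented sample; real WADL documents use various namespace URIs by draft.
WADL_SAMPLE = """<application xmlns="http://wadl.dev.java.net/2009/02">
  <resources base="http://example.com/api/">
    <resource path="quotes">
      <method name="GET" id="getQuotes"/>
    </resource>
  </resources>
</application>"""

def list_rest_operations(wadl_xml):
    """Walk a WADL document and return (HTTP method, full URL) pairs."""
    root = ET.fromstring(wadl_xml)

    def local(tag):                          # strip the XML namespace prefix
        return tag.rsplit('}', 1)[-1]

    ops = []
    for resources in root.iter():
        if local(resources.tag) != "resources":
            continue
        base = resources.get("base", "")
        for resource in resources:
            if local(resource.tag) != "resource":
                continue
            path = resource.get("path", "")
            for method in resource:
                if local(method.tag) == "method":
                    ops.append((method.get("name"), base + path))
    return ops
```

A machine-readable description like this is what lets mashup builders discover and bind to REST services automatically instead of reading prose API docs.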

The Portal Content Edition, which enables companies to refurbish existing portal assets, has several enhancements to the Web clipping technology for development and deployment of JSR-168 standards-based portlets. It now provides the ability to make on-the-fly changes to clipping portlets that enhance portal functionality, and adds a portlet deployment mechanism for major portal platforms such as IBM WebSphere, Oracle Portal and BEA WebLogic.

Last January, I did a podcast with Stefan Andreasen, founder and CTO of Kapow. Andreasen described the mashup landscape. You can listen to the podcast here or read the full transcript here. I also blogged last April about Kapow's Web-to-spreadsheet service. At that time, I said:

Despite a huge and growing amount of “webby” online data and content, capturing and defining that data and then making it available to users and processes has proven difficult, due to differing formats and data structures. The usual recourse is manual intervention, and oftentimes cut-and-paste chores. IT departments are not too keen on such chores.

But Kapow’s OnDemand approach provides access to the underlying data sources and services to be mashed up and uses a Robot Designer to construct custom Web harvesting feeds and services in a flexible role-based execution runtime. Additionally, associated tools allow for monitoring and managing a portfolio of services and feeds, all as a service.
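As a miniature of the harvesting idea -- turning semi-structured page markup into structured rows a feed or service could carry -- here's a toy table scraper built on the Python standard library. It illustrates the concept only; it is not Kapow's Robot Designer:

```python
from html.parser import HTMLParser

class TableHarvester(HTMLParser):
    """Collect the cells of every <tr> on a page into a list of rows --
    the kind of structured feed a harvesting robot would emit."""
    def __init__(self):
        super().__init__()
        self.rows, self.row, self.in_cell = [], None, False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self.row = []
        elif tag in ("td", "th"):
            self.in_cell = True

    def handle_endtag(self, tag):
        if tag == "tr" and self.row is not None:
            self.rows.append(self.row)
            self.row = None
        elif tag in ("td", "th"):
            self.in_cell = False

    def handle_data(self, data):
        if self.in_cell and self.row is not None:
            self.row.append(data.strip())

def harvest_tables(html):
    """Return every table row on the page as a list of cell strings."""
    parser = TableHarvester()
    parser.feed(html)
    return parser.rows
```

The hard part Kapow addresses -- and the reason cut-and-paste persists -- is everything beyond this toy: authentication, pagination, JavaScript-rendered content and format drift across thousands of sources.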

In addition to the Web 2.0 Edition and the Portal Content Edition, the Kapow Mashup Server is also available in the Data Collection Edition and the OnDemand Edition.

All editions are available now. More information can be found on the Kapow Web site. Product pricing is based on a flexible subscription offering.

SOA Software, iTKO team up to offer SOA lifecycle management and QA

SOA Software and iTKO have teamed up to offer enterprises continuous management and quality assurance across the entire lifecycle of service-oriented architecture (SOA) applications.

The new offering incorporates the LISA Testing, Validation, and Virtualization Suite from Dallas, Tex.-based iTKO and Policy Manager and Service Manager from Los Angeles-based SOA Software. The two companies say the combined solution will provide protection across the entire design, development, and change lifecycle.

Among the benefits of the combined solution are:
  • Continuous compliance and quality automation from concept to production support for SOA, with LISA validation natively executed as part of the workflows within SOA Software Policy Manager.

  • Visibility into SOA policy compliance levels, with all tests, test results, endpoint data, and models viewed in a single repository.

  • An increase in the types of SOA policy that can be modeled and validated, ensuring reliable service level outcomes.

  • Service virtualization of endpoints, locations and binding properties from SOA Software combined with simulation of service behaviors and data from iTKO.

  • Enhanced runtime validation of live SOA applications for both functional and performance purposes.
The joint solution is designed to meet the needs of enterprises seeking to manage complex, heterogeneous service assets to ensure that business requirements are met, while mitigating the risk of inevitable change in underlying systems such as enterprise service bus (ESB)/messaging, databases, mainframes and other custom and legacy applications.

I took a briefing recently on LISA and was really impressed with the approach and value. It's worth a look if you're not familiar with iTKO.

Etelos puts more 'sass' into SaaS with four additional hosted Web 2.0 offerings

Etelos, Inc. has beefed up its software-as-a-service (SaaS) offerings with the addition of four Web 2.0 stalwarts to its Etelos Marketplace. Users can now take advantage of WordPress, SugarCRM, MediaWiki, and phpBB as hosted solutions from the San Mateo, Calif. company.

The new additions are designed to help enterprises, small businesses, bloggers, and individual users connect with customers and other online communities on an on-demand basis. Users can set up a blog or a wiki with nothing more than a browser and Internet access. Technical details are handled by Etelos.

Founded in 1999, Etelos has been a go-to place for open-source developers eager to get their apps into the marketplace without having to go into the software distribution business. It also provides one-stop shopping for businesses looking for those apps, offering common user management, billing, support, and security.

6th Sense Analytics adds new features for collecting development productivity metrics

6th Sense Analytics, which collects and provides metrics on software development projects, this week announced several enhancements to its flagship product. These enhancements provide a more user-friendly interface and organize reports into workspaces that more closely align with the way each user works.

The Morrisville, N.C. company targets its products at companies that want to manage outsourced software development. It automatically collects and analyzes unbiased activity-based data through the entire software development lifecycle. [Disclosure: 6th Sense has been a sponsor of BriefingsDirect podcasts.]

Among the enhancements to the product are:
  • Reports can now be scheduled for daily, weekly or monthly delivery by email, reducing the number of steps required to access reports, providing easier integration into customer work routines.

  • Users can now select specific reports, giving them the ability to see only the information pertinent to their needs.

  • The registration process has been streamlined. After a new user is invited to a team, the user’s account is immediately activated and the user is sent a welcome email that provides details for getting started, including instructions for desktop installation. Removing users has also been simplified.

  • Reports are now relevant to any time zone for customers working with resources across a country and on multiple continents.
I've been following 6th Sense Analytics since they first emerged on the scene. Last year, I had a podcast with Greg Burnell, chairman, co-founder and CEO, as he explained the need for metrics in outsourced projects. You can listen to the podcast here and read a complete transcript here.

Last August, I reported on the first metrics that 6th Sense Analytics had released to the public. Those findings confirmed things that people already knew, and provided some unexpected insights. I saw a real value in the data:

And these are not survey results. They are usage data aggregated from some 500 active developers over the past several weeks, and therefore make a better reference point than “voluntary” surveys. These are actual observations of what the developers actually did — not what they said they did, or tried to remember doing (if they decided to participate at all). So, the results are empirical for the sample, even if the sample itself may not yet offer general representation.

Friday, June 13, 2008

OpenSpan to ease client/server modernization by ushering apps from desktop to Web server

Promising lower costs and greater control, OpenSpan, Inc., this week unveiled its OpenSpan Platform Enterprise Edition 4.0, which will allow organizations to move both legacy and desktop applications off the desktop and onto the server. This will allow them to be integrated with each other or rich Internet applications (RIAs) and expressed as Web services.

Key to the new offering is the company's Virtual Broker technology to enable the movement of the applications, allowing companies to rapidly consume Web services within legacy applications or business process automations that span applications. Companies can also expose selective portions of applications over the Web.

According to OpenSpan, of Alpharetta, Ga., the benefits of its approach include lower costs, because companies will have to license fewer copies of software, and greater IT control over end-user computing, thanks to centralized application management on the server.

Moving applications off the desktop and onto the server means that companies no longer have to install and expensively maintain discrete copies of each application on every desktop. Users access only the application portion they need. This has the added benefit of reducing desktop complexity.

Yep, there are still plenty of companies and apps out there making their journey from the 1980s to the 1990s -- hey, better late than never. If you have any DOS apps still running, however, I have some land in Florida to sell you.

OpenSpan made a splash last year, when it announced its OpenSpan Studio, which allowed companies to integrate siloed apps. At the time, I explained that process:

How OpenSpan works is that it identifies the objects that interact with the operating system in any program — whether a Windows app, a Web page, a Java application, or a legacy green screen program — exposes those objects and normalizes them, effectively breaking down the walls between applications.

The OpenSpan Studio provides a graphical interface in which users can view programs, interrogate applications and expose the underlying objects. Once the objects are exposed, users can build automations between and among the various programs and apply logic to control the results.
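The normalization idea behind that approach can be sketched with a simple adapter pattern: wildly different application objects are wrapped behind one common interface so an automation can move data between them without caring where each lives. This is a generic illustration under my own assumptions, not OpenSpan's actual API; every class and method name here is hypothetical.

```python
class NormalizedObject:
    """Common surface every exposed object presents to an automation."""
    def read(self):
        raise NotImplementedError
    def write(self, value):
        raise NotImplementedError

class WinFormField(NormalizedObject):
    """Stand-in for a text field in a Windows desktop application."""
    def __init__(self):
        self._text = ""
    def read(self):
        return self._text
    def write(self, value):
        self._text = value

class GreenScreenField(NormalizedObject):
    """Stand-in for a fixed-width field in a legacy terminal screen buffer."""
    def __init__(self, width):
        self._width = width
        self._buf = " " * width
    def read(self):
        return self._buf.rstrip()
    def write(self, value):
        self._buf = value[:self._width].ljust(self._width)

def copy_between(src, dst):
    """An 'automation' that moves data across application boundaries."""
    dst.write(src.read())

form = WinFormField()
host = GreenScreenField(width=8)
form.write("ACCT-42")
copy_between(form, host)
print(host.read())  # -> ACCT-42
```

Once objects are normalized this way, the same automation logic works regardless of whether the endpoints are a Windows app, a web page, or a green-screen program, which is the "breaking down the walls" effect described above.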

OpenSpan Platform Enterprise Edition 4.0 will be available this year.

Wednesday, June 11, 2008

Live TIBCO panel examines role and impact of service performance management in enterprise SOA deployments

Listen to the podcast. Read a full transcript. Sponsor: TIBCO Software.

Myriad unpredictable demands are being placed on enterprise application services as Service Oriented Architecture (SOA) grows in use. How will the far-flung deployment infrastructure adapt and how will all the disparate components perform so that complex business services meet their expectations in the real world?

These are the questions put to a live panel of analysts and experts at the recent TIBCO User Conference (TUCON) 2008 in San Francisco. Users such as Allstate Insurance Co. are looking for SOA performance insurance: a way to think through how composite business services will perform and to ensure that these complex services will meet and exceed expected service-level agreements.

At the TUCON event, TIBCO unveiled a series of products and services that target service performance management, and that leverage the insights that managed complex event processing (CEP) provides. To help understand how complex event processing and service performance management find common ground -- to help provide a new level of insurance against failure for SOA and for enterprise IT architects -- we asked the experts.

Listen to the podcast, recorded live at TUCON 2008, with Joe McKendrick, an independent analyst and SOA blogger; Sandy Rogers, the program director for SOA, Web services and integration at IDC; Anthony Abbattista, the vice president of enterprise technology strategy and planning for Allstate Insurance Co., and Rourke McNamara, director of product marketing for TIBCO Software. I was the producer and moderator.

Here are some excerpts:
We are describing what could be thought of as insurance. You’ve already gone on the journey of SOA. It’s like going on a plane ride. Are you going to spend the extra few dollars and get insurance? And wouldn't you want to do that before you get into the plane, rather than afterward? Is that how you look at this? Is service performance management insurance for SOA?

It’s interesting to think of [SOA service performance management] as insurance. I think it’s a necessary operational device, for lack of better words. ... I don’t think it’s an option, because what will hurt if you fall down has been proven over and over again. As the guy who has to run an SOA now -- it’s not an option not to do it.

I actually do look at service performance management as insurance -- but along the lines of medical insurance. Anthony said people fall down and people get hurt. You want to have medical insurance. It shouldn't be something that is optional. It shouldn't be something you consider optional.

It’s something that you need to have, and something that people should look at from the beginning when they go on this SOA journey. But it is insurance, Dana. That’s exactly what it does. It prevents you from running into problems. You could theoretically go down this SOA path, build out your services, deploy them, and just get lucky. Nothing will ever happen. But how many go through life without ever needing to see a doctor?

What we are seeing is that, as services are exposed externally to customers, partners, and other systems, it affects the ability to fail over, to have redundant services deployed out, to be able to track the trends, and be able to plan, going forward, what needs to be supported in the infrastructure, and to even go back to issues of funding. How are you going to prove what's being used by whom to understand what's happening?

So, first, yes, it is visibility. But, from there, it has to be about receiving the information as it is happening, and to be able to adjust the behavior of the services and the behavior of the infrastructure that is supporting them. It starts to become very important. There are levels of importance and criticality among the different services and the infrastructure that’s supporting them right now.

But, the way that we want to move to being able to deploy anywhere and leverage virtualization technologies is to break away from the static configuration of the hardware, to the databases, to where all this is being stored now, and to have more of that dynamic resourcing. To leverage services that are deployed external to an organization you need to have more real-time communication.

With something like a Tivoli or a BMC solution, something like a business service management technology, your operational administrators are monitoring your infrastructure.

They are monitoring the application at the application layer and they understand, based on those things, when something is wrong. The problem is that’s the wrong level of granularity to automatically fix problems. And it’s the wrong level of granularity to know where to point that finger, to know whom to call to resolve the problem.

It’s right, if what's wrong is a piece of your infrastructure or an entire application. But if it’s a service that’s causing the problem, you need to understand which service -- and those products and that sort of technology won’t do that for you. So, the level of granularity required is at the service level. That’s really where you need to look.

A lot of the initiatives around ITIL Version 3.0 are starting to get some of those teams thinking in terms of how to associate the business requirements for how services are being supported by the infrastructure, and how they are supported by the utility of the team itself. But, we're a long way away from having everything all lined up, and then having it automatically amend itself. People are very nervous about relinquishing control to an automated system.

So, it is going to be step-by-step, and the first step is getting that familiarity, getting those integrations starting to happen and then starting to let loose.

We're dealing with organizations like Allstate, which have massive size and scale, with 750 services. What do people need to be considering, as we move into yet more complexity with virtualization, cloud computing, and utility grids?

You need to make sure that, as you move from the older ways of doing things -- from the siloed applications, the siloed business unit way of doing things -- to the SOA, services-based way of doing things, you don’t ignore the new complexities you are introducing.

Don’t ignore the new problems that you are introducing. Have a strategy in place to mitigate those issues. Make sure you address that, so that you really do get the advantage, the benefits of SOA.

What I mean by that is with SOA you are reusing services. You are making services available, so that that functionality, that code, doesn’t need to be rewritten time and time again. In doing so you reduce the amount of work, you reduce the cost of building new applications, of building new functionality for your business organization.

You increase agility, because you have reduced the amount of time it takes to build new functionality for your business organization. But, in so doing, you have taken what was one large application, or three large applications, and you have broken them down into dozens or tens of separate smaller units that all need to intercommunicate, play nice with each other, and talk the same language.

Even once you have that in production, you now have a greater possibility for finger-pointing, because, if the business functionality goes down, you can’t say that that application that we just put on is down.

The big question now is what part of that application is down? Whose service is it? Your service, or someone else’s service? Is it the actual servers that support that? Is it the infrastructure that supports that? If you are using virtualization technology, is it the hardware that’s down, or is it the virtualization layer? Is it the software that runs on top of that?

You have this added complexity, and you need to make sure that doesn’t prevent you from seeing the real benefit of doing SOA.
Listen to the podcast. Read a full transcript. Sponsor: TIBCO Software.