Wednesday, December 19, 2007

A logistics and shipping carol: How online retailers Alibris and QVC ramp up for holiday peak delivery

Listen to the podcast. Or read a full transcript. Sponsor: UPS.

Santa used to get months to check his list and prepare for peak season, but online and television retailers such as Alibris and QVC must take orders and make deliveries in a matter of days. The volume and complexity of online shipping and logistics continue to skyrocket, even as customer expectations grow more exacting. Shoppers routinely place gift orders on Dec. 21 and expect the goods on the stoop two days later.

For global shopping network QVC, the task amounts to a record peak of 870,000 orders taken in a single day -- more than three times typical volume. For rare book and media seller Alibris, it requires working precisely across ecologies of sellers and distributors. For partners like UPS, the logistics feat is a huge undertaking that spans the globe and demands technology integration with little room for error at record-breaking paces.

Listen as innovative retailers Alibris and QVC explain how they deal with huge demands on their systems and processes to meet holiday peak-season fulfillment. One wonders how they do it without elves, reindeer, or magic.

Join Mark Nason, vice president of operations at Alibris, and Andy Quay, vice president of outbound transportation at QVC, as we hear how the online peak season comes together in this sponsored podcast moderated by Dana Gardner, president and principal analyst at Interarbor Solutions.

Here are some excerpts:
What we strive for [at Alibris] is a consistent customer experience. Through the online order process, shoppers have come to expect a routine that is reliable, accurate, timely, and customer-centric. For us, doing that internally means that we prepare for this season throughout the year. The same challenges that we have are just intensified during this holiday time period.

Alibris has books you thought you would never find. These are books, music, and movies in the secondary market, with much more variety, that aren't necessarily found at your local new bookseller or media store.

We aggregate -- through the use of technology -- the selection of thousands of sellers worldwide. That allows sellers to list things and standardize what they have in their store through the use of a central catalogue, and allows customers to find what they're looking for when it comes to a book or title on some subject that isn't readily available through their local new bookstore or media seller.

You hit on the term we use a lot -- and that is "managing" the complexity of the arrangement. We have to be sure there is bandwidth available. It’s not just staffing and workstations per se. The technology behind it has to handle the workload on the website, and through to our service partners, which we call our B2B partners. Their volume increases as well.

So all the file sizes, if you will, during the transfer processes are larger, and there is just more for everybody to do. That bandwidth has to be available, and it has to be fully functional at the smaller size, in order for it to function in its larger form. ... These are all issues we are sensitive to, when it comes to informing our carriers and other suppliers that we rely on, by giving them estimates of what we expect our volume to be. It gives them the lead-time they need to have capacity there for us.

Integration is the key, and by that I mean the features of service that they provide. It’s not simply transportation, it’s the trackability, it’s scaling; both on the volume side, but also in allowing us to give the customer information about the order, when it will be there, or any exceptions. They're an extension of Alibris in terms of what the customer sees for the end-to-end transaction.

[For QVC] peak season 20-some years ago was nothing compared to what we are dealing with now. This has been an evolutionary process as our business has grown and become accepted by consumers across the country. More recently we've been able to develop our website as well, which really augments our live television shows.

... In our first year in business, in December 1986 -- and I still have the actual report, believe it or not -- we shipped 14,600-some-odd packages. We are currently shipping probably 350,000 to 450,000 packages a day. We've come a long way. We actually set a record this year by taking more than 870,000 orders in a 24-hour period on Nov. 11. That led into our typical busy season, from the Thanksgiving holiday through the December Christmas season. We'll be shipping right up to Friday, Dec. 21, for delivery by Christmas.

We've been seeing customer expectations get higher every year. More people are becoming familiar with this form of ordering, whether through the web or over the telephone. It's as close to a [just-in-time supply chain for retail] as you can get. As I sometimes say, it's "just-out-of-time"! We do certainly try for a quick turnaround.

The planning for this allows the supply chain to be very quick. We are like television broadcasts. We literally are scripting the show 24 hours in advance. So we can be very opportunistic. If we have a hot product, we can get it on the air very quickly and not have to worry about necessarily supplying 300 brick-and-mortar stores. Our turnaround time can be blindingly quick, depending upon how fast we can get the inventory into one of our distribution centers.

We carefully plan leading up to the peak season we're in now. We literally begin planning this in June for what takes place during the holidays -- right up to Christmas Day. We work very closely with UPS and their network planners, both ground and air, to ensure cost-efficient delivery to the customer. We actually sort packages for air shipments, during critical business periods, to optimize the UPS network.
Listen to the podcast. Or read a full transcript. Sponsor: UPS.

Wednesday, December 12, 2007

ELC Technologies and FiveRuns join forces for Rails development

ELC Technologies and FiveRuns Corp. are joining forces in a strategic partnership designed to offer enterprises a broad scope of resources for Ruby on Rails, the Web application framework.

Both ELC Technologies, Santa Barbara, CA, and FiveRuns, Austin, TX, already have a strong presence in the Rails market. We're also glad to see RESTful Ruby.

ELC Technologies specializes in Rails-based business applications and agile software development practices. It counts such companies as Buy.com, Cisco, Live Nation, MediaTrust and TuneCore among its client base.

FiveRuns delivers tools that allow IT managers to monitor the performance of Rails applications and their underlying infrastructure in production environments.

The two companies are collaborating on enterprise Rails deployments, which will be announced at a later date.

Jonathan Siegel, founder and president of ELC Technologies, explained the rationale for the new partnership.

We have repeatedly demonstrated the value of Rails for business-critical applications to the global companies we have as clients. However, one of the greatest challenges our clients face is monitoring and maintaining Rails within large-scale enterprise environments. Working with FiveRuns will allow our clients to easily manage their Rails deployments using FiveRuns' tools -- and to demonstrate for themselves that Rails can deliver enterprise performance as well as shorten time to deployment.
I'm seeing a lot of enthusiasm for Ruby on Rails in the enterprise, and it's beginning to pull out of the exotic niche category into more mainstream RAD use, as fellow ZDNet blogger Joe McKendrick points out.

Tuesday, December 11, 2007

Wind River's John Bruggeman on Google Android and the advent of mobile internet devices

Listen to the podcast. Or read a full transcript.

The Android open source mobile platform made a splash in October when Google announced it, along with the Open Handset Alliance (OHA). An Android software development kit (SDK) came on Nov. 12, and the first Android-based open source platform mobile phones are expected in mid-2008.

The impact of such a platform on mobile phones and carriers has been roundly debated, yet the implications for an entirely new class of mobile internet devices have received less attention.

In this podcast, John Bruggeman, chief marketing officer of Linux software provider Wind River Systems, digs into the technical, business model and open source implications of Android and OHA -- but he goes a step further.

Android will lead, he says, to a new class of potentially free mobile internet devices (MIDs) that do everything a PC does, only smaller, cheaper and in tune with global mobile markets that favor phones over PCs for web connectivity. [Disclosure: Wind River has been a sponsor of BriefingsDirect podcasts.]

Listen as I interview Bruggeman on the long-term disruptions that may emerge from the advent of Android.

Here are some excerpts from our discussion:
What’s new [in Android] is the business models that open up, and the new opportunities. That’s going to fundamentally change the underlying fabric of the mobile phone space and it’s going to challenge the traditional operators' or carriers' positions in the market. It’s going to force them, as the supply chain, to address this. ... Carriers potentially are going to have to embrace completely new revenue and service models in order to survive or prosper.

Clearly, the great promise of the Google phone platform is aimed more at an ISP mentality, where they make money on how we provision or enable new services or applications. ... The traditional carrier has a more connection-based business model. You pay for connection. This model will clearly evolve to be some sort of internet model, which today is typically an ad revenue-share model. That's how I see OHA playing out over time. We're going to have to adopt or embrace an ad revenue-share model.

There might be revenue that’s derived through connectivity, but increasingly we're seeing the big money around the monetization of advertising attached to search, advertising attached to specific content, and advertising attached increasingly to mobile location and presence.

I don’t think that the extreme is that improbable, that the actual connection price would go down to zero. I could have a mobile phone and pay a $0 monthly fee. ... The ad revenue is where the real dollars are here, as well as all the location-based value that you can do. This is the true delivery on the promise of the one-to-one marketer's dream. You’ve got your phone. I know exactly where you are.

It would be naïve to say the technology issues are completely solved, but I think a lot of the hard problems are understood, and there is a path to solution. Those will play out over the next 12 months. I see a clear road to success on the technology side. It will be easier for the technologists to overcome the obstacles than it will be for the business people to overcome the new models in an open source world.

There’s going to be a lot of pressure to drive down that connectivity price really quickly. I say that because I think you can’t ignore the overtones of Google being willing to buy their own bandwidth and become their own carrier. That threat is out there. As a carrier, I've either got to embrace or fight -- and embrace seems most logical to me.

The converged mobile device in particular -- something like an iPhone -- strikes me as a stepping stone between a traditional PC, as we know it, and some of these mobile devices.

If I can get a lot of what I get through the PC free or low-cost through one of these mobile devices, the only real difference is the size of the monitor, keyboard, and mouse. Isn’t there an opportunity in two, three, or four years that I might say, “I don’t need that PC and all that complexity, cost and so forth. I might just use my mobile device for almost all of the things I do online?"

PC manufacturers and those that are the traditional part of that supply chain are threatened by that every day now. You've hit it on the head. There’s an emerging market. Maybe the most important technology market to observe right now is the mobile Internet device (MID).

Many analysts are starting to pick up on it, and it could be viewed as the next generation of the mobile phone. But I think that's underselling the real opportunity. If you look on the dashboard of your automobile, the back of your airplane seat, everywhere you go and everything you touch, it is a potential resting place for a MID with a 4x6 screen or a 3x5 screen, or all different kinds of form factors. That kind of use gives you the experience that is the eventual promise of the Android platform.

We all should start thinking about and talking about the MID market pretty quickly. ... The pie that we're defining isn’t really just mobile internet or voice, presence, and mobile commerce. It’s really the whole internet.

The first thing is we need to get some Android-based phones out there. Some time next year, you're going to see the first phones, and that's when we're actually going to see the operators who offer those phones address all the business model issues that you and I have been talking about today.

So the next big step is that it's got to move from talk to the reality of "here are the phones," and now we're going to have to resolve all these issues that are out there. That's not years away -- that's next year.
Listen to the podcast. Or read a full transcript.

Monday, December 10, 2007

Red Hat unveils JBoss Developer Studio -- is it destined for an Amazon or IBM cloud?

Red Hat, Inc., Raleigh, NC, has finally released JBoss Developer Studio, an open source, Eclipse-based integrated development environment (IDE) that combines tooling with a runtime.

Red Hat released the beta version for free download on JBoss.org last August and said that the final subscription version would be available "later this summer." Since the beta was made available, according to Red Hat, there have been over 50,000 downloads.

Designed to allow enterprises to be more agile and to respond more quickly to changing business requirements, Developer Studio eliminates the need to assemble IDEs. It's built on the Eclipse-based developer tools contributed to Red Hat by Exadel in March and introduced under open source in June. The Exadel products contributed to the project included Exadel Studio Pro, RichFaces, and Ajax4jsf.

JBoss Developer Studio incorporates Eclipse tooling, the integrated JBoss Enterprise Application Platform, Red Hat Enterprise Linux for development use, and full access to Red Hat Network. Also included is tooling for technologies including Java EE, JBoss Seam, Ajax, Hibernate, Persistence, JBoss jBPM, Struts, and Spring IDE.

Developer Studio is available by subscription for $99.

On a modestly related note ...

You'll recall that Red Hat also made news when it announced its runtime-as-a-service on the Amazon EC2 cloud. Now ... I wonder, perhaps these tools could emerge as an IDE as a service placed up on Amazon, to deploy to RHEL runtime instances. Wow, could make a very cool combo.

Folks like Coghead and Bungee Labs are already making waves with development and deployment as a service. And Amazon ought to bring tools -- not just platform -- through its pay-as-you-drink hosting offerings sometime soon. Genuitec certainly has its eyes on this model, and is well-placed for it with MyEclipse.

So who will be the one to move their tools environment to the Amazon cloud first? Perhaps Amazon will offer several tools options, such as one for web apps and mashups, and another (or two) for Java development? You know you want to, Jeff.

I'd like to see a way for .NET developers to get such a Visual Studio-as-a-service (or open source Mono thing equivalent) up on Amazon EC2, to then force Microsoft Live to follow suit. Who needs tools licenses anymore then?

How about the IBM cloud? What tools might go well there? Can Sun Microsystems resist following suit with NetBeans-as-a-service? A New Hope.

Tools in the clouds. Luke, it is your destiny!

Federated ESBs come to fore as natural outcome of guerrilla SOA practices

Some IONA Technologies announcements today point up the growing practice of multiple ESBs within enterprises, often associated in a federated manner, and sometimes tasked with specific types of integration duties.

IONA is taking a "hybrid" approach to ESB offerings, with a coordinated open source and commercial strategy. [Disclosure: IONA has been a sponsor of BriefingsDirect podcasts.] IONA Technical Director Jim Strachan addresses some of the open source issues here. And IONA has also upgraded its Artix ESB, and has partnered to bring a management dashboard benefit to the mix.

These moves reflect how enterprises and service providers are using ESBs in innovative ways, in effect creating distributed ESBs to support SOA, SaaS and guerrilla SOA -- while building a path to holistic SOA that follows a crawl-walk-run ramp-up.

Indeed, some new use traits are emerging on how ESBs are actually being used in the market. One is that multiple ESBs are often used, or come into play, rather than one honking ESB swallowing everything up. Sometimes such varied ESB use comes from different SOA projects that evolved using different means to access and integrate resources. Sometimes it's from separate organizations or departments that merged or became partners. SaaS also encourages ESBs at the edge and internally, so there's likely a mix there too.

I'm also seeing instances of ESBs that are tuned or dedicated to specific types of integration, or integration that is "pointed," if you will, in a specific direction. By that I mean integration for data services or unstructured content, integration for management feeds, or integration from outside partners across supply chains.

An ESB for various flavors of integration makes a ton of sense to me. Deciding later whether to consolidate ESBs, while learning what works best on a more granular level, follows how IT often evolves. It certainly aligns with open source infrastructure use adoption patterns.

Given these scenarios, rather than force an architect to pick one ESB and make it dominant, we just as often now see federation of several ESBs. Given the nature of ESBs -- many integration points -- this makes sense. So hybrid ESB use makes sense and reflects what's going on in actual use. Another aspect is that ESBs are not just federated on equal footing. An ESB can be used in master and slave configurations, where various architectural topologies are likely, given the many possible ways that these SOAs emerge. Old and new can play well together based on many types of integration means. Think of it as distributed integration.
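To make that federation pattern concrete, here's a minimal sketch in Python of a department-level bus that resolves what it can locally and escalates everything else to a master bus. All class, endpoint, and message names are hypothetical illustrations, not any vendor's API.

```python
# Hypothetical sketch of ESB federation: a departmental bus handles
# the message types it knows, and forwards the rest to a master bus.

class Endpoint:
    def __init__(self, name):
        self.name = name

    def handle(self, message):
        print(f"[{self.name}] processed {message['type']}")

class FederatedESB:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent   # the "master" bus, if any
        self.routes = {}       # message type -> local endpoint

    def register(self, msg_type, endpoint):
        self.routes[msg_type] = endpoint

    def dispatch(self, message):
        endpoint = self.routes.get(message["type"])
        if endpoint:
            endpoint.handle(message)       # resolved locally
        elif self.parent:
            self.parent.dispatch(message)  # escalate to the master bus
        else:
            raise LookupError(f"no route for {message['type']}")

master = FederatedESB("corporate-esb")
master.register("erp.order", Endpoint("erp-adapter"))

dept = FederatedESB("dept-esb", parent=master)
dept.register("crm.update", Endpoint("crm-adapter"))

dept.dispatch({"type": "crm.update"})  # handled by the department bus
dept.dispatch({"type": "erp.order"})   # federated up to the corporate bus
```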

In this environment, IONA is offering a logical hybrid solution set. On one hand, FUSE allows the benefits of open source and community development to make ESBs inclusive and standards-based. And the community provides a great way for many connectors and modules to well up, bringing even more assets and resources into play with the ESB. This makes it far easier for esoteric applications and content to play in an SOA, and those connectors are made openly available.

In this open source role for an ESB, Metcalfe's Law (value of network grows with number of participants on it) applies too. The value of the ESB increases with the number and diversity of assets and resources that can attach to it. FUSE aims to exploit this, as well as provide a low-cost and simpler way for developers to enter into ESB use.
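For reference, Metcalfe's Law is usually stated in terms of the possible pairwise connections among n participants -- a rough formulation, but it captures why every added connector or asset raises the bus's value:

```latex
% Metcalfe's Law: network value grows with possible pairwise connections
V(n) \propto \binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^2}{2}
```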

On the other hand, in legacy-rich and CORBA environments, the IONA Artix offering binds and integrates core and more traditional messaging and ORB-based assets and resources. So you have a backward-facing and legacy-compatible ESB offering, one that scales to large transactional demands in Artix. And you have the new kids on the block, Web services and SOA greenfield services that can be accessed and organized via FUSE and the Apache community.

Putting FUSE and Artix 5.1 into a federated yet managed configuration then offers the best of many worlds, and gives organizations a variety of choices on how to enter and manage the expansion of SOAs, based on their specific situations. And this also mitigates future risk by making unknown scenarios -- including more SaaS use -- easier to meld into the architecture.

IONA's partnering with Hyperic for FUSE HQ broadens the management into mature console-based delivery, while also expanding the scope of what is managed. So that makes sense. All in all, it's an approach that makes market adoption and inclusion more in tune with guerrilla SOA than with master-plan SOA based on one vendor or one product set.

In other SOA news today -- again with an open source angle -- WSO2 announced an open-source approach to help users pass consistent identity data across networks, while protecting them from such things as phishing and other identity attacks in Web applications.

Web services middleware provider WSO2, based in Colombo, Sri Lanka and Mountain View, Calif., recognizes that as SOA becomes more prevalent and more complex -- linking data, applications and services both inside and outside the enterprise -- security and authentication become prime concerns. [Disclosure: WSO2 has been a sponsor of BriefingsDirect podcasts.]

The WSO2 Identity Solution (IS) is based on Microsoft's CardSpace technology, which is built on the open standards Security Assertion Markup Language (SAML) and WS-Trust. WSO2 IS will operate with CardSpace components from multiple vendors.

It also works with current enterprise identity directories, such as those based on the Lightweight Directory Access Protocol (LDAP) and Microsoft Active Directory.

WSO2 IS has two primary components: the Identity Provider and the Relying Party Component Set. The Identity Provider lets users issue "cards" -- both managed information cards and self-issued cards -- that allow them to log into any Web application that supports CardSpace. Web sites that rely on this authentication don't need to store any passwords or other personal details.

Key features of Identity Provider include:

  • User store support for the most common directories that offer standard LDAP or Java Database Connectivity interfaces. It also includes a built-in user store for smaller companies.
  • Claim support for standard and custom claims, so users can keep full control over what personal information is shared.
  • Statistics, reporting, and an audit trail, which let administrators monitor user accounts and issuances of information cards.
  • Revoking mechanism, which allows administrators to revoke user cards and block them from being used for authentication.

The Relying Party Component, built around the Apache HTTPD module (mod_cspace), plugs into the most common Web servers to add support for CardSpace authentication requests. The module is independent of any server-side Web framework, and it can set up CardSpace authentication with any Web framework running on Apache 2, including PHP, Python, and Perl applications.
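For illustration only, here's a rough Python (WSGI) sketch of the gatekeeper pattern such a module implements in front of a framework: intercept the request, validate the posted token, and hand the extracted claims to the application. The header name, the validate_token() stub, and every other identifier here are hypothetical, not the actual mod_cspace or WSO2 API.

```python
# Illustrative only: the gatekeeper pattern behind CardSpace-style
# relying parties. All names are hypothetical.

def validate_token(token):
    # A real deployment would decrypt the posted information-card
    # token and verify the SAML assertion inside it (WS-Trust/SAML).
    return {"email": "user@example.com"} if token == "valid" else None

def cardspace_gate(app):
    """WSGI middleware: only let authenticated requests through."""
    def gate(environ, start_response):
        claims = validate_token(environ.get("HTTP_X_INFOCARD_TOKEN", ""))
        if claims is None:
            start_response("401 Unauthorized",
                           [("Content-Type", "text/plain")])
            return [b"Please present an information card."]
        environ["cardspace.claims"] = claims  # hand claims to the app
        return app(environ, start_response)
    return gate

def hello_app(environ, start_response):
    """The protected application never sees a password, only claims."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [f"Hello, {environ['cardspace.claims']['email']}".encode()]

application = cardspace_gate(hello_app)
```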

Key features of the Relying Party Component include:

  • Access control for static content in Apache HTTPD
  • An integration interface for developers
  • Support for leading content management frameworks, such as Drupal and MediaWiki
  • A Java servlet filter to provide an integration point for J2EE-based Web applications.
WSO2 also recently upgraded its open source ESB.

Friday, December 7, 2007

ZoomInfo offers more evidence of a 'controlled circulation' advertising benefit quickly emerging on the web

Get ready for new "controlled circulation" models on the web, ones that target you based not on your preferences for music or soft drinks -- but on what you consume in your occupation. Think of it as B2B social networking.

First, some set-up ... One of the great media inventions of the mid-20th century was the notion of affinity-based, controlled circulation publishing. Those creating magazine titles that catered to defined groups -- rather than mass media volume plays like network television -- went granular.

By focusing on concentrated audiences, these publishers walled up "universes" of buyers that passionately sought specific information as defined by discrete hobbies or occupations. Bill Ziff Jr. homed in on the hobbies, and grew a media empire on titles that linked up dedicated buyers -- of things like electronics kits, models, automobiles (and the jackpot, personal computers) -- to the sellers of the actual goods behind the passion. The ads inside these special interest pubs generated high premiums, based on the tight match between engaged (and well-monied) buyers and drooling sellers.

Norm Cahners took the model in the direction of industrial business niches. He provided free monthly magazines based on slices of industrial minutiae that delivered useful albeit dry information to those specifiers of myriad corporate goods and services. You order gizmos for your buggy whips? You probably spend millions of dollars on procurement per each kind of good per year. Let me introduce you to some sellers of those goods who want to make you a deal.

The Cahners Publishing magazines -- on things like plastics use, integrated circuit developments, materials handling and design engineering -- were free to readers, as long as those readers identified themselves as corporate decision makers with budget to spend. Again, high ad premiums could be charged by linking engaged readers (with huge annual budgets) to advertisers who needed to reach hard-to-find and shifting groups of corporate buyers.

Soon the burgeoning lists of these readers, sliced and diced by buying needs, and sanctified by audit bureaus as valid (mostly), became very, very valuable. As a controlled circulation publisher, if you had the top one or two monthly magazine titles that generated the definitive list of those buying all the industrial valves, say, in North America -- you were sitting pretty. You controlled the circulation, defined and refined the audience, and so told the sellers how much they needed to pay you to reach those buyers. You priced high, but still less than these sellers would need to spend to send a warm body carrying a bag into each and every account (on commission).

In effect, the controlled circulation publishers collected straight commissions on billions of dollars in commercial and special interest goods being bought and sold. They were a virtual sales team for all kinds of sellers. Editorial was cheap. Life was good.

And then 10 years ago the Web came along and pretty much began to blow the whole thing apart. Engaged users started using Web search, and explored their vendors' web sites on their own. Vendors could reach users directly, and used their websites as virtual sales forces too. Soon there were wikis that listed all the sellers of goods in certain arenas of goods and services. Those seeking business or hobby information could side-step the editorial middleman and go direct to the buying information on goods and services they wanted. We're only into the opening innings on this, by the way.

But the same disruption that plagues newspapers like the San Jose Mercury News and The Boston Globe -- both of which should be doing great based on their demographic reach -- is undermining the trade media too. It's the web. It's search. It's sidestepping the traditional media as a means to bind buyers and sellers. The web allows the sellers to find the buyers, and the buyers to find the sellers with less friction, less guessing, less cost. Fewer middlemen.

And this means the end of controlled circulation as we have known it. ... Or does it?

Just as the web has made it a lot harder for media companies to charge a premium for advertisers to reach a defined universe of some sort, the web could also allow for a new breed of controlled circulation, one that generates "universes" on the fly based on special interest search, not based on special interest magazines.

The current web ad model has evolved to be based on blind volume display ads, with the hope of odd click-throughs, usually less than 0.5 percent of the total banner ads displayed. Advertisers know exactly what their ad dollar gets them, and it's not enough. Even when seekers click on ads, they usually get sent to a home page that was just as easily reached through keyword searches from a web search provider (for free), based on their real interests. Enter Google. And you know the rest.

Why the history lesson? Because we're now beginning to see some new variations on the controlled circulation theme on the web that create additional models. Controlled circulation could be back. And that could mean much bigger ad bucks than web display ads or even keyword-based ads can generate. It's what has Microsoft gaga over Facebook. And News Corp. gaga over MySpace. And Viacom beside itself because it has no such functional base yet.

Controlled circulation is coming to the web on one level via social networks, mostly for consumer goods and services -- sort of what Bill Ziff did for hobbyists in the 1950s and 1960s. Social networks like Facebook and MySpace induce their member users to cough up details about themselves -- just like controlled circulation publishers used to require of readers to get free magazines on specific topics. Based on the need to expose yourself on a social network to get, well ... social ... you provide a lot of demographic details that can then be carved up into the equivalent of controlled circulation universes. Based on your declared consumer wants, fad preferences, age and location, you give advertisers a means to target you.

This model is only just now being probed for its potential, as the Beacon trial-and-error process at Facebook these days attests. Soon, however, an accepted model will emerge for binding consumers and sellers of goods and services, a model better than banner ads, one that can go granular on user preferences (but not too granular, lest privacy bugaboos rear their paranoid heads). When this model is refined, everyone from Microsoft to Yahoo to Google and Time Warner will need to emulate it in some fashion. It will be the third leg on the web ads stool: display, search-based, and now reader-profile-constructed controlled circulation.

Which brings me to ZoomInfo. (Disclosure: ZoomInfo has been a sponsor of some BriefingsDirect B2B podcasts and videocasts that I have produced). What's so far missing in all of the Facebook hysteria is the Norm Cahners part, of how to take the emerging controlled circulation web model and apply it to multi-trillion dollar B2B global markets. How to slice and dice all the companies out there with goods and services you -- as a business buyer -- need to know about? Instead of the users giving up profile information on themselves as a way of providing profile-constructed controlled circulation, why not let the companies provide the profiles that the users can access via defined searches based on their actual needs?

Wade Roush over at Xconomy gives us a glimpse of this model based on what ZoomInfo is now doing with "business demographics," or what Zoom calls Bizographics. This is the B2B side of what social networks are doing on the consumer side, but with a twist. By generating the lists of businesses that provide goods and services sought via a search, and even more lists of the goods themselves, users can educate themselves, and the bond between B2B buyers and sellers is made and enriched. All that's needed is the right kinds of searches that define the universe of providers that users can then explore and engage with.

ZoomInfo is but scratching the surface of what can be an auspicious (and robust) third leg on the B2B web knowledge access stool. By satisfying both seekers and providers of B2B information on business needs, ZoomInfo can generate web page real estate that is sold at the high premiums we used to see in the magazine controlled circulation days. Occupation-based searches for goods, information, insights and ongoing buying activities are creating the new B2B controlled circulation model.

What's more, these defined B2B universes, generated on the fly based on occupations and buying needs, amount to giving more power to the users via what Doc Searls correctly calls Vendor Relationship Management. It's a fascinating concept we'll be seeing a lot more of: matching buyers and sellers on the web based on their mutual best interests. Mr. Buyer, please find Mr. Seller -- on your terms, based on your needs.

Monday, December 3, 2007

More hints that IT systems analysis and on-demand models are coming together

The hot (albeit not necessarily sexy) segment of IT operations -- the analysis and intelligence-gathering from logs and performance management data -- is showing increasing signs of an on-demand future.

First, Paglo came out last month (in beta) with a free and open source (GPL) crawler service that scours the reams of log files and other electronic records users point it at inside data centers and server farms. From the resulting index, IT operators can view and search the analysis and metrics of IT use and performance data they need, as an online service via a browser.

Paglo provides IT administrators and operators the free crawler service to gain information, or metadata, on all sorts of assets on their networks, including across VPNs to remote offices. Because the crawler is open source, folks are free to write scripts that search into various modules and whatever else they want to gather data from on their networks. Other users can then benefit from these scripts via the community. Pretty quickly the Paglo community ought to be able to index just about anything of import on their networks. No cost to users but the time involved.
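As a rough sketch of that crawl-and-report flow -- with a made-up endpoint URL and payload shape, not Paglo's actual API -- a minimal collector script might look like this in Python:

```python
# Sketch of the crawl-and-index pattern: gather metadata locally,
# then push it to a hosted index. Endpoint and payload are hypothetical.
import json
import platform
import urllib.request

def collect_host_metadata():
    """Gather basic inventory facts about this machine."""
    return {
        "hostname": platform.node(),
        "os": platform.system(),
        "os_version": platform.release(),
    }

def push_to_index(record, endpoint="https://index.example.com/submit"):
    """Send the collected metadata to the index in the cloud."""
    body = json.dumps(record).encode("utf-8")
    req = urllib.request.Request(
        endpoint, data=body,
        headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)

if __name__ == "__main__":
    push_to_index(collect_host_metadata())
```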

The metadata is then -- they assure me, safely -- sent to an index instance in the cloud managed by Paglo. The managers of the crawler and hosted data can then securely search the logs using all sorts of queries, charts, views, and dashboards to gather quantitative and qualitative business intelligence on their IT systems use and use patterns.

The analysis can initially help with such chores as determining how many Microsoft Office suites are actually in use, or how to do quick audits of this or that element on a network. This can help with audits, to identify straggler application installations and to track down when users have installed things they should not. But later, the service could spawn premium services for operations analytics and troubleshooting.

Furthermore, by aggregating and (one hopes) anonymizing the data from many IT sites, Paglo could create definitive market research on just what constitutes IT use and context, based on just the facts, ma'am. Rather than rely on quasi-annual surveys by IT analyst firms (always on the vanguard of objective results), a broad Paglo audit of large swaths of IT use and habits -- based on valid and scientific samplings (if not actual empirical censuses) -- could take the guesswork out of what IT is actually being used in certain types of companies and regions. That would be some mighty fine data, and could hold the IT vendors' feet to the fire on their real penetration and use patterns.

But I can see where this can go much further. Views and queries can show exactly what is being used and, in many respects, how. Also, Paglo can then aggregate that across many user sites and types of users to draw empirical, statistically relevant determinations of what is being used in the field. Compare and contrast between verticals, SMBs and enterprises, regions and/or geographies.
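A toy example of that aggregation step -- the site inventories below are invented -- shows how per-site metadata rolls up into market-share-style stats:

```python
# Roll per-site inventories up into deployment-share statistics.
from collections import Counter

site_inventories = {                      # hypothetical crawl results
    "acme-corp": ["apache", "mysql", "exchange"],
    "globex":    ["apache", "postgres"],
    "initech":   ["iis", "mysql"],
}

counts = Counter(pkg for inv in site_inventories.values() for pkg in inv)
sites = len(site_inventories)
for pkg, n in counts.most_common():
    print(f"{pkg}: deployed at {n} of {sites} sites ({100*n/sites:.0f}%)")
```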

If Paglo gets any kind of volume adoption and the data is good and comprehensive, we could end up with a comScore for IT components and infrastructure bits. Perhaps Paglo will make its money from selling the use patterns and market share data, while giving away the means to the tactical analysis for each company. So far they are mum on where their remuneration will come from.

Suffice to say, such a service will generate a lot of page views that only an IT systems administrator could love. That in itself could spell advertising gold for those selling to IT shops.

And, hey, free insight into IT ops -- as long as you feel okay about someone else's crawler sniffing around your network and servers -- could be an offer some cheapo outfits can't refuse. If the CIO won't pay for analytics products, what else could an operations manager do to prevent those awful Monday mornings?

On another IT analysis front, LogLogic announced today that longtime IT infrastructure thought leader Pat Sueltz has joined as CEO. Pat has been marching upward in title (while perhaps sliding a bit in employer size) over the past seven years. You may recall Pat as the gal who managed the Java relationship for IBM, back when Sun Microsystems and IBM saw eye to eye, at least on a common foe: Microsoft.

Then Pat went to Sun -- after making a lot of noise at IBM about why Java ought to be overseen by a standards body (if not open source). And this was back in the mid-1990s. After a stint at Sun in charge of software (not great timing, it turns out) and then Sun services, she did a well-timed stint at Salesforce.com. And therein lies the rub on the intersection of LogLogic and SaaS and on-demand models.

She won't commit, of course -- this being her first week -- on how on-demand and LogLogic come together. But I'll wager a new chapter of growth potential for LogLogic lies in some of the interesting things Paglo has been trying, not to mention following the Salesforce ecology thing. There's also Splunk and what it has done with an online open repository of analytics data, known as SplunkBase. [Disclosure: Splunk has been a sponsor of BriefingsDirect podcasts.]

Pat comes to LogLogic from SurfControl, where she was CEO. I'll be keeping an eye on Pat, with keen interest on how research, trends, data and online business models come into play with the perhaps no longer esoteric log file management arena. I'm also looking for real business intelligence as applied to IT, culled from this log data. Between those values and the compliance imperatives, this is a high-growth area.

In other words, there's gold in them thar logs.

Wednesday, November 21, 2007

We'll see more acquisitions that meld telcos with IT vendors

News from London is that Deutsche Telekom may make a bid to buy IT services giant EDS. This is only the opening volley in a forthcoming period of acquisitions that meld telcos with IT vendors.

I recently suggested that BEA Systems, as it spurns Oracle's initial bid, may also be a good fit for a large telco such as AT&T. Much of the same logic I applied to a BEA-telco mashup works for a Deutsche Telekom and EDS marriage.

The fact is that IT vendors -- be they code/systems providers or systems integrators (or both) -- are becoming more like service providers. We see evidence of this with IBM's recent Blue Cloud announcement, the go-to-market match-up between Red Hat and Amazon, and also the way that many new startups are entering the field -- as services -- such as Paglo this week. (Look for a separate blog on Paglo soon.)

The fact also remains that telcos and mobile services providers are increasingly becoming IT providers, either directly or as integrators or aggregators of IT functions that they then deliver to their customers -- both B2B and B2C. Enterprises will enjoy efficiencies in buying business services from a single entity when that organization can combine the IT, network, integration, communications services, outsourcing and software. Who or what best combines these features for the best business-cost benefit, is the $100 billion question.

The value-add to enterprises on IT increasingly comes from the integration, services provisioning and services ecology partnerships, not from the code base or hardware differentiation. Virtualization, open source, and SaaS will hasten this irreversible course. And when everything is a TCP/IP-driven function or asset, why not merge, mash, and package it all up with a bright red bow and lock it into a big multi-year services contract?

And, of course, we're now also on the downward slope of a massive IT supplier consolidation era (most notably among software vendors). Some even call it the end of best-of-breed. I'm not sure it's the end of best-of-breed; there will always be standalone functions and/or applications and services that come to market to meet new needs.

As Peter Zotto, CEO of IONA Technologies, recently told me:
As 'middleware' vendor consolidation continues, big proprietary stacks will get bigger, more expensive and more complex -- and the speed of innovation will decline. This is the exact opposite of the potential of SOA. "Anti-stack" vendors, like IONA, that deliver industry-standard middleware technology for performance-demanding SOA environments are already benefiting customers looking for lower-cost and easier-to-deploy software. This is just the beginning of a new innovation cycle kick-started by industry consolidation. (Disclosure: IONA has been a sponsor of my BriefingsDirect podcasts.)
But clearly the larger vendors -- Oracle, IBM, SAP, HP, Microsoft, et al -- have gotten even larger via consolidation, and are closer to providing a full set of IT offerings, with varying degrees of actual deep and meaningful integration. As they become more like service providers these bulked-up vendors actually drive ecologies of ISVs and providers, and -- just like a telco -- manage the customers on one end, and the supply chain participants on the other.

So when you associate and explore the consequences of these trends, it points to more types of mergers along the lines of Deutsche Telekom and EDS, or even BEA and AT&T.

The telcos had better not wait too long as they weigh buying or being bought. They will eventually be competing with a class of consolidated vendor/suppliers that have traditionally moved more quickly and better than the telcos in their best days. There will only be a handful of these behemoths bestriding the globe (until and if decentralization again appears?).

Indeed, if the telcos wait too long, or make the wrong acquisitions, they might lose that customer relationship altogether. And where would they be then, especially as new networks based on new wireless technologies appear?

One aside: Watch how Cisco Systems moves on this. I predict some interesting mergers involving Cisco and large network/services providers in 2008.

Friday, November 16, 2007

Open Group aims to make IT architects 'distinguished'

The Open Group, a vendor- and technology-neutral consortium, has taken certification to a new level with the announcement of its Distinguished Certified IT Architect designation within the IT Architect Certification program (ITAC).

As enterprise IT moves into new, uncharted waters -- especially the area that encompasses services oriented architecture (SOA) -- one of the chief concerns has been the availability, or lack of availability, of the trained and experienced architects who are necessary to make the vision a reality.

Begun two years ago, ITAC, a peer-review process, has already certified over 2,000 architects from some of the largest names in global enterprises. The new level of certification will require that individuals demonstrate a history of significant impact to the business through the application of IT architecture.

[Disclosure: I recently moderated an Open Group panel.]

The Open Group had already set the bar pretty high for architects certified at the basic level. Steve Nunn, the group's COO, told BriefingsDirect in a round-table discussion last March that one of the initial steps for certification was compiling a resume, and, in some cases, that has amounted to a 52-page document.

The core attributes expected from the Distinguished Certified IT Architect include:

  • Executive level communication skills
  • Responsibility for significantly complex architecture engagements
  • A demonstrated architectural vision for key business initiatives
  • Governance expertise

The new certification provides for three distinct career paths: chief/lead architect, profession executive, and enterprise architect.

A great deal of will power and leadership charisma will be required to make inroads toward SOA benefits.

This means that the architects of SOA must be as much evangelists and consensus-builders as technologists. They must be trusted and absolutely respected. Pointy-haired bosses à la Dilbert need not apply.

SOA architects must also balance short-term business outcomes with longer-term objectives aimed at maintaining quality and maximizing IT value. Too often architecture has been focused on discrete initiatives or infrastructure projects, such as server architecture or network architecture, rather than the broader IT perspective.

The concept of total architect also jibes well with Total Architecture, a topic I explored in a recent podcast with Dr. Paul Brown, author of “Succeeding with SOA: Realizing Business Value Through Total Architecture.”

Latest 'The Group' podcast delves into Google Android, Yahoo's China syndrome, and Facebook gestures (again)

Steve Gillmor's The Gang debuts its second coming for the second time. There are always good tidbits and chunky nuggets in these roundtable gab-fests.

As usual the topics place more weight on the Web 2.0 side than the IT side, but I'm working on it.

Steve's guests this week include yours truly, Jason Calacanis, Sam Whitmore, Mike Arrington, Dan Farber, Mike Vizard, Robert Anderson, and New York Times Bits columnist Saul Hansell.

This is Calacanis's last appearance on The Gang, so get him while you can. I forget who he impersonates this time; might be Marc Canter again, or Don Kirshner, I'm not sure.

Go to Facebook and join The Gillmor Group, if Steve lets you. So far he seems uncharacteristically friendly. It can't last.

Thursday, November 15, 2007

IBM's 'Blue Cloud' signals the tipping point for enterprise IT into services model

I recall a front page story I wrote for InfoWorld back in 1997. At the time there were still plenty of naysayers about whether websites were a plaything or a business tool. There was talk of clicks and mortar, and how the mortar would always determine business outcomes.

And then General Motors -- the very definition of a traditional big business -- unveiled an expansive website that fully embraced the Internet across its businesses. We at InfoWorld wrote about GM's embrace of the Web then as a corporate tipping point, from which there was no going back. Clicks became mainstream for businesses. Case closed.

And so it is today, with IBM's announcement of Blue Cloud -- an approach that not only talks the services talk, but walks the services walk. We are all at the tipping point where IT will be delivered of, by and for services. If Google, Yahoo!, Amazon and eBay can do what they do with their applications and services, then why shouldn't General Motors? Or SMB XYZ?

So the king of mainframes and distributed computing moves the value expectations yet again -- to the pre-configured cloud architecture. The standards meet the management that meets the utility that gets the job done faster, better, cheaper. Slap an IBM logo on it and take it to the bank.

The future of IT is clearly about the efficiencies and agility of the grid/utility/Live/fabric/cloud/SOA/WOA thing. There can be no turning back. I believe Nick Carr is coming out with a book on this soon, The Big Switch: Rewiring the World, from Edison to Google, and IT is by no means irrelevant this time.

IBM's Blue Cloud, arriving in the first half of 2008, will use IBM BladeCenter servers, a Linux operating system, Xen-based virtualization and the company's own Tivoli management software. Nothing about this is terribly new. Sun Microsystems has been talking about it for years. HP is well on the way to making it so, given its Mercury and Opsware acquisitions. Citrix has an eye on this all too. Red Hat has its approach. Amazon is game. Google is riding the wave. Even Microsoft has hedged its bets.

But the tipping point comes when IBM's global clout in the major accounts is brought into play. The sales force will feel The Force, Luke. IBM will march in and let your IT services architecture mimic the service providers' basic set-ups too. You gain the ability to integrate your internal services with those of your partners, customers, suppliers, vendors and providers. Next will come an ESB in the cloud, no? This makes for a fertile period of innovation.

Perhaps IBM will also cross the chasm and host their own services -- not applications per se, but commodity business functions that ISVs, providers, and companies can innovate on top of or in addition to. Google has maps, but IBM has payroll, or tax returns, or purchasing. Could be quite interesting. I would expect IBM to offer ads in these services too some day (come on, Sam, it's not so bad).

And that also means you'll be provisioning IT internally and externally as subscription services. Charge-backs and IT shared services models become the standard models across both supply chains as well as value-added sales activities. Businesses will determine their margins based on the difference between what they pay for IT services (internal or external) plus the cost of the value added services -- and then what they charge on the receiving end. High-volume, recurring revenue, fewer peaks and troughs.

This is really the culmination of several mega trends in two major areas: IT and economics of online commerce. The trends that support this on the IT side include virtualization, high-availability clustering, open source platforms and tools, industry standard multi-core hardware, storage networks, Java middleware, WAN optimization, data services and federation, scripting language maturity -- as well as application consolidation and modernization, datacenter unification, low-energy-use dictates, and common management frameworks. The result is something like Blue Cloud.

The online economics trends include ecommerce, advertising-supported Web services/media/entertainment, pay-as-you-use services and infrastructure as a service, and -- of course -- free code, free tools, free middleware, free stacks. It's all free -- except the service, maintenance, and support (otherwise known as a subscription).

And if one major corporation buys into IBM's Blue Cloud and they deploy in such a way as to exploit all these mega trends -- while counting on IBM as the one throat to choke as the means to reduce change risk -- what happens?

Well, they might see total IT operating costs go down by 40 percent over a few years, while also enjoying the productivity benefits of SOA, SaaS, and services ecologies like Salesforce.com, and therefore become more agile in how they acquire and adjust their business processes and services delivery. You might get to do more for a lot less. And with a lot less IT labor.

And so our Blue Cloud-using corporation has competitors who will, no doubt, need to follow a similar course, lest they be set on a path of grave disadvantage due to higher costs and an inability to change as quickly in their markets. If a mere 50 of the global 500 move to a Blue Cloud or equivalent, it would be enough to change the game in their respective industries. We've seen it happen in financial services, retail, music and media, and IT itself.

And so large enterprises will need to make not just decisions about technology platform, supplier, and computing models. They will need to make bigger decisions based on broad partnerships that produce services ecologies in niches and industries. For an enterprise to adopt a Blue Cloud approach is not just to pick a vendor -- it is picking much more. The businesses and services and hosting all become mingled. It becomes more about revenue sharing than just a supplier contract.

Yes, Blue Cloud and many other announcements and alignments in 2007 point to a 2008 in which a services ecology evolves and matures for many industries. The place where differentiation matters most is at the intercept of proper embrace of the service model, of picking the right partners, and of exerting leadership and dominance of best practices within a business vertical or niche. You'll have a different relationship with your services partner than you do with your IT vendor. IBM will show you the way.

Hear the music? It ain't the Blues! It's the quick-step. Dancers, pick your partners carefully. You're going to be spending a lot of time sharing your futures together.

Wednesday, November 14, 2007

BriefingsDirect SOA Insights analysts examine 'Microsoft-Oriented Architecture' and evaluate SOA's role in 'Green IT'

Listen to the podcast. Or read a full transcript.

The latest BriefingsDirect SOA Insights Edition, Vol. 27, provides a roundtable discussion and dissection of Services Oriented Architecture (SOA)-related news and events with a panel of IT analysts and experts.

Please join noted IT industry analysts and experts Jim Kobielus, principal analyst at Current Analysis; Neil Macehiter, principal analyst at Macehiter Ward-Dutton; and Joe McKendrick, an independent analyst and blogger, for our most recent discussion, which is hosted and moderated by myself, Dana Gardner.

In this episode, recorded Oct. 26, our group examines the recent Microsoft SOA & Business Process Conference. The debate centers on whether the news around the pending Oslo approach amounts to support for SOA or Microsoft-Oriented Architecture (MOA) instead.

[UPDATE: Todd Biske weighs in on the topic.]

Is this yet another elevation of the COM/DCOM wars, or is Microsoft moving to a federated modeling of business process value, one that may leapfrog other SOA vendors' products and methods? Or, perhaps Microsoft is seeking both to steer SOA adopters to its platforms and to offer an inclusive business process modeling approach? Look for the answers in this discussion.

What's more, the analysts also evaluate SOA's role in Green IT. Does SOA beget better energy and resources use, or does better energy conservation in IT inevitably grease the skids toward greater SOA adoption -- or both? Learn more about how ROI and Green IT align with SOA patterns and adoption.

Here are some highlights and excerpts:
On SOA and Microsoft's Oslo ...

The SOA universe is heading toward a model-driven paradigm for distributed service development and orchestration, and that's been clear for several years now. What Microsoft discussed this week at its SOA and BPM conference was nothing radically new for the industry or for Microsoft. Over time, with Visual Studio and the .NET environment, they've been increasingly moving toward a more purely visual paradigm.

Looking at the news this week from Microsoft on the so-called Oslo initiative, they are going to be enhancing a variety of their Visual Studio, BizTalk Server, BizTalk Services, and Microsoft System Center, bringing together the various metadata repositories underlying those products to enable a greater model-driven approach to distributed development.

I was thinking, okay, that’s great, Microsoft, I have no problem with your model-driven approach. You're two, three, or four years behind the curve in terms of getting religion. That’s okay. It’s still taking a while for the industry to completely mobilize around this.

In other words, rather than developing applications, they develop business models and technology models to varying degrees of depth, and then use those models to automatically generate the appropriate code and build the appropriate sources. That's a given. One thing that confuses me, puzzles me, or maybe just dismays me about Microsoft's announcement is that there isn't any footprint here for the actual standards that have been developed, like OMG's Unified Modeling Language (UML), for example.

... So, it really is a Microsoft Oriented Architecture. They're building proprietary interfaces. I thought they were pretty much behind open standards. Now, unless it’s actually 2003, I have to go and check my calendar.

I don’t see this as exclusively Microsoft-oriented, by any stretch. ... There are a couple of elements to the strategy that Microsoft’s outlined that differentiate it from the model-driven approaches of the past. The first is that they are actually encompassing management into this modeling framework, and they're planning to support some standards around things like the Service Modeling Language (SML), which will allow the transition from development through to operations. So, this is actually about the model-driven life cycle.

The second element where I see some difference is that Microsoft is trying to extend this common model across software that resides on premises and software that resides in the cloud somewhere as services. So, it has a common framework for delivering what Microsoft refers to as software plus services. In terms of standards support with respect to UML, Microsoft has always been lukewarm about UML.

A few years ago, they were talking about using domain-specific languages (DSLs), which underpin elements of Visual Studio that currently exist, as a way of supporting different modeling paradigms. What we will see is a resurgence of DSLs as a means of enabling different modeling approaches to be applied here. ... What Microsoft is really trying to drive this around is a repository for models, whether an SML model or the models developed in Visual Studio.
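As a rough illustration of that development-through-operations thread, the same kind of declarative model can also emit an operations artifact, so development and management draw from a single source. Again, this is a generic sketch under assumed names and fields; it is not SML, CentraSite, or the Oslo repository.

    # Hypothetical sketch of one model feeding both development and
    # operations. Not SML and not Oslo; every field is an assumption.

    import json

    service_model = {
        "name": "OrderService",
        "endpoint": "https://example.com/orders",  # placeholder URL
        "sla_ms": 500,                             # assumed latency target
    }

    def generate_ops_descriptor(model: dict) -> str:
        """Emit a monitoring descriptor that operations tooling could consume."""
        return json.dumps(
            {
                "service": model["name"],
                "monitor": model["endpoint"],
                "alert_if_latency_over_ms": model["sla_ms"],
            },
            indent=2,
        )

    print(generate_ops_descriptor(service_model))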

This smacks of being a very ambitious strategy from Microsoft, which is trying to pull together threads from different elements of the overall IT environment. You've got elements of infrastructure as a service, with things like BizTalk Services, which has been the domain of the large Web platforms. You've got this notion of composite applications and BPM, which is something people like IBM, BEA, Software AG, etc. have been promoting.

Microsoft has got a broad vision. We also mustn't forget that what underpins this is the vision to have an execution framework for models. The models will actually be executed within the .NET Framework in a future iteration. That will be based on the Windows Communication Foundation, which itself sits on top of the WS-* standards ... .

So that ambitious vision is still some way off, as you mentioned -- beta in 2008, production in 2009. Microsoft is going to have to bring its ISVs and systems integrator community along to really turn this from an architecture that's oriented toward Microsoft into something broader.

Clearly, they had to go beyond UML in terms of a modeling language, as you said, because UML doesn't have the constructs to do deployment and management of distributed services and so forth. I understand that. What disturbs me right now about what Microsoft is doing is this: over the last few years, Microsoft has gotten a lot better about working ahead of standards. When they're innovating in advance of any standard, they have done a better job of catalyzing a community of partners to build public specs. ... I'd like to see them do the same thing now in the realm of modeling.


On Green IT and SOA's Impact on Energy Use in IT ...

Green IT was named number one among the top 10 strategic technology areas for 2008 by Gartner. How does SOA impact this?

The whole notion of SOA is based on abstraction, service contracts, and decoupling of the external calling interfaces from the internal implementations of various services. Green smashes through that entire paradigm, because Green is about as concrete as you get.

Inherent in SOA is the whole notion of consolidation -- consolidation of application logic, consolidation of servers, and consolidation of datacenters. In other words, it essentially reduces the physical footprint of the services and applications that we deploy out to the mesh or the fabric.

SOA focuses on maximizing the sharing, reuse, and interoperability of distributed services or resources, application logic, or data across distributed fabrics. When they're designing SOA applications, developers aren't necessarily incentivized, and may not even have the inclination, to think about the ramifications at the physical layer of the services they're designing and deploying -- but Green is all about the physical layer.

In other words, Green is all about how human beings, as a species, make wise use and stewardship of the earth's nonrenewable, irreplaceable resources -- energy supplies, fossil fuels, and so forth. But it's larger than that, obviously. How do we maintain a sustainable culture and existence on this planet in terms of wise use of other material resources, like minerals, soil, and so on?

Over time, if SOA is successful, other centers of development or other deployed instances of code that do similar things will be decommissioned, to enable maximum reuse of the best-of-breed order-processing technology that's out there. As enterprises realize the ROI, the reuse and sharing should naturally lead to greater consolidation at all levels, including in the datacenter. Basically, reducing the footprint of SOA on the physical environment is what consolidation is all about.

Another trend in the market is the SaaS approach, where we might acquire more types of services, perhaps at a granular level or wholesale, from Google, Salesforce, Amazon, or Microsoft, in which case they are running the datacenters. We have to assume, because their economics rest on a subscription basis, that they are going to be highly motivated toward high utilization, high efficiency, a low footprint, and low energy consumption. That will ultimately help the planet as well, because we wouldn't have umpteen datacenters in every single company of more than 150 people.
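As a back-of-the-envelope sketch of why that consolidation argument holds, here is some simple arithmetic. Every figure below is an illustrative assumption, not measured data.

    # Back-of-the-envelope sketch of the consolidation argument.
    # All figures are illustrative assumptions, not measured data.

    companies = 100
    servers_per_company = 20
    watts_per_server = 400
    in_house_utilization = 0.10   # assumed typical in-house utilization
    provider_utilization = 0.60   # assumed shared-provider utilization

    # Total useful work demanded, in "fully utilized server" units.
    useful_work = companies * servers_per_company * in_house_utilization

    # Servers a shared provider needs to deliver the same work.
    provider_servers = useful_work / provider_utilization

    in_house_kw = companies * servers_per_company * watts_per_server / 1000
    provider_kw = provider_servers * watts_per_server / 1000

    print(f"In-house draw: {in_house_kw:.0f} kW")
    print(f"Provider draw: {provider_kw:.0f} kW")
    print(f"Reduction:     {100 * (1 - provider_kw / in_house_kw):.0f}%")

Under those assumed numbers, the shared provider delivers the same useful work with a bit over 80 percent less energy draw, which is exactly the incentive the subscription model creates.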

Maybe we're looking at this the wrong way. Maybe we've got it backwards. Maybe it isn't that SOA, in some way, aids and abets Green activities. Maybe it's Green activities -- as they consolidate, unify, and seek high utilization of servers and storage -- that aid and abet SOA. ... Green initiatives are going to direct companies, in the way they deploy and use technology, toward a situation where they can better avail themselves of SOA principles.

The issue is not so much reducing IT's footprint on the environment. It's reducing our species' overall footprint on the planet's resources. One thing to consider is whether we have more energy-efficient datacenters. Another is that, as more functionality gets pushed out to the periphery in terms of PCs and departmental servers, the vast majority of IT sits completely outside the [enterprise] datacenter.

I'm going to be a cynic and just guess that large, Global 2000 corporations are going to be motivated more by economics than altruism when it comes to the environment. ... As we discussed earlier, the Green approach to IT might actually augment SOA. I don't think SOA leads to Green, but many of the things you do for Green will help people recognize higher value from SOA types of activities.
Listen to the podcast. Or read a full transcript.

Monday, November 12, 2007

IBM scoops up BI leader Cognos in $5B cash bid

The thought on the street was that Cognos had to get bought soon, given the business intelligence (BI) consolidation land-grab of late -- punctuated by Oracle's acquisition of Hyperion and SAP's buy of Business Objects.

So now Big Blue steps up to the plate and, for $5 billion in cash, buys Cognos. This quite large acquisition for IBM quickly adds more BI oomph to the IBM "Information" portfolio, but also, importantly, takes Cognos off the market for anyone else. Other suitors would probably have been Microsoft and perhaps HP. This BI value could have burnished HP's total management drive and complemented the Opsware purchase.

Publicly held Cognos, of Ottawa, Canada, will become part of IBM's Information Management software business and should well augment IBM's aggressive Information on Demand initiatives through new BI and Performance Management capabilities. The Cognos assimilation will be led by Information Management General Manager Ambuj Goyal.

It will be interesting to see how IBM will support all the Cognos partnership deals with many vendors, ISVs, channel players, SIs, and users. For example, Cognos just entered a partnership with Software AG, which competes with IBM on several levels.

Despite the complications of how best to merge the Cognos ecology into the IBM arsenal/universe, the purchase shows how important insight into, and improved management of, business activities has become to global enterprise leadership. IBM has put a premium on ramping up its Information on Demand values through rapid acquisitions and business development.

Just this year, IBM has bought (or is in the process of buying) Watchfire, Telelogic, DataMirror, WebDialogs, and Princeton Softech.

Helping huge and complex corporations get a handle on their data, content, metadata, and digital assets -- as well as refine, consolidate, and automate access to those assets -- forms a needed foundation for IBM's strategies around services-oriented architecture (SOA) and business process management (BPM). Providing end-to-end, top-to-bottom value across the data lifecycle also buttresses IBM's goal of easing the customization, and bolstering the ongoing agility, of business applications and processes, even into granular vertical business niches. And all of these values further empower IBM's professional services offerings and depth.

Indeed, IBM has wasted no time, and spared no expense, in cobbling together perhaps the global leadership position in data management in the most comprehensive sense. IT vendor competition has long centered on entrenchment via platform, development framework, proprietary technologies, and price-performance persuasion. Long-term advantage via best solutions for complete data lifecycle management and mastery has additional relevance in a market where virtualization, SaaS, SOA, and open source are dislodging the old-school vendor lock-in options.

Sunday, November 11, 2007

Software AG and Cognos bring BI and BPM into common orbit

The much-discussed marriage of business intelligence (BI) and business process management (BPM) may be a step closer to the altar with last week's announcement by Software AG that it will embed Cognos 8 BI with the webMethods product suite.

Software AG, which made the announcement at Integration World 2007 in Orlando, Fla., says the strategic partnership and OEM licensing agreement will allow companies to combine BI with BPM and business activity monitoring, providing real-time and historical data on a single dashboard for actionable insight. The new out-of-the-box component will let users:

  • Streamline change management, because requirements and implications of proposed changes will be illustrated before implementation.

  • Accelerate process improvements by drilling down on operational data.

  • Enhance business agility through more rapid implementation of operational changes.

  • Achieve closer alignment with line-of-business objectives by using the same platform for business planning and performance monitoring.

  • Improve accountability through the embedded use of scorecarding and analytics.

Pundits and analysts have been talking about the merger of BI and BPM for a long time, and the talk heated up with TIBCO's acquisition of Spotfire last May. So far, though, all that talk has led to a lot of dating but no commitment.

Peter Kürpick, president and chief product officer for the webMethods division of Software AG, referred to all the talk in making the announcement. "Many talk about delivering an integrated product suite and a seamless user experience, but few actually deliver. The inclusion of best-in-class BI and reporting is one key element. Others include a shared metadata model and lifecycle governance for all assets, real-time monitoring, and process-based collaboration."

Tony Baer at CBR Online sees this as a pre-emptive strike by Software AG in a market where the big players are lining up their BI assets:


"With rivals such as IBM and Oracle also having collected BI assets as part of their greater software platforms, which also include BPM and BAM, Software AG's tie-in with Cognos (for now, the last major independent BI vendor, unless you're counting Information Builders) was an important pre-emptive move."

Current customers can add Cognos BI as a supported feature immediately.

In other news from Integration World, Software AG has opened the door for bringing rich Internet applications (RIAs) to enterprise transaction systems with the introduction of Natural for Ajax, an enhanced version of the company's Natural 2006 application development and deployment environment.

Natural 2006 allows developers to create highly scalable enterprise transactional systems running on either mainframe or open source platforms. Natural for Ajax follows close on the heels of Software AG's release of Natural for Eclipse. Key benefits of RIAs include the streamlined ability to create composite views of applications and data, as well as the availability of more dynamic, high-performance, and interactive reporting.

According to Software AG, Natural for Ajax can be used to create browser-based, rich user interfaces for enterprise applications and mainframe systems that rival the look, feel and performance of the latest Web 2.0 applications. Developers can implement rich-client functionality using a library of more than 50 pre-defined Web graphical user interface (GUI) controls. Other interactive features -- such as “drag and drop,” context menus and advanced grid processing -- can be used within a standard Web browser to streamline development and boost productivity.

Among the other announcements:

  • Software AG will offer and support Layer 7's SecureSpan SOA security and policy enforcement solutions on a global basis. Layer 7 provides gateway software and appliances for securing, scaling, and simplifying production SOAs. The Layer 7 product will also serve as a fully interoperable policy enforcement point (PEP) for services governed by CentraSite, a SOA governance solution developed jointly by Software AG and Fujitsu.

  • The CentraSite Community, which brings together partners who are developing solutions that interoperate with CentraSite, has grown to over 50 members. A standards-based organization, it now includes such members as Progress Software, MID, BAP Solutions, JustSystems, Composite Software, Intalio, IONA, iTKO, Solstice Software, SOA Software, and SymphonySoft.

  • Software AG and Satyam Computer Services Ltd. announced they will expand their global partnership for developing vertical solutions using webMethods. This partnership focuses on industry-specific process frameworks for such key sectors as insurance, manufacturing, and telecom.

Friday, November 9, 2007

Looks like The Gang, rounded up by Steve Gillmor, is back in the saddle

Jason Calacanis is blogging about the latest debut of The Gang, aka Gillmor Group, aka Bad Sinatra, aka Gillmor Gang. The first episode is on Facebook, in four parts. I was happy to be a part of this, nearly a year since the last real Gang recording.

It actually came out quite well, just like in the olden days. And a critical mass of the original gang is on the call: Steve Gillmor, Nick Carr, Mike Arrington, Doc Searls, Robert Anderson, Jason Calacanis, Mike Vizard, and yours truly. Expect more.

At least this first weekly and lively discourse on the really important things in life is not in 18 revolting segmentations, as was the norm in some past iterations. I can only surmise that Steve is out hustling up some underwriters for the podcast. Why else break it up at all?

Anyone care to cut a check on this? Six figures? Jonathan? I'm sure Steve's voice-overs on your introductions will be inspiring. ... ("He'd never be in blogging if it weren't for me!") Actually, I'd probably not be in blogging if not for Steve either. Thanks, pal.

True to his attention-deficit marketing mentality, there is virtually no promotion of the new The Gang. Links are dead after all. It's all about negative gestures, don't ya know. Ya, and I buried Paul, you expert textpert.

I'm very glad to see that Steve is producing this independently. No more Pod.*. And Facebook will make a fascinating viral platform. It's good to experiment. Just open enough. He might even be able to measure the audience; might even be able to define the audience members, might even be able to invite the audience individually. Ah, the good old days of controlled circulation ... much better rates that way. And the list -- My God, he could sell the list! Elitism has its advantages.

And I'm glad it's not video either, leave that to the infomercials. See Gate, et al. Voice is plenty. Just repurpose it on iTunes and monetize on the Facebook picket-fence garden. Screw the rest of 'em.

And so, how do you post "music" to Facebook? Is that an application, or a feature?