Friday, June 13, 2008

OpenSpan to ease client/server modernization by ushering apps from desktop to Web server

Promising lower costs and greater control, OpenSpan, Inc. this week unveiled its OpenSpan Platform Enterprise Edition 4.0, which allows organizations to move both legacy and desktop applications off the desktop and onto the server, where they can be integrated with each other or with rich Internet applications (RIAs) and expressed as Web services.

Key to the new offering is the company's Virtual Broker technology to enable the movement of the applications, allowing companies to rapidly consume Web services within legacy applications or business process automations that span applications. Companies can also expose selective portions of applications over the Web.

According to OpenSpan, of Alpharetta, Ga., the approach lowers costs, because companies have to license fewer copies of software, and gives IT greater control over end-user computing by centralizing application management on the server.

Moving applications off the desktop and onto the server means that companies no longer have to install and expensively maintain discrete copies of each application on every desktop. Users access only the application portion they need. This has the added benefit of reducing desktop complexity.

Yep, there are still plenty of companies and apps out there making their journey from the 1980s to the 1990s -- hey, better late than never. If you have any DOS apps still running, however, I have some land in Florida to sell you.

OpenSpan made a splash last year, when it announced its OpenSpan Studio, which allowed companies to integrate siloed apps. At the time, I explained that process:

How OpenSpan works is that it identifies the objects that interact with the operating system in any program — whether a Windows app, a Web page, a Java application, or a legacy green screen program — exposes those objects and normalizes them, effectively breaking down the walls between applications.

The OpenSpan Studio provides a graphical interface in which users can view programs, interrogate applications and expose the underlying objects. Once the objects are exposed, users can build automations between and among the various programs and apply logic to control the results.

OpenSpan Platform Enterprise Edition 4.0 will be available this year.

Wednesday, June 11, 2008

Live TIBCO panel examines role and impact of service performance management in enterprise SOA deployments

Listen to the podcast. Read a full transcript. Sponsor: TIBCO Software.

Myriad unpredictable demands are being placed on enterprise application services as service-oriented architecture (SOA) grows in use. How will the far-flung deployment infrastructure adapt and how will all the disparate components perform so that complex business services meet their expectations in the real world?

These are the questions put to a live panel of analysts and experts at the recent TIBCO User Conference (TUCON) 2008 in San Francisco. Users such as Allstate Insurance Co. are looking for SOA performance insurance: a way to think through how composite business services will perform and to ensure that these complex services meet or exceed expected service-level agreements.

At the TUCON event, TIBCO unveiled a series of products and services that target service performance management and leverage the insights that complex event processing (CEP) provides. To help understand how complex event processing and service performance management find common ground -- to help provide a new level of insurance against failure for SOA and for enterprise IT architects -- we asked the experts.

Listen to the podcast, recorded live at TUCON 2008, with Joe McKendrick, an independent analyst and SOA blogger; Sandy Rogers, the program director for SOA, Web services and integration at IDC; Anthony Abbattista, the vice president of enterprise technology strategy and planning for Allstate Insurance Co.; and Rourke McNamara, director of product marketing for TIBCO Software. I produced and moderated.

Here are some excerpts:
We are describing what could be thought of as insurance. You’ve already gone on the journey of SOA. It’s like going on a plane ride. Are you going to spend the extra few dollars and get insurance? And wouldn't you want to do that before you get into the plane, rather than afterward? Is that how you look at this? Is service performance management insurance for SOA?

It’s interesting to think of [SOA service performance management] as insurance. I think it’s a necessary operational device, for lack of better words. ... I don’t think it’s an option, because what will hurt if you fall down has been proven over and over again. As the guy who has to run an SOA now -- it’s not an option not to do it.

I actually do look at service performance management as insurance -- but along the lines of medical insurance. Anthony said people fall down and people get hurt. You want to have medical insurance. It shouldn't be something you consider optional.

It’s something that you need to have, and something that people should look at from the beginning when they go on this SOA journey. But it is insurance, Dana. That’s exactly what it does. It prevents you from running into problems. You could theoretically go down this SOA path, build out your services, deploy them, and just get lucky. Nothing will ever happen. But how many go through life without ever needing to see a doctor?

What we are seeing is that, as services are exposed externally to customers, partners, and other systems, it affects the ability to fail-over, to have redundant services deployed out, to be able to track the trends, and be able to plan, going forward, what needs to be supported in the infrastructure, and to even go back to issues of funding. How are you going to prove what's being used by whom to understand what's happening?

So, first, yes, it is visibility. But, from there, it has to be about receiving the information as it is happening, and being able to adjust the behavior of the services and the behavior of the infrastructure that is supporting them. It starts to become very important. There are levels of importance and criticality with the different services and the infrastructure that’s supporting them right now.

But, the way that we want to move to being able to deploy anywhere and leverage virtualization technologies is to break away from the static configuration of the hardware, to the databases, to where all this is being stored now, and to have more of that dynamic resourcing. To leverage services that are deployed external to an organization you need to have more real-time communication.

With something like a Tivoli or a BMC solution, something like a business service management technology, your operational administrators are monitoring your infrastructure.

They are monitoring the application at the application layer and they understand, based on those things, when something is wrong. The problem is that’s the wrong level of granularity to automatically fix problems. And it’s the wrong level of granularity to know where to point that finger, to know whom to call to resolve the problem.

It’s right, if what's wrong is a piece of your infrastructure or an entire application. But if it’s a service that’s causing the problem, you need to understand which service -- and those products and that sort of technology won’t do that for you. So, the level of granularity required is at the service level. That’s really where you need to look.

A lot of the initiatives around ITIL Version 3.0 are starting to get some of those teams thinking in terms of how to associate the business requirements for how services are being supported by the infrastructure, and how they are supported by the utility of the team itself. But, we're a long way away from having everything all lined up, and then having it automatically amend itself. People are very nervous about relinquishing control to an automated system.

So, it is going to be step-by-step, and the first step is getting that familiarity, getting those integrations starting to happen and then starting to let loose.

We're dealing with organizations like Allstate, which have massive size and scale, with 750 services. What do people need to be considering, as we move into yet more complexity with virtualization, cloud computing, and utility grids?

You need to make sure that, as you move from the older ways of doing things -- from the siloed applications, the siloed business unit way of doing things -- to the SOA, services-based way of doing things, you don’t ignore the new complexities you are introducing.

Don’t ignore the new problems that you are introducing. Have a strategy in place to mitigate those issues. Make sure you address that, so that you really do get the advantage, the benefits of SOA.

What I mean by that is with SOA you are reusing services. You are making services available, so that that functionality, that code, doesn’t need to be rewritten time and time again. In doing so you reduce the amount of work, you reduce the cost of building new applications, of building new functionality for your business organization.

You increase agility, because you have reduced the amount of time it takes to build new functionality for your business organization. But, in so doing, you have taken what was one large application, or three large applications, and you have broken them down into dozens or tens of separate smaller units that all need to intercommunicate, play nice with each other, and talk the same language.

Even once you have that in production, you now have a greater possibility for finger-pointing, because, if the business functionality goes down, you can’t say that that application that we just put on is down.

The big question now is what part of that application is down? Whose service is it? Your service, or someone else’s service? Is it the actual servers that support that? Is it the infrastructure that supports that? If you are using virtualization technology, is it the hardware that’s down, or is it the virtualization layer? Is it the software that runs on top of that?

You have this added complexity, and you need to make sure that doesn’t prevent you from seeing the real benefit of doing SOA.
Listen to the podcast. Read a full transcript. Sponsor: TIBCO Software.

Monday, June 9, 2008

Serena's Mashup Composer ushers content and widgets to on-demand business mashups

Acting as a mashup matchmaker, Serena Software is bringing together content -- widgets, RSS feeds, and Flash components -- with enterprise data for on-demand business mashups, giving non-technical users access to powerful customized applications without burdening IT departments.

On Tuesday, June 10, Serena will announce the upcoming major iteration of the Redwood City, Calif. company's Mashup Composer service, which allows users to drag and drop a wide variety of consumer information and combine it with data from internal applications -- such as salesforce.com, Siebel, and Oracle -- to create rich Internet mashups (RIMs).

Users will be able to leverage any kind of widget or rich Internet application, including Adobe Flash, Amazon search, Flickr, Microsoft Silverlight, RSS feeds, YouTube, any of the 30,000 Google gadgets, LinkedIn or Facebook profiles, or external newsfeeds. That's a lot of stuff, and there will soon be even more, especially the fruits of the fast-charging social networking space.

Serena explains how this works:

"Imagine a scenario where a sales rep is preparing for a big meeting with a new customer. The rep might start with the customer’s record in salesforce.com, and have the mashup fetch related information like a photo and details from the customer’s LinkedIn or Facebook profile, external news feeds showing the company’s latest stock price, credit report information from a Dun & Bradstreet Web service, and widgets showing local weather and traffic in the customer’s location. Soon the rep has all the information needed for the meeting. It’s as easy as personalizing a Yahoo! home page."

While some IT folks may worry about putting this functionality in the hands of non-technical people, Serena says it has that worry covered, claiming to provide a "proven governance framework that provides the reliability, security, and compliance that IT requires."

I wrote about this issue last August when I blogged on Serena and what was then its upcoming "Project Vail:"

"The trick is how to allow non-developers to mashup business services and processes, but also make such activities ultimately okay with IT. Can there be a rogue services development and deployment ecology inside enterprises that IT can live with? How can we ignite 'innovation without permission' but not burn the house down?

"Serena believes they can define and maintain such balances, and offer business process mashups via purely visual tools either on-premises or in the cloud."

The new functionality in the Mashup Composer will be available free of charge as part of Serena's on-demand release in the third quarter. Word has it that pricing will follow the cloud model, based on infrastructure use over time.

The Serena model augurs well for my earlier comments on the power of and need for WOA. Again, I'm not locked into the WOA nomenclature, but the goal of spurring on SOA use and methods by energizing users with Web content remains.

Serena describes its Mashup Composer process as one that enables "business mashups." I like the imagery that connotes. I'd take it a step further and join it with my WOA value comments, such that business mashups become a catalyst to broader SOA use and adoption while also extending SOA value into the managed cloud.

Consider the power of combining and leveraging the best of SOA, the best of on-demand business mashups, and the powerful insights on users and their communities as defined by the social graph information now available from the social networks.

Effectively bringing together business assets, open web content and defined social relations will offer something quite new and very productive over the next few years. Those companies that jump on this early and master it will develop a broad advantage.

Thursday, June 5, 2008

Apache CXF: What the future holds for Web services frameworks and dynamic languages

Listen to the podcast. Read a full transcript. Sponsor: IONA Technologies.

More open-source server components and frameworks continue to emerge from developer communities. One of the latest, Apache CXF, an open-source Web services framework, recently graduated from incubation to become a full Apache Software Foundation project.

The progeny of the earlier merger of the ObjectWeb-managed Celtix project and the XFire project at Codehaus, CXF joins a growing pool of Apache and other open-source projects supporting service-oriented architecture (SOA) infrastructure. Many, like CXF, also enjoy commercial support and associated commercial products, such as IONA Technologies' FUSE.

CXF is on the cusp of broadening beyond conventional Web services, however, as users seek to align the framework with JavaScript, and perhaps more dynamic programming languages, such as Groovy and Ruby. Interoperability is the goal, with both backward and forward messaging compatibility, with an expanding set of technologies supported. Community-based open source development is adept at adding such breadth and depth to the benefit of all users, and CXF is no exception.

To learn more about CXF and the direction for SOA, middleware and open source development, I recently spoke with Dan Kulp, a principal engineer at IONA who has been deeply involved with CXF; Raven Zachary, the open-source research director at The 451 Group; and Benson Margulies, the CTO of Basis Technology.

Here are some excerpts:
If you are doing any type of SOA stuff, you really need some sort of Web-service stack. There are applications written for ServiceMix and JBI that don't do any type of SOAP calls or anything like that, but those are becoming fewer and farther between. Part of what our Web services bring is the ability to go outside of your little container and talk to other services that are available, or even within your company or maybe with a business partner or something like that.

The whole Apache model is mix and match, when you are talking about not only a licensing scheme. The Apache license is a little easier for commercial vendors to digest, modify, and add in, compared to the GPL, but also I think it's the inherent nature of the underlying infrastructure technologies.

When you deploy an application, especially using open source, it tends to be several dozen distinct components that are being deployed. This is especially true in Java apps, where you have a lot of components or frameworks that are bundled into an application. So, you would certainly see CXF being deployed alongside of other technologies to make that work. Things like ServiceMix or Camel, as you mentioned, ActiveMQ, Tomcat, certainly Apache Web Server -- these sorts of technologies are the instruments through which these services are exposed.

A lot of these projects, like Camel and ServiceMix, require some sort of Web-services stack, and they've basically come to CXF as a very easy-to-use and very embeddable stack that they are using to meet their Web-services needs. ... One of CXF's advantages shows when what you want to do is deliver to some third party a stack that they put up, containing your stuff, that interacts with all of their existing stuff in a nice lightweight fashion. CXF is non-intrusive in that regard.

CXF gets a lot of attention because it is a full open-source framework, which is completely committed to standards. It gives easy-to-use, relatively speaking, support for them and, as in many other areas, focuses on what the people in the outside world seem to want to use the kit for -- as opposed to some particular theoretical idea ... about what to use it for.

Apache CXF takes a fairly different approach of making the code-first aspect primary. ... So, a lot of these more junior-level developers can pick up and start working with Web services very quickly and very easily, without having to learn a lot of these more technical details.
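The code-first style Kulp describes here is the standard JAX-WS approach that CXF supports: annotate a plain Java class and let the framework derive the WSDL contract from it. Below is a minimal sketch using only the standard javax.jws annotations (bundled with Java SE 6 through 10); the class, method, and logic are hypothetical, for illustration only.

```java
import javax.jws.WebMethod;
import javax.jws.WebService;

// Code-first JAX-WS: the annotations mark this plain class as a Web
// service, and a framework such as CXF generates the WSDL from it --
// no hand-written contract or deployment descriptor required.
@WebService
public class QuoteService {
    @WebMethod
    public double quote(String symbol) {
        // Stand-in business logic for the sketch.
        return symbol.length() * 10.0;
    }
}
```

Published with CXF or the JDK's built-in JAX-WS runtime, the generated contract appears at the endpoint's ?wsdl address; the developer never edits WSDL by hand, which is what makes this approachable for the junior developers mentioned above.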

It's actually kind of fascinating, and one of the neatest things about working in an open-source project is seeing where it pops up. ... One of the examples of that is Groovy Web service. Groovy is another dynamic language built in Java that allows you to do dynamic things. I'm not a big Groovy user, but they actually had some requirements to be able to use Groovy to talk to some Web services, and they immediately started working with CXF.

They liked what they saw, and they hit a few bugs, which was expected, but they contributed back to the CXF community. I kept getting bug reports from people, but was wondering what they were doing. It turns out that Groovy's Web-services stack is now based on CXF. That type of thing is very fascinating from my standpoint, just to see that type of stuff develop.

Apache CXF 2.1 was released about a week after we graduated, and it brought forth a whole bunch of new functionality. The JavaScript support was one of them. Whole new tooling was another thing, also a CORBA binding, and there is a whole bunch of new stuff, some REST-based APIs. So, 2.1 was a major step forward.

Now that we're graduated, there are a lot more people looking at it, which is good. We're getting a lot more input from users. There are a lot of people submitting other ideas. So, there is a certain track of people just trying to get some bug fixes in and getting some support in place for those other people.

I like the fact that in CXF they are looking at a variety of protocols. It's not just one implementation of Web services. There's SOAP, REST, CORBA, other technologies, and then a number of transports, not just HTTP. The fact is that when you talk to enterprises, there's not a one-size-fits-all implementation for Web services. You need to really look at services, exposing them through a variety of technologies.

I like that approach. It really matches the needs of a larger variety of enterprise organizations, rather than just a specific technology implementation of Web services. I mean, that's the approach that you're going to see from open-source projects in the space. The ones that provide the greatest diversity of protocols and transports are going to do quite well.

There's a whole bunch of other ideas that we're working on and fleshing out. The code-first stuff that I mentioned earlier -- we have a bunch of other ideas about how to make code-first even better.

There are certain tool kits that you kind of have to delve down into either configuration or WSDL documents to accomplish what you want. It would be nice if you could just embed some annotations on your code, or something like that, to accomplish some of that stuff. We're going to be moving some of those ideas forward.

There's also a whole bunch of Web-services standards such as WS-I and WS-SecureConversation that we don't support today, but we are going to be working on to make sure that they are supported.

When you look at growth opportunities, back in 2001, the JBoss app server had a single-digit market share, compared to the leading technologies at the time, WebSphere from IBM and WebLogic from BEA. In the course of four years, that technology went from single-digit market share to being the number-one deployed Java app server in the market. I think it doesn't take much time for a technology like CXF to capture the market opportunity.

So, watch this space. I think this technology and other technologies like it, have a very bright future.
Listen to the podcast. Read a full transcript. Sponsor: IONA Technologies.

Wednesday, June 4, 2008

JustSystems moves dynamic document management deeper into enterprise

Structured authoring -- it's not just for technical documents any more. JustSystems today announced XMetaL for Enterprise Content Management (ECM), which integrates with more than 20 commercial repositories and file systems.

This new offering provides seamless integration to all leading content management systems, including repositories from IBM FileNet, EMC Documentum, OpenText, Interwoven, and Microsoft. [Disclosure: JustSystems is a sponsor of BriefingsDirect podcasts.]

JustSystems has also announced an original equipment manufacturer (OEM) agreement with IBM, under which the company will embed and resell IBM WebSphere Information Integrator Content Edition (IICE) with the new XMetaL product. This is designed to allow companies to broaden XMetaL deployments and to leverage repositories they're currently using to store and manage content.

According to JustSystems, XMetaL for ECM will allow companies to start using structured authoring, no matter which repositories are already in place. Companies will also be able to deploy it across departments without disrupting current content management, as well as integrate and automate content creation and publishing across repositories.

Structured documents can be a valuable ally of service-oriented architecture (SOA) by providing data to workers in the document formats to which they are accustomed, and, at the same time, allowing them to focus on authoritative data and content, while eliminating the drudgery of validating and reconciling documents.

I recently wrote a white paper on the role of structured authoring, dynamic documents, and their connection to SOA. Read the whole paper here.

Back in April, I recorded a podcast with Jake Sorofman, senior vice president of marketing and business development for JustSystems North America. The sponsored podcast described the tactical benefits of recognizing the dynamic nature of documents, while identifying the strategic value of exposing documents and making them accessible through applications and composite services via SOA.

In the podcast, Sorofman explained the value of structured authoring in the enterprise:

"There are really a couple of different issues at work here. The first is that the complexity of a document makes it very difficult to keep it up to date. It’s drawing from many different sources of record, both structured and unstructured, and the problem is that when one of the data elements changes, the whole document needs to be republished. You simply can’t keep it up-to-date.

"This notion of dynamic documents ensures that what you’re presenting is always an authoritative reflection of the latest version of the truth within the enterprise. You never run the risk of introducing inaccurate, out-of-date, or stale information to field-based personnel."

You can listen to the podcast here or read a full transcript here.

Tuesday, June 3, 2008

Spike in enterprise 'events' spurs debut of Event Processing Technical Society

The recent growth -- and expected spike -- in business event data in enterprises has led a group of IT industry leaders to form the Event Processing Technical Society (EPTS), designed to encourage adoption and effective use of event processing methods and technology in applications.

Among the founding members are such heavy hitters as IBM, Oracle, TIBCO Software, Inc., Gartner Research, Coral8 Inc., Progress Software, and StreamBase.

Event processing pioneer Dr. David Luckham, a founding member of EPTS, explained in a press release:

“We've had decades of development of event processing technology for simulation systems, networking, and operations management. Now, the explosion in the amount of business event data being generated in modern enterprises demands a new event processing technology foundation for business intelligence and enterprise management applications.”

EPTS has five initial goals:
  • Document usage scenarios where event processing brings business benefit

  • Develop a common event-processing glossary for its members and the community-at-large to use when dealing with event processing

  • Accelerate the development and dissemination of best practices for event processing

  • Encourage academic research to help establish event processing as a research discipline and encourage the funding of applied research

  • Work with existing standards development organizations such as Object Management Group (OMG), OASIS and W3C to assist in developing standards in the areas of: event formats, event processing interoperability, event processing (meta) modeling and (meta) languages.
EPTS, which does not plan to develop standards itself, has already begun work on an initial draft of the proposed glossary. A use-case work group is generating templates around documentation and presentation of the use cases.
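To make the concept concrete, event-processing engines watch streams of events for patterns -- for example, "more than N events from one source inside a sliding time window." Here is a minimal, self-contained sketch of that idea in plain Java; it illustrates the technique only, and the class and method names are hypothetical, not drawn from any EPTS member's product.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A toy event-processing pattern detector: fire when more than
// `threshold` events arrive within a sliding window of `windowMillis`.
public class BurstDetector {
    private final long windowMillis;
    private final int threshold;
    private final Deque<Long> recent = new ArrayDeque<>();

    public BurstDetector(long windowMillis, int threshold) {
        this.windowMillis = windowMillis;
        this.threshold = threshold;
    }

    // Feed one event timestamp; returns true when the pattern fires.
    public boolean onEvent(long timestampMillis) {
        recent.addLast(timestampMillis);
        // Evict events that have slid out of the window.
        while (!recent.isEmpty()
                && timestampMillis - recent.peekFirst() > windowMillis) {
            recent.removeFirst();
        }
        return recent.size() > threshold;
    }
}
```

A real CEP engine generalizes this idea to declarative rules over many event types, correlations across streams, and far higher throughput -- which is exactly the infrastructure burden on architects that the glossary and use-case work aims to tame.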

Event processing was a hot topic at the recent TIBCO user conference, TUCON. (Disclosure: TIBCO is a sponsor of BriefingsDirect podcasts.)

Fellow ZDNet blogger Joe McKendrick has some thoughts on event processing, too.

The new consortium plans three additional work groups. The first will focus on developing information on event processing architecture. Another will identify requirements for the interoperability among event processing applications and platforms. The third will collaborate with the academic community to develop courses in this area.

The advance in the scale and complexity of streams of events will place a greater burden on infrastructure and architects. But the ability to manage and harvest analysis from these events could be extremely powerful, and provide a lasting differentiator for expert practitioners.

While the processing of such events has its roots in financial companies and transactions, the engine for dealing with such throughputs and variable paths will find uses in many places. The vaulting commerce expected as always-on mobile Web, GPS location and social graph data collide is a prime example.

We hit on these types of transactions as the progeny of online advertising in a recent BriefingsDirect Analyst Insights roundtable podcast.

Consumers and end users should begin to enjoy what they may well perceive as "intelligent" services -- based on the fruits of complex event processing -- from their devices and providers. Harvesting and using more data from sensors and device meshes will also demand the scale that event processing delivers.

We should also chalk this up to yet another facet of the growing definition of cloud computing, as event processing as a service within a larger set of cloud-based services will also build out in the coming years. The whole trend of event processing bears close monitoring.

EPTS will hold its next meeting Sept. 17-19 in Stamford, Conn. More information on the consortium can be found at the EPTS Web site.

Friday, May 23, 2008

Microsoft opens philosophical can of worms with Live Search Cashback

Talk is bubbling up across the blogosphere, Gillmor Gang and Techmeme daily about social graph personal information. This may be among the most important discussions and topics of our time. How the "social mesh" works out now will affect our lives and businesses for a long time. It may even impact how we define what "me" is online. We really need to get it right, ASAP.

Yet much of the talk focuses on technology, privacy, use rights and still loosely defined standard approaches to protecting user control over data. It's still murky how online social network services will own and control the user- and relationship-defining data inside their social networks, including Twitter. But there's a larger set of issues that has to do with how we want technology and the Internet to affect us as people, as businesses, as a society, as a market of markets, and as a species.

UPDATE: Many of these issues came up, especially toward the end, of Friday's Gillmor Gang with Google Director of Engineering David Glazer. One takeaway is that, ironically, Microsoft should be among Google Friend Connect's best friends.

The discussion on social graph data portability gets to a philosophical level quickly, because the ways we have codified our personal relationships to each other -- and to larger organizations or power centers -- over eons do not necessarily apply adequately to the new virtual boundaries. It's hard to know on the Web what defines the rights of the individual, the family, tribe, community, company, village, town, state, nation, civilization, race, or species. Do accepted and proven cultural patterns offline fully translate into social patterns online?

The older established "contracts" -- from Codex Hammurabi to Magna Carta to Mayflower Compact to U.S. Constitution to the User Terms of Agreement -- do not seem to get the job fully done anymore. It's not clear what I am entitled to online, whereas I'm pretty sure I know what I'm entitled to offline, and I know what to do to enforce getting what I'm entitled to offline legally, ethically and politically.

In essence, we as online users and small businesses don't have any social-order contracts with the online providers, other than what their lawyers put in the small print when you "accept" their free or paid services. And, of course, they have made available their privacy policies for all to see. So there. Click away, users galore, while they store away the user data and relationships analytics.

As a person, you only retain the right not to click (as long as you pay throughout the two-year user subscription agreement, or suffer the penalty charge for leaving). If you're lucky you'll be able to take your phone number with you if you walk, but not necessarily your email address, your contacts, or your social interaction definitions. Most of the data about whatever you did while nestled in the rosy social bosom of their servers remains with them unless they volunteer to let it be open. So far.

Without belaboring the implications on the metaphysical scale, my point is that our online social interactions, as currently defined and controlled, place us in uncharted territory. And as with any social contract, the implicit and explicit ramifications of where we find ourselves later on need to be taken very seriously.

We'll want the ability to back out, if the unforeseen future warrants it, without too much pain, with our open data intact. We should all want escape clauses for what we do online the next several years, just to be safe. Who you gonna call if it's not fair?

If things don't go well for the user or individual business, what could be done? Because this is about the Web, there isn't a government to lobby, a religious doctrine to fall back on, a meta data justice code of conduct, nor an established global authority to take directives from. The older forms of social contract enforcement don't have a clue. There is only the User Terms of Agreement, the codex of our time. Read it and weep.

Because this is about the Web, the early adopters basically make it up as they go and hope for the best. It's been a great ride. The service providers try and keep up with the fast-changing use patterns, and then figure out a business model that has legs. They write up more User Terms of Agreement. Startups get funded based on their ability to get some skin in the game, even without a business model. They show the investors the User Terms of Agreement, and get their rounds. More work goes into the User Agreements than into the infrastructure to keep the thing working once the clicks come.

This laissez-faire attitude has worked pretty darn well for building out the Web as an industry, thankfully. But now we're talking about more than building out the no-holds-barred Web, we're talking about social contracts ... We're talking about what the user possesses from their role in building out the Web, in populating the social networks, in authoring the blogosphere. Is there any social collective ownership or rights by the participants in the Web? Or is it only really -- in the final analysis -- owned by those who control the means of production of the services?

There's the Web, and there's the blogosphere -- are they the same? What rights does the individual, the person, the blog entity have on the commercial Web? Does the offline me possess the same social powers online? I really don't know.

What's clear is that people like Mike Arrington, Marc Canter, Steve Gillmor, Robert Scoble and Dave Winer (among many others) want as much freedom in what they do online as Western Civilization has endowed on them and their ancestors offline. Some circles, and some of these people, want even more social power online than has been the norm offline. More power to them.

There is a power clash a brewin'. The U.S. has long struggled over states' rights versus federal rights. The individual has looked to both -- and pitted them against each other -- to define and protect individual rights.

But what about online? When push comes to shove, how do individual rights assert themselves against what the service provider can perfectly legally assert? If the server farm says it owns your online address book, it probably does legally (see the User Terms). If it says it owns the meta data from your click stream on its servers over the past three years, it probably does.

So far, user rights have been strictly voluntary on the part of the providers. Some are built into agreements. The rising tide of online adoption and the essential need to generate traffic and clicks have protected users, to a point. Let's hope it continues. I hope voluntary is enough.

Folks, you should recognize that you already have a lot of power, given that social networks are falling all over themselves to show how "open" they are. They fear that you can and will bolt, even if you lose some data (the first time). Data portability is recognized by the Googles and Microsofts as hugely important; shouldn't it be huge to all of us, too?

Because as we move to always-on social interactions across all we do on the Web, what we do socially online may begin to outweigh what we do socially offline. For some of us this is already true. The line between what we do online and offline is blurred, and I believe it will grow more so until any difference becomes irrelevant.

I am social, therefore I am social. It will not matter how or where. Yet online, the fabric of control over my social universe is more under the influence of the User Terms of Agreement than anything else. Will I lose any part at all of the personal freedoms won by my ancestors when I move my social activities online?

What defines any person by what they do online -- is it a business agreement based on User Terms of Agreement, or something more, defined by centuries-old social contracts and mores? Does freedom trump user agreements?

When would a concept like human freedom trump any user agreement, even if it is well documented in Delaware courts? Am I free to take my social graph data, that which defines me as me, with me anywhere online because it's an inalienable right? If so, I should not need any OpenSocial standards. It's self-frickin-evident! I should not need it in the User Terms of Agreement because it's long established as precedent.

But here's the rub that came to the surface this week when Microsoft crossed the Rubicon in the Web world with Live Search Cashback.

If users can and will assert that their social graph information is theirs by virtue of their culturally endowed freedom as humans, then what about their "commerce graph?" Who you are by what you buy is not much different from who you are by whom you associate with. Is commerce social, or is being social commerce?

My social graph contains my personal meta data and my index of contacts, their context to me, and what actually defines me as a social creature. My commerce graph exists too; it's on Amazon, Walmart.com, and dozens of other vendors that know me by how I shop, learn, peruse, compare and perhaps buy. If I search as part of the shopping process, then my commerce graph is on Google, Yahoo! and Microsoft (mostly on Google). I do commerce through my social activities, and I may want a social network with those I buy from and sell to.

All of this user intention and activity information is related and should not be separated. I should be able to mix and match my data regardless of the server. I reached those servers through my own device and browser; I made those clicks and punched those keys on my machine before they showed up on someone else's. I own my actions as a free human.

Microsoft is now finding ways to build out a business model via Live Search Cashback (with more to come no doubt) that takes your commerce graph and in essence, sells or barters it to the sellers of goods and services. I'm not saying this is in any way bad, or unproductive. It seems a logical outcome of all that has preceded it online. I expect others to follow suit.

But it does have me wondering. Who owns my commerce graph? Isn't it connected to my social graph? And if Microsoft can make money off of it, why can't I? Can I only make money off of my commerce graph when I use a certain provider's services and only through its partners? If so, then it's not really my commerce graph. I'm only as free as the User Terms of Agreement say.

If my social graph is mine, and I can move and use it freely, then I surely will want the same to be true for my commerce graph (or any other user pattern graph). This is an essential unalienable right, but I think I want it in writing.

So, please, in order for any of us progeny of Western Civilization to use any of these burgeoning online services, can we have all of this freedom business spelled out clearly in the User Terms of Agreement?

Let's make it the first line item for all online agreements from now on: "Dear User, You are a human and you are free and so that also pertains to everything you do on our Web sites and services."

Until we have technical standards or neutral agencies to route and manage control over our own usage data, we should all insist on better User Terms of Agreement, ones that spell out the obvious. We are free, our data is ours, and we should be able to control it.

Wednesday, May 21, 2008

ZoomInfo spins off 'bizographic' platform for controlled circulation online advertising play

Business information provider ZoomInfo has spun off its advertising business units in a new company, Bizo, offering a targeted B2B advertising platform, or what it calls "bizographic" advertising.

Privately held and venture-backed ZoomInfo, Waltham, Mass., announced a new set of business segments last fall, but has now taken the additional step of spinning the unit out. Former general manager and senior vice president Russell Glass will serve as CEO of the new company, which is expected to launch later this year. [Disclosure: ZoomInfo has been a sponsor of some BriefingsDirect B2B podcasts and videocasts that I have produced.]

Bizographic advertising, as ZoomInfo explains it, provides highly targeted demographic and behavioral advertising, allowing marketers to target their online advertising based on the audience of a site instead of the content.

For example, if a company wants to reach technology decision makers for an IT product offering or high-income individuals for a platinum credit card offer, it could use bizographic advertising to target directors of IT or CEOs respectively.

The field has heated up recently as CBS intends to acquire CNET (parent company of this blog's host, ZDNet) and its BNET division, which also slices and dices audiences by work and functional definitions for the benefit of advertising targeting. Could Bizo also be on the block?

According to ZoomInfo officials, Bizo will continue to leverage the company’s understanding of business people and companies to allow marketers to target business users based on thousands of segmenting possibilities, including combinations of title, company, industry, functional area, company size, education, location, etc. The company expects over 20 million targetable business users in its network when it launches.

Bryan Burdick, ZoomInfo's president, explained the move:

"While B2B advertising is complementary to ZoomInfo’s business, the market has been starved for the ability to target business professionals online. Creating a new business in order to meet that need was an ideal solution for us."

I gave my readers a heads-up on what I called "controlled circulation advertising" last December, referring specifically to ZoomInfo:

ZoomInfo is but scratching the surface of what can be an auspicious third (but robust) leg on the B2B web knowledge access stool. By satisfying both seekers and providers of B2B information on business needs, ZoomInfo can generate web page real estate that is sold at the high premiums we used to see in the magazine controlled circulation days. Occupation-based searches for goods, information, insights and ongoing buying activities are creating the new B2B controlled circulation model.

ZoomInfo, a business information search engine, finds information about industries, companies, people, products and services. The company’s semantic search engine continually crawls millions of company Websites, news feeds and other online sources to identify company and people information, which is then organized into profiles.

ZoomInfo currently has profiles on nearly 40 million people and over 4 million companies, and its search engine adds more than 20,000 new profiles every day.