
Wednesday, December 14, 2016

WWT took an enterprise Tower of Babel and delivered comprehensive intelligent search

The next BriefingsDirect Voice of the Customer digital transformation case study highlights how World Wide Technology, known as WWT, in St. Louis, found itself with a very serious yet somehow very common problem -- users simply couldn’t find relevant company content.

We'll explore how WWT reached deep into its applications, data, and content to rapidly and efficiently create a powerful Google-like, pan-enterprise search capability. Not only does it search better and empower users, the powerful internal index sets the stage for expanded capabilities using advanced analytics to engender a more productive and proactive digital business culture.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to describe how WWT took an enterprise Tower of Babel and delivered cross-applications intelligent search are James Nippert, Enterprise Search Project Manager, and Susan Crincoli, Manager of Enterprise Content, both at World Wide Technology in St. Louis. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: It seems pretty evident that the better search you have in an organization, the better people are going to find what they need as they need it. What holds companies back from delivering results like people are used to getting on the web?

Nippert:  It’s the way things have always been. You just had to drill down from the top level. You go to your Exchange, your email, and start there. Did you save a file here? "No, I think I saved it on my SharePoint site," and so you try to find it there, or maybe it was in a file directory.

Those are the steps that people have been used to, because it's how they've been doing it their entire lives, and it's the nature of the beast as we bring more and more enterprise applications into the fold. You have enterprises with 100 or 200 applications, and each of those has its own unique data silos. So, users have to try to juggle all of these different content sources where stuff could be saved. They're just used to having to dig through each one of those to try to find whatever they're looking for.

Gardner: And we’ve all become accustomed to instant gratification. If we want something, we want it right away. So, if you have to tag something, or you have to jump through some hoops, it doesn’t seem to be part of what people want. Susan, are there any other behavioral parts of this?

Find the world

Crincoli: We, as consumers, have gotten used to Google-like searching. We want to go to one place and find the world. In the information age, we want to go to one place and be able to find whatever it is we're looking for. That expectation easily transfers into business problems. As we store data in myriad places, the business user wants the same kind of interface.

Gardner: Certain tools that can only look at a certain format, or can only deal with certain tags or a taxonomy, only go so far, but we want to be comprehensive. We don't want to leave any potentially powerful crumbs out there not brought to bear on a problem. What's been the challenge when it comes to getting at all the data, structured and unstructured, in various formats?

Nippert: Traditional search tools are built off of document metadata. It’s those tags that go along with records, whether it’s the user who uploaded it, the title, or the date it was uploaded. Companies have tried for a long time to get users to tag with additional metadata that will make documents easier to search for. Maybe it’s by department, so you can look for everything in the HR Department.

At the same time, users don't want to spend half an hour tagging a document; they just want to load it and move on with their day. Take pictures, for example. Most enterprises have hundreds of thousands of pictures stored, but they're all named whatever number the camera gave them, such as DC0001. If you have 1,000 pictures named that way, you can't have a successful search, because no search engine will be able to tell just by that title -- and nothing else -- what the user wants to find.
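To make the metadata gap concrete, here is a minimal sketch, in Python, of the difference between metadata-only and full-text indexing. The toy documents and the tiny inverted index are invented for illustration; this is not how IDOL is implemented.

from collections import defaultdict

def build_index(docs, fields):
    index = defaultdict(set)                      # term -> ids of docs containing it
    for doc_id, doc in docs.items():
        for field in fields:
            for term in doc.get(field, "").lower().split():
                index[term].add(doc_id)
    return index

docs = {
    1: {"title": "DC0001", "body": "Q3 sales kickoff at the St Louis office"},
    2: {"title": "DC0002", "body": "customer briefing center tour"},
}

meta_only = build_index(docs, ["title"])          # what metadata-driven search sees
full_text = build_index(docs, ["title", "body"])  # what content-aware search sees

print(meta_only.get("kickoff"))   # None: the camera's title carries no meaning
print(full_text.get("kickoff"))   # {1}: indexing the content itself finds the event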

Gardner: So, we have a situation where the need is large and the paybacks could be large, but the task and the challenge are daunting. Tell us about your journey. What did you do in order to find a solution?

Nippert: We originally recognized a problem with our on-premises Microsoft SharePoint environment. We were using an older version of SharePoint that was running mostly on metadata, and our users weren't uploading any metadata along with their intranet content.

We originally set out to solve that issue, but then, as we began interviewing business users, we understood very quickly that this is an enterprise-scale problem. Scaling out even further, we found out it’s been reported that as much as 10 percent of staffing costs can be lost directly to employees not being able to find what they're looking for. Your average employee can spend over an entire work week per year searching for information or documentation that they need to get their job done.

So it's a very real problem. WWT noticed it over the last couple of years, but as the velocity and volume of data increase, it's only going to become more apparent. With that in mind, we set out to start an RFI process with all the enterprise search leaders. We used the Gartner Magic Quadrants and started talks with all of the Magic Quadrant leaders. Then, through a down-selection process, we eventually landed on HPE.

We have a wonderful strategic partnership with them. We went with the HPE IDOL tool, which has been one of the leaders in enterprise search, as well as big-data analytics, for well over a decade now, because it's a very extensible platform, something that you can really scale out, customize, and build on top of. It doesn't just do one thing.
Gardner: And it's one solution to let people find what they're looking for, but when you're comprehensive and you can get at all kinds of data in all sorts of apps, silos, and nooks and crannies, you can deliver results that the searching party didn't even know were there. The results can be perhaps more powerful than they were originally expecting.

Susan, any thoughts about a culture, a digital transformation benefit, when you can provide that democratization of search capability, but maybe extended into almost analytics or some larger big-data type of benefit?

Multiple departments

Crincoli: We're working across multiple departments and we have a lot of different internal customers that we need to serve. We have a sales team, business development practices, and professional services. We have all these different departments that are searching for different things to help them satisfy our customers’ needs.

With HPE being a partner, where their customers are our customers, we have this great relationship with them. It helps us to see the value across all the different things that we can bring to bear to get all this data and then, as we move forward, to help people build more relevant results.

If something is searched for one time, versus 100 times, then that’s going to bubble up to the top. That means that we're getting the best information to the right people in the right amount of time. I'm looking forward to extending this platform and to looking at analytics and into other platforms.
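As a rough illustration of that bubbling up, here is a hedged sketch of blending a text-relevance score with usage counts. The weighting and field names are invented for illustration; they are not WWT's or IDOL's actual tuning.

def ranked(results, click_counts, alpha=0.8):
    # Blend text relevance with how often users actually open each result.
    def score(r):
        popularity = click_counts.get(r["id"], 0)
        return alpha * r["text_score"] + (1 - alpha) * popularity
    return sorted(results, key=score, reverse=True)

results = [{"id": "doc-a", "text_score": 2.0},
           {"id": "doc-b", "text_score": 1.8}]
clicks = {"doc-b": 100}    # doc-b is what people search for and open

print([r["id"] for r in ranked(results, clicks)])   # doc-b now outranks doc-a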

Gardner: That’s why they call it "intelligent search." It learns as you go.

Nippert: The concept behind intelligent search is really two-fold. It first focuses on business empowerment, which is letting your users find whatever it is specifically that they're looking for, but then, when you talk about business enablement, it’s also giving users the intelligent conceptual search experience to find information that they didn’t even know they should be looking for.

If I'm a sales representative and I'm searching for company "X," I need to find any of the Salesforce data on that, but maybe I also need to find the account manager, maybe I need to find professional services’ engineers who have worked on that, or maybe I'm looking for documentation on a past project. As Susan said, that Google-like experience is bringing that all under one roof for someone, so they don’t have to go around to all these different places; it's presented right to them.

Gardner: Tell us about World Wide Technology, so we understand why having this capability is going to be beneficial to such a large, complex organization.
Crincoli: We're a $7-billion organization and we have strategic partnerships with Cisco, HPE, EMC, and NetApp, etc. We have a lot of solutions that we bring to market. We're a solution integrator and we're also a reseller. So, when you're an account manager and you're looking across all of the various solutions that we can provide to solve the customer’s problems, you need to be able to find all of the relevant information.

You probably need to find people as well. Not only do I need to find how we can solve this customer’s problem, but also who has helped us to solve this customer’s problem before. So, let me find the right person, the right pre-sales engineer or the right post-sales engineer. Or maybe there's somebody in professional services. Maybe I want the person who implemented it the last time. All these different people, as well as solutions that we can bring in help give that sales team the information they need right at their fingertips.

It's very powerful for us to think about the struggles that a sales manager might have, because we have so many different ways that we can help our customer solve those problems. We're giving them that data at their fingertips, whether that's from Salesforce all the way through to SharePoint, or something in an email that they can't find from last year. They know they have talked to somebody about this before, or they want to know who helped them. Pulling all of that information together is so powerful.

We don’t want them to waste their time when they're sitting in front of a customer trying to remember what it was that they wanted to talk about.

Gardner: It really amounts to customer service benefits in a big way, but I'm also thinking this is a great example of how, when you architect and deploy and integrate properly on the core, on the back end, that you can get great benefits delivered to the edge. What is the interface that people tend to use? Is there anything we can discuss about ease of use in terms of that front-end query?

Simple and intelligent

Nippert: As far as ease of use goes, it’s simplicity. If you're a sales rep or an engineer in the field, you need to be able to pull something up quickly. You don’t want to have to go through layers and layers of filtering and drilling down to find what you're looking for. It needs to be intelligent enough that, even if you can’t remember the name of a document or the title of a document, you ought to be able to search for a string of text inside the document and it still comes back to the top. That’s part of the intelligent search; that’s one of the features of HPE IDOL.

Whenever you're talking about front-end, it should be something light and something fast. Again, it’s synonymous with what users are used to on the consumer edge, which is Google. There are very few search platforms out there that can do it better. Look at the  Google home page. It’s a search bar and two buttons; that’s all it is. When users are used to that at home and they come to work, they don’t want a cluttered, clumsy, heavy interface. They just need to be able to find what they're looking for as quickly and simply as possible. 

Gardner: Do you have any examples where you can qualify or quantify the benefit of this technology and this approach that will illustrate why it’s important?

Nippert: We actually did a couple of surveys, pre- and post-implementation. As I mentioned earlier, it was very well known that our search demands weren't being met. The feedback that we heard over and over again was "search sucks." People would say that all the time. So, we tried to get a little more quantification around that with some surveys before and after the implementation of IDOL search for the enterprise. We got a couple of really great numbers out of it. We saw that overall satisfaction with search went up by about 30 percent. Before, it was right in the middle: half of them were happy, half of them weren't.

Now, we're well over 80 percent with overall satisfaction with search. It's gotten better at finding everything from documents to records to web pages across the board; it's improving on all of those. As far as the specifics go, the thing we really cared about going into this was, "Can I find it on the first page?" How often do you ever go to the second page of search results?

With our pre-surveys, we found that under five percent of people were finding it on the first page. They had to go to the second or third page, or pages four through 10. Most users just gave up if it wasn't on the first page. Now, over 50 percent of users are able to find what they're looking for on the very first page and, if not, then definitely on the second or third page.

We've gone from a completely unsuccessful search experience to a valid, successful search experience that we can continue to enhance.

Crincoli: I agree with James. When I came to the company, I felt that way, too -- search sucks. I couldn’t find what I was looking for. What’s really cool with what we've been able to do is also review what people are searching for. Then, as we go back and look at those analytics, we can make those the best bets.

If we see hundreds of people are searching for the same thing or through different contexts, then we can make those the best bets. They're at the top and you can separate those things out. These are things like the handbook or PTO request forms that people are always searching for.
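A best-bets layer can be as simple as a curated lookup consulted before the organic results. A minimal sketch, with hypothetical queries and URLs:

BEST_BETS = {
    "handbook": ["https://intranet.example.com/hr/handbook"],
    "pto request": ["https://intranet.example.com/hr/pto-form"],
}

def search(query, organic_search):
    # Pin curated answers for high-volume queries above everything else.
    pinned = BEST_BETS.get(query.strip().lower(), [])
    return pinned + [hit for hit in organic_search(query) if hit not in pinned]

Calling search("PTO request", engine) then returns the curated form first, ahead of whatever the engine ranks organically.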

Gardner: I'm going to just imagine that if I were in the healthcare, pharma, or financial sectors, I'd want to give my employees this capability, but I'd also be concerned about proprietary information and protection of data assets. Maybe you're not doing this, but I wonder what you know about allowing for the best of search, but also with protection, warnings, and some sort of governance and oversight.

Governance suite

Nippert: There is a full governance suite built in, and it comes through a couple of different features. One of the main ones is Eduction, where, as IDOL scans through every single line of a document, a PowerPoint slide, or a spreadsheet, whatever it is, it can recognize credit card numbers, Social Security numbers, anything that's personally identifiable information (PII), and either pull that out, delete it, or send alerts.

You have that full governance suite built into anything that you've indexed. It also has a mapped-security engine built in, called OmniGroupServer, so it can map the security of any content source. For example, in SharePoint, if you have access to a file and I don't, and we each ran a search, you would see it come back in the results and I wouldn't. So, it can honor any content's security.
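Here is a conceptual sketch of those two governance ideas, PII scrubbing at index time and ACL-mapped result filtering. The regex patterns, field names, and group lookup are illustrative stand-ins, not the actual Eduction or OmniGroupServer APIs.

import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def scrub(text):
    # Redact PII before the text is ever stored in the index.
    return CARD.sub("[REDACTED-CARD]", SSN.sub("[REDACTED-SSN]", text))

def visible_results(hits, user, groups_of):
    # Honor source-system ACLs: only return hits the user could open natively.
    user_groups = groups_of(user)
    return [h for h in hits if h["allowed_groups"] & user_groups]

hits = [{"doc": "roadmap.pptx", "allowed_groups": {"sales"}},
        {"doc": "payroll.xlsx", "allowed_groups": {"hr"}}]
print(scrub("card 4111 1111 1111 1111 on file"))
print(visible_results(hits, "james", lambda u: {"sales"}))   # only roadmap.pptx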

Gardner: Your policies and your rules are what’s implemented, and that’s how it goes?

Nippert: Exactly. It's up to the search team, working with your compliance or governance team, to make sure that that does happen.

Gardner: As we think about the future, and the availability of other datasets to perhaps be brought in, search is a great tool for access to more than just corporate data, enterprise data, and content; maybe it's also the front end for advanced querying, analytics, and business intelligence (BI). Has there been any talk about how to take what you're doing in enterprise search and munge that, for lack of a better word, with analytics, BI, and some of the other big-data capabilities?

Nippert: Absolutely. So HPE has just recently released BI for Human Information (BIFHI), which is their new front end for IDOL, and it has a ton of analytics capabilities built into it. We're really excited to start looking at the rich-text and rich-media analytics, which can pull the words right out of a raw MP4 video and transcribe it at the same time. But more than that, it's something that we can continue to build on top of and come up with our own unique analytic solutions.

Gardner: So talk about empowering your employees. Everybody can become a data scientist eventually, right, Susan?

Crincoli: That's right. If you think about all of the various contexts, we started out with just a few sources, but we're also excited because we've built custom applications, both for our customers and for our internal work. We're taking that to the next level by building an API and pulling that data into the enterprise search, which makes it even more extensible to our enterprise.

Gardner: I suppose the next step might be the natural language audio request where you would talk to your PC, your handheld device, and say, "World Wide Technology feed me this," and it will come back, right?

Nippert: Absolutely. You won’t even have to lift a finger anymore.

Cool things

Crincoli: It would be interesting to loop in what Microsoft is doing with Cortana, and some of the machine learning and analytics behind it. I'd love to see how we could loop that together. But those are all really cool things that we would love to explore.

Gardner: But you can’t get there until you solve the initial blocking and tackling around content and unstructured data synthesized into a usable format and capability.
Nippert: Absolutely. The flip side of controlling your data sources, as we're learning, is that there are a lot of important data sources out there that aren’t good candidates for enterprise search whatsoever. When you look at a couple of terabytes or petabytes of MongoDB data that’s completely unstructured and it’s just binaries, that’s enterprise data, but it’s not something that anyone is looking for.

So, even though the original knee-jerk reaction is to index everything and get everything into search, because you want to be able to search across everything, you also have to take it with a grain of salt. A new content source could add hundreds or thousands of results that potentially clutter the accuracy of the results. Sometimes, it's actually knowing when not to search something.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Friday, June 27, 2008

Coveo G2B for CRM provides search-based single customer view from disparate content sources

As companies search for the holy grail of a "single view" of the customer, Coveo Solutions, Inc., which provides search-powered enterprise information access, has unveiled its Coveo G2B for CRM, a way to provide a view of all relevant customer data from a wide variety of sources.

G2B for CRM brings together data from such sources as salesforce.com, Siebel Systems, corporate intranets, tech support emails, customer support databases, and enterprise resource planning (ERP) systems.

It also provides advanced content analytics, giving workers the ability to present customer data graphically. Presenting customer data as a spreadsheet or a pie chart aids management and workers in planning, forecasting, and resource management. This can eliminate the need for time-consuming database queries and reporting, even when sifting through millions of documents.
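Conceptually, the single customer view is a federate-and-merge step across source connectors. A minimal sketch, with hypothetical connectors standing in for the salesforce.com, support, and ERP sources listed above:

def unified_view(customer_id, sources):
    view = {"customer_id": customer_id, "records": []}
    for name, fetch in sources.items():
        for record in fetch(customer_id):        # each connector queries one silo
            view["records"].append({"source": name, **record})
    return view

sources = {
    "crm":     lambda cid: [{"type": "opportunity", "stage": "proposal"}],
    "support": lambda cid: [{"type": "ticket", "status": "open"}],
}
print(unified_view("ACME-001", sources))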

Coveo's approach shows that the productivity benefits of enterprise search continue to be explored. Google (with its search appliances), Microsoft (with FAST search technology), and Autonomy certainly think so.

Coveo G2B for CRM, built by the Newton, Mass.- and Quebec-based company on its Coveo Enterprise Search platform technology, is part of the company's G2B Information Access Suite, which allows knowledge workers to obtain a unified view of enterprise information.

I'm interested in seeing more mashups of search from across many enterprise and web-based providers (including social networks) to give even more complete and vetted views of customers, suppliers, partners, employees and any others that relate to business activities or ecologies. The information is out there, just waiting to be harvested and managed.

And when are we going to get a single view of IT assets in association with business processes? Increasingly, searching IT devices and resources is playing a role in enterprise search, too. How about not only getting a single view of the customer, but also instant views of the right systems through which to reach them, or the right integration avenues?

Let's search people and systems and gather insights to the systems context of business along with the social aspects. People, process, systems and search. That's the ticket.

Wednesday, May 21, 2008

ZoomInfo spins off 'bizographic' platform for controlled circulation online advertising play

Business information provider ZoomInfo has spun off its advertising business units in a new company, Bizo, offering a targeted B2B advertising platform, or what it calls "bizographic" advertising.

Privately held and venture-backed ZoomInfo, Waltham, Mass., announced a new set of business segments last fall, but has now taken the additional step of spinning the unit out. Former general manager and senior vice president Russell Glass will serve as CEO of the new company, which is expected to launch later this year. [Disclosure: ZoomInfo has been a sponsor of some BriefingsDirect B2B podcasts and videocasts that I have produced.]

Bizographic advertising, as ZoomInfo explains it, provides highly targeted demographic and behavioral advertising, allowing marketers to target their online advertising based on the audience of a site instead of the content.

For example, if a company wants to reach technology decision makers for an IT product offering or high-income individuals for a platinum credit card offer, it could use bizographic advertising to target directors of IT or CEOs respectively.

The field has heated up recently as CBS intends to acquire CNET (parent company of this blog's host, ZDNet) and its BNET division, which also slices and dices audiences by work and functional definitions for the benefit of advertising targeting. Could Bizo also be on the block?

According to ZoomInfo officials, Bizo will continue to leverage the company's understanding of business people and companies to allow marketers to target business users based on thousands of segmenting possibilities, including combinations of title, company, industry, functional area, company size, education, location, etc. The company expects over 20 million targetable business users in its network when it launches.

Bryan Burdick, ZoomInfo's president, explained the move:

"While B2B advertising is complimentary to ZoomInfo’s business, the market has been starved for the ability to target business professionals online. Creating a new business in order to meet that need was an ideal solution for us."

I gave my readers a heads-up on what I called "controlled circulation advertising" last December, referring specifically to ZoomInfo:

ZoomInfo is but scratching the surface of what can be an auspicious third (but robust) leg on the B2B web knowledge access stool. By satisfying both seekers and providers of B2B information on business needs, ZoomInfo can generate web page real estate that is sold at the high premiums we used to see in the magazine controlled circulation days. Occupational-based searches for goods, information, insights and ongoing buying activities is creating the new B2B controlled circulation model.

ZoomInfo, a business information search engine, finds information about industries, companies, people, products and services. The company’s semantic search engine continually crawls millions of company Websites, news feeds and other online sources to identify company and people information, which is then organized into profiles.

ZoomInfo currently has profiles on nearly 40 million people and over 4 million companies, and its search engine adds more than 20,000 new profiles every day.

Splunk goes virtual, unveils broad IT search capabilities for Citrix XenServer

Splunk, which provides indexing and search technology for IT infrastructures, this week made its move into the virtual realm with the announcement of Splunk for Citrix XenServer Management.

The San Francisco company says this is just its first foray into search support services for virtualization and that it will release similar applications for each of the leading server virtualization platforms in the near future. [Disclosure: Splunk is a sponsor of BriefingsDirect podcasts.]

The Splunk announcement comes during a Citrix cavalcade of news and developments, including the expected delivery of its desktop as a service portfolio.

While server virtualization provides significant efficiency and utilization improvement benefits to datacenters, it also brings complexity in troubleshooting glitches. Performance and capacity issues can arise when applications share the same physical host. With multiple virtual machines (VMs) sharing a pool of server, storage and network resources, changes to any one layer or VM could potentially affect others -- and the applications they contain. Root cause analysis is even more of a challenge when instances of virtualized containers and runtimes pop in and out of use via dynamic provisioning.

Splunk's indexing and search approach aims to provide a full view of IT-generated usage data, not only from the hypervisor and VM, but from the server, guest operating system, applications, and the network. Splunk's technology indexes data across all tiers of the infrastructure in near real-time. This allows operators and administrators to maintain a large, dynamic IT environment with fewer people, higher automation, and easier service performance management.
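The underlying idea, making every event searchable by its extracted fields as it arrives, can be sketched in a few lines. This toy is not Splunk's implementation, just the concept:

import re
from collections import defaultdict

KV = re.compile(r"(\w+)=(\S+)")
index = defaultdict(list)            # (field, value) -> raw events

def ingest(line):
    # Extract key=value pairs and index the event by every field it carries.
    for field, value in KV.findall(line):
        index[(field, value)].append(line)

ingest("2008-05-21T10:02:11 host=vm-17 tier=hypervisor event=vm_migrate")
ingest("2008-05-21T10:02:12 host=vm-17 tier=app level=error msg=timeout")

print(index[("host", "vm-17")])      # one query spans hypervisor and app tiers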

Splunk for Server Virtualization Management supports virtualization planning, workload optimization, performance monitoring, root cause analysis and log management, says the company.

The new product is available immediately. Users can download a free 30-day trial from the company's Web site.

Splunk has been in the news lately, and on Monday announced that communications provider BT has agreed to license Splunk's IT search platform technology to build a managed-security product that will allow customers to preserve 100 percent of the logs on a network.

Three weeks ago, the company unveiled Splunk for Change Management, an application to audit and detect configuration and changes, and Splunk for Windows, which indexes all data generated by Windows servers and applications.

Tuesday, April 29, 2008

Splunk adds change-management and Windows support to IT search software

IT search company Splunk today added to its arsenal of tools for IT managers with the launch of Splunk for Change Management, an application to audit and detect configuration and changes, and Splunk for Windows, which indexes all data generated by Windows servers and applications.

The San Francisco company provides a platform for large-scale, high-speed indexing and search technology geared toward IT infrastructures. The software, which comes in both free and enterprise versions, allows a company to search and navigate data from any application, server, or network device in real time. [Disclosure: Splunk is a sponsor of BriefingsDirect podcasts.]

Splunk for Change Management, which requires an enterprise license, continuously audits all configurations and changes, detects unauthorized changes, validates change deployment, and discovers service-impacting changes during incident response.

The new application leverages the existing Splunk Platform, allowing users to combine change audit events, configuration data, activity and error logs, and actual system and user behavior. This differentiates it from the traditional approach, which is often disconnected from incident response and cut off from other sources of IT data.

Among the features of the new product are:
  • Out-of-the box dashboards with over 40 reports showing changes across all datacenter components including applications, servers and network devices.
  • Predefined alerts that detect unauthorized change based on configuration variances and correlation with service desk systems.
  • Predefined searches to help identify service-impacting changes.
  • Integration with service desk systems that validates the effect of change on system behavior.
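The unauthorized-change detection described above can be pictured as a diff of successive configuration snapshots, with anything lacking an approval flagged. A bare-bones sketch; the keys and the approval set are hypothetical:

def detect_changes(old_cfg, new_cfg, approved_keys):
    # Flag any changed setting that has no matching approval/ticket.
    for key in old_cfg.keys() | new_cfg.keys():
        if old_cfg.get(key) != new_cfg.get(key) and key not in approved_keys:
            yield key, old_cfg.get(key), new_cfg.get(key)

old = {"max_conns": "100", "ssl": "on"}
new = {"max_conns": "500", "ssl": "on"}
for key, was, now in detect_changes(old, new, approved_keys=set()):
    print(f"ALERT: unapproved change to {key}: {was!r} -> {now!r}")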
Splunk for Windows, a free application, integrates Splunk's IT search with Microsoft's System Center Operations Manager's command and control view of the Windows infrastructure.

Splunk indexes event logs, registry keys, performance metrics, and applications log files, making all the data searchable from a single place.

Reports and dashboards included in the application provide a bird's eye view of service levels and problems across a large number of servers and applications, and predefined alerts can warn of cross-component problems.

Splunk has a variety of solutions for IT managers and developers who need some visibility into their various systems and components. Just a few weeks ago, I wrote about the Splunk Platform.

"The Splunk Platform and associated ecosystem should quickly grow the means to bridge the need for transparency between runtime actualities and design-time requirements. When developers can easily know more about what applications and systems do in the real world in real time, they can make better decisions and choices in the design and test phases. This obviously has huge time- and money-saving implications."

And, more than two years ago, I did a podcast about Splunk, when it launched Splunk Base, an open Creative Commons-licensed repository of wikis that, with volume adoption, will give systems troubleshooters a searchable library of knowledge about what ails IT components and how to swiftly remedy those ills. You can listen to the podcast here.

Splunk for Change Management pricing starts at $4,000 and requires an enterprise license. A 30-day free trial is available.

Splunk for Windows is free and is now available on the Splunk Base site.

Sunday, April 6, 2008

IT search and SCM search may together bridge the design time-run time divide

Productivity improvements in software development and deployment strategies will ultimately have to reckon with the lingering lack of feedback between design time and run time.

Software is still a hand-off affair, with developed applications getting tossed into production with little collaboration between the builders and the operators -- before or after the hand-off.

Things could and should be different. Thanks to search-based technologies and services now entering the market, we may be on the verge of a new productivity boomlet that leverages more needles from more haystacks.

With proper access to information about how code actually behaves in real-world use, developers could better produce reliable applications and infrastructure. Architects and systems operators could better anticipate how to meet the demands and service level agreements (SLAs) of quickly provisioned applications if they had greater visibility into the hit on resources -- and potential disruptions -- from newly minted applications and services. Virtualization will only exacerbate the deployment complexities.

Wouldn't it be beneficial, then, if the information about what goes on "on the other side" were made available proactively to each side of the equation? Search functions applied directly to both sides of the development and deployment fence would allow each side to open a bright new window into what remains murky and mysterious from the outside.

Developers with proper access to indexes and metadata could use search to quickly find highly specific information about runtime environments and stacks as they write and test their code, and as they seek out the best components, objects and methods suitable for specific runtime scenarios.

Conversely, operators faced with slow-running applications -- or worse -- could search the source code for clues about the root causes of glitches, and much more easily and quickly identify and remediate the problems. They could also clearly point the impactful issues back to the development and test teams, to prevent the glitch from recurring in the future.
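The operator-side half of that loop can be as plain as taking an error string from a production log and searching the source tree for where it originates. A rough sketch, with a hypothetical source layout and log message:

import pathlib

def find_in_source(needle, root="src"):
    # Walk the source tree and report file:line for every match.
    for path in pathlib.Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if needle in line:
                yield f"{path}:{lineno}: {line.strip()}"

for hit in find_in_source("payment gateway timeout"):
    print(hit)    # points straight at the code that emitted the log line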

We're already seeing a great deal of value from operations-side search, and the extension of that value due to platform approaches, APIs and open collaboration. Splunk is providing a path on IT search as a platform. [Disclosure: Splunk is a sponsor of BriefingsDirect podcasts, including this one on Splunk Base.]

Other vendors are emerging to fruitfully apply search to the source code management (SCM) space, such as Krugle. Krugle offers a search benefit for open source assets, as well as enterprise development assets.

Now when we can jibe software characteristics and collaborate across the design time-run time divide based on the type of insight that the likes of Splunk and Krugle provide, well then we'll be in a better software age. It may be sooner than we think.

Friday, December 7, 2007

ZoomInfo offers more evidence of a 'controlled circulation' advertising benefit quickly emerging on the web

Get ready for new "controlled circulation" models on the web, ones that target you based not on your preferences for music or soft drinks -- but on what you consume in your occupation. Think of it as B2B social networking.

First, some set-up ... One of the great media inventions of the mid-20th century was the notion of affinity-based, controlled circulation publishing. Those creating magazine titles that catered to defined groups -- rather than mass media volume plays like network television -- went granular.

By focusing on concentrated audiences, these publishers walled up "universes" of buyers that passionately sought specific information as defined by discrete hobbies or occupations. Bill Ziff Jr. homed in on the hobbies, and grew a media empire on titles that linked up dedicated buyers -- of things like electronics kits, models, automobiles (and the jackpot, personal computers) -- to the sellers of the actual goods behind the passion. The ads inside these special interest pubs generated high premiums, based on the tight match between engaged (and well monied) buyers and drooling sellers.

Norm Cahners took the model in the direction of industrial business niches. He provided free monthly magazines based on slices of industrial minutiae that delivered useful albeit dry information to those specifiers of myriad corporate goods and services. You order gizmos for your buggy whips? You probably spend millions of dollars on procurement for each kind of good per year. Let me introduce you to some sellers of those goods who want to make you a deal.

The Cahners Publishing magazines -- on things like plastics use, integrated circuits developments, materials handling and design engineering -- were free to readers, as long as those readers identified themselves as corporate decision makers with budget to spend. Again, high ad premiums could be charged by linking engaged readers (with huge annual budgets) with advertisers who needed to reach hard-to-find and shifting groups of corporate buyers.

Soon the burgeoning lists of these readers, sliced and diced by buying needs, and sanctified by audit bureaus as valid (mostly), became very, very valuable. As a controlled circulation publisher, if you had the top one of two monthly magazine titles that generated the definitive list of those buying all the industrial valves, say, in North America -- you were sitting pretty. You controlled the circulation, defined and refined the audience, and so told the sellers how much they needed to pay you to reach those buyers. You priced high, but still less than these sellers would need to spend to send a warm body carrying a bag into each and every account (on commission).

In effect, the controlled circulation publishers collected straight commissions on billions of dollars in commercial and special interest goods being bought and sold. They were a virtual sales team for all kinds of sellers. Editorial was cheap. Life was good.

And then 10 years ago the Web came along and pretty much began to blow the whole thing apart. Engaged users started using Web search, and explored their vendors' web sites on their own. Vendors could reach users directly, and used their websites as virtual sales forces too. Soon there were wikis that listed all the sellers of goods in certain arenas of goods and services. Those seeking business or hobby information could side-step the editorial middleman and go direct to the buying information on goods and services they wanted. We're only into the opening innings on this, by the way.

But the same disruption that plagues newspapers like the San Jose Mercury News and The Boston Globe -- both of which should be doing great based on their demographic reach -- is undermining the trade media too. It's the web. It's search. It's sidestepping the traditional media as a means to bind buyers and sellers. The web allows the sellers to find the buyers, and the buyers to find the sellers with less friction, less guessing, less cost. Fewer middlemen.

And this means the end of controlled circulation as we have known it. ... Or does it?

Just as the web has made it a lot harder for media companies to charge a premium for advertisers to reach a defined universe of some sort, the web could also allow for a new breed of controlled circulation, one that generates "universes" on the fly based on special interest search, not based on special interest magazines.

The current web ad model has evolved to be based on blind volume display ads, with the hope of odd click-throughs, usually less than 0.5 percent of the total banner ads displayed. Advertisers know exactly what their ad dollar gets them, and it's not enough. Even when seekers click on ads, they usually get sent to a home page that was just as easily reached through keyword searches from a web search provider (for free), based on their real interests. Enter Google. And you know the rest.

Why the history lesson? Because we're now beginning to see some new variations on the controlled circulation theme on the web that create additional models. Controlled circulation could be back. And that could mean much bigger ad bucks than web display ads or even keyword-based ads can generate. It's what has Microsoft gaga over Facebook. And News Corp. gaga over MySpace. And Viacom beside itself because it has no such functional base yet.

Controlled circulation is coming to the web on one level via social networks, mostly for consumer goods and services -- sort of what Bill Ziff did for hobbyists in the 1950s and 1960s. Social networks like Facebook and MySpace induce their member users to cough up details about themselves -- just like controlled circulation publishers used to require of readers to get free magazines on specific topics. Based on the need to expose yourself on a social network to get, well ... social ... you therefore provide a lot of demographic details that can then be carved up into the equivalent of controlled circulation universes. Based on your declared consumer wants, fad preferences, age and location, you give advertisers a means to target you.

This model is only just now being probed for its potential, as the Beacon trial-and-error process at Facebook these days attests. Soon, however, an accepted model will emerge for binding consumers and sellers of goods and services, a model better than banner ads, one that can go granular on user preferences (but not too granular, lest privacy bugaboos rear their paranoid heads). When this model is refined, everyone from Microsoft to Yahoo to Google and Time Warner will need to emulate it in some fashion. It will be the third leg on the web ads stool: display, search-based, and now reader-profile constructed controlled circulation.

Which brings me to ZoomInfo. (Disclosure: ZoomInfo has been a sponsor of some BriefingsDirect B2B podcasts and videocasts that I have produced). What's so far missing in all of the Facebook hysteria is the Norm Cahners part, of how to take the emerging controlled circulation web model and apply it to multi-trillion dollar B2B global markets. How to slice and dice all the companies out there with goods and services you -- as a business buyer -- need to know about? Instead of the users giving up profile information on themselves as a way of providing profile-constructed controlled circulation, why not let the companies provide the profiles that the users can access via defined searches based on their actual needs?

Wade Roush over at Xconomy gives us a glimpse of this model based on what ZoomInfo is now doing with "business demographics," or what Zoom calls Bizographics. This is the B2B side of what social networks are doing on the consumer side, but with a twist. By generating the lists of businesses that provide goods and services sought via a search, and even more lists of the goods themselves, users can educate themselves, and the bond between B2B buyers and sellers is made and enriched. All that's needed is the right kinds of searches that define the universe of providers that users can then explore and engage with.

ZoomInfo is but scratching the surface of what can be an auspicious third (but robust) leg on the B2B web knowledge access stool. By satisfying both seekers and providers of B2B information on business needs, ZoomInfo can generate web page real estate that is sold at the high premiums we used to see in the magazine controlled circulation days. Occupational-based searches for goods, information, insights and ongoing buying activities is creating the new B2B controlled circulation model.

What's more, these defined B2B universes, generated on the fly based on occupations and buying needs, amount to giving more power to the users via what Doc Searls correctly calls Vendor Relationship Management. It's a fascinating concept we'll be seeing a lot more of: matching buyers and sellers on the web based on their mutual best interests. Mr. Buyer, please find Mr. Seller -- on your terms, based on your needs.

Monday, September 17, 2007

Survey uncovers heightening reliance on search across business purchasing

Listen to the podcast. Or read a full transcript. Sponsor: ZoomInfo.

It seems that businesses, whether they're small or global 2000 concerns, are buying more supplies using search at some point in the B2B procurement process. Some people begin and end a procurement journey with search. They actually buy the products through a strictly search-dependent process.



Yet many still use a combination of word-of-mouth, search, and traditional information gathering to guide them to the best deals on the most goods.

To find out just how much B2B buying behaviors are shifting, Enquiro Search Solutions conducted a survey earlier in 2007. They found that online search was consistently employed throughout the entire buying process, from awareness right through to purchase.

There’s still a lot of back and forth: Offline factors influence online activity, and vice-versa, for a merging of the online and the offline worlds. In an audio podcast discussion, as well as the accompanying BriefingsDirect multi-media video-podcast, I helped plumb the depths of Enquiro's findings and then vetted them through the experiences of B2B search engine ZoomInfo.

Join Gord Hotchkiss, President and CEO of Enquiro, and Bryan Burdick, COO of ZoomInfo, with moderation by myself, Dana Gardner, for a deep dive on B2B search trends and analysis.

Here are some excerpts:
We did the original survey in 2004 and, at the time, there wasn't a lot of research out there about search in general, even on the consumer side. There was virtually nothing on the B2B side. The first survey ... certainly proved that search was important. We found that online activity, in particular that connected with search activity, was consistent in a large percentage of purchases. In 2007, we added more insight to the methodology. We wanted to understand the different roles that are typical in B2B purchases -- economic buyers versus technical buyers versus user buyers. We also wanted to get more understanding of the different phases of the buying cycle.

As far as the main takeaways from the study, obviously online activity is more important than ever. In fact, we asked respondents to indicate from a list of over 30 influencers what was most important to them in making the purchase decision. Online factors, such as interaction with the vendor Website and interaction with the search engine were right up there with the traditional winner, word of mouth. What we see is a real link between those and looking for objective information and specific detail.

We did notice an evolution of behavior as you move through the funnel, and the nature of the interactions with the different online resources changes how you navigate to them and how you go to different sites for information. But, online research was consistent through the entire process, from awareness right through to purchase. There’s a lot of back and forth. ... We saw a merging of the online and the offline worlds in making these decisions and trying to come to what’s the right decision for your company or what’s the right product or service.

We just found increased reliance on online to do that research. When we say "increased reliance," we're probably talking 10 percentage points up over the three years. So, if 65 percent of the people were doing it in 2004, 75 percent of the people are doing it now. That’s primarily where we saw the trends going.

When we looked at the different phases of the buying cycle, it starts with awareness. You become aware that you need something. There was a high percentage of people -- in the high 60-percent range -- who said, "Once I become aware that I need something, the first place I'm going to go is the search engine to start looking for it." A lot of that traffic is going to end up on Google. It was the overwhelming choice among general search engines for B2B buyers.

But, as you move through the process, you start doing what we call a "landscape search." The first search is to get the lay of the land to figure out the information sites that have the information you are looking for. Who are the main vendors playing in this space? Where are the best bets to go and get more information to help make this purchase decision?

So, those first searches tend to be fairly generic -- shorter key phrases -- just to get the lay of the land to figure out where to go. As you progress, search tends to become more of a navigational shortcut, and we’ve seen this activity increase over the last two to three years. Increasingly, we're using search engines to get us from point A to point B online.

We also wanted to get a retroactive view of a successful transaction. So, in the second part of the survey, we asked them to recall a transaction they had made in the past 12 months. We wanted to see whether that initial search led to a successful purchase down the road, and, at the end of the road, how the different factors influenced them. So, we actually approached them from a couple of different angles.

Now, 85 percent of these people say they're using online search for some aspect of this purchasing process. It strikes me that this involves trillions of dollars worth of goods. These are big companies and, in some cases, buying lots of goods at over a hundred thousand dollars a whack. Do you concur that we're talking about trillions of dollars of B2B goods now being impacted significantly by the search process?

Absolutely. The importance of this is maybe the most mind-numbing fact to contemplate. Traditionally, the B2B space has been a little slow to move into the search arena. Traditionally, in the search arena, the big advertisers tend to be travel or financial products. B2B is just starting to understand how integral search is to all this activity. When you think of the nature of the B2B purchase, risk avoidance is a huge issue. You want to make sure that whatever decisions you make are well-researched and well-considered purchases. That naturally leads to a lot of online interaction.

The business information search is a primary factor driving [ZoomInfo's] growth. Our company right now is growing on two fronts. One is our traditional paid-search model, where we have subscription services focused on people information that is targeted at salespeople and recruiters as a source for candidates and prospects.

The more rapidly growing piece of our business is the advertising-driven business information search engine, which I think is a really interesting trend related to the concept you guys were just talking about. Not only does the B2B advertiser spend lots of money today trying to reach out, but the B2B searcher has new tools, services, and capabilities that provide a richer, better, more efficient search than they’ve had through the traditional search engines.

Everybody needs to be focused on search. I can't see an exception. You mentioned the percentage that said they would go online. We segmented out the group that didn't indicate they go online to see what was unique about them. The only thing unique about them was their age. They tended to be older buyers and tended to be with smaller organizations, where the CEO was more actively involved in the purchase decision. That was really the only variance we saw. If it's a generational thing, then obviously that percentage is going to get smaller every year.

... From a vertical business information search perspective, we're really in the first inning here. A lot of interesting trends and enhancements are going to be coming down the road. One in particular that may have an influence in the next year or two is the community aspect within search. ... I think that you'll start to see a marriage of not only B2B search but also online community, and a factoring of that into the whole process. Then, who knows where we'll go from there? ... The word is community.
Listen to the podcast. Or read a full transcript. Sponsor: ZoomInfo.

Saturday, August 18, 2007

Lotus Notes 8 brings unified collaboration to mashupable clients

IBM made two announcements Friday that should help smooth the way for bringing unified Web 2.0-style applications to the Notes/Domino-installed enterprise.

The new Lotus Notes 8 and Lotus Domino 8 releases merge collaboration, communication and productivity features into a single desktop environment, giving users integrated access to such things as RSS feeds and search, along with email, instant messaging, presence, word processing, spreadsheets, and presentation software.

At the same time, IBM announced Expeditor 6.1.1, an Eclipse-based mashups tool that forms the underpinnings of Notes/Domino 8 and thereby allows mashups via managed clients to reach desktops, laptops, and mobile devices.

Regular readers of BriefingsDirect know that search is an emerging enterprise strategy of growing importance and that RSS feeds will provide a powerful tool for distributing and managing content and data. Combining them onto a single desktop with other Web 2.0 technologies and productivity applications is certainly a step in the best-of-all-worlds direction.
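The RSS side of that single desktop is easy to picture. A small sketch using the third-party Python feedparser library (pip install feedparser); the feed URLs are placeholders:

import feedparser

FEEDS = ["https://example.com/engineering.rss",
         "https://example.com/company-news.rss"]

for url in FEEDS:
    feed = feedparser.parse(url)
    for entry in feed.entries[:5]:               # newest items for the sidebar
        print(f"{feed.feed.get('title', url)}: {entry.title} -> {entry.link}")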

Incidentally, the means of bringing mashups to the enterprise is evolving along varying lines. IBM likes managed clients, no surprise there. But Serena Software will soon be providing an on-demand platform for mashups that's also intended for enterprise use.

It's no secret that the Lotus Notes UI has been, shall we say ... cumbersome over the years (since Notes 5?). IBM hopes to have cleared that hurdle with a new interface, featuring a sidebar that summarizes all the user's tools in one place, including the RSS feeds.

In fact, it's the new interface that's gotten the most positive feedback from customer tests, according to Ed Brill, Business Unit Executive, Worldwide Lotus Notes/Domino Sales, IBM Software Group (nice title, Ed; go for brevity, I always say). The new release has been in development for more than two years.

The addition of productivity tools, according to Brill, comes from the observation that the principal reason many users in the past have left the Notes application was to use a spreadsheet or word processing (Since, like ... 1989). With the new release, users will be able to do that without leaving Notes.

Imagine, just imagine, if SmartSuite had been natively integrated into Notes in, say, 1994. Things might have panned out a little differently. Oh, well.

One surprising outcome of the customer testing, Brill says, is the level of interest in customers wanting a Notes client for Linux. Nearly 20 percent of downloads during testing have been for Linux. Hint, hint! [How about a full virtualized desktop service based on Linux/Domino with mashups galore! Maybe some appliances along those lines. Works for me.]

Built on Eclipse, Lotus Expeditor 6.1.1 is designed to allow integrated mashups independent of the client technology. Among the key features of Expeditor are:

  • A server-managed composite platform to integrate and aggregate applications and information.
  • Integration with real-time collaboration.
  • Integration with IBM WebSphere Portlet Factory and IBM WebSphere Portal Express.
  • End-to-end government-grade mobile security.
  • The ability to transform Microsoft Visual Basic applications.
There will be those Web 2.0 purists who will smirk at the way IBM is bringing these functions to the market. But consider that enterprises do more integrated collaboration via Notes/Domino than just about any other system. And, importantly, it's a lot easier to bring Web 2.0 functionality into an existing enterprise IT icon than for a barely surviving greenfield start-up to bring it into the enterprise on its own.

Monday, July 30, 2007

SOA Insights analysts on Web 3.0, Google's role in semantics, and the future of UDDI

Listen to the entire podcast, or read a full transcript of the discussion.

The notion of a world wide web that anticipates a user's needs, and adds a more human touch to mere surfing and searching, has long been a desire and goal. Yet how close are we to a more "semantic" web? Will such improvements cross over into how enterprises manage semantic data and content?

Our expert panel digs into this and other recent trends in SOA and enterprise IT architecture in the latest BriefingsDirect SOA Insights Edition, volume 17. Our group also examines Adobe's open source moves around Flex, and how UDDI is becoming more about politics than policy.

So join noted IT industry analysts Joe McKendrick, Jim Kobielus, Dave Linthicum and Todd Biske for our latest SOA podcast discussion, hosted and moderated by yours truly.

Here are some excerpts:
I saw one recent article where [the semantic web] was called Web 3.0, and I thought, “Oh, my Lord, we haven’t even decided that we are all in agreement on the notion of Web 2.0.”

[But] there is activity at the World Wide Web Consortium that's been going on for a few years now to define various underlying standards and specifications, things like OWL and SPARQL and the whole RDF and ontologies stack, and so forth.

So, what is the Semantic Web? Well, to a great degree, it refers to some super-magical metadata description and policy layer that can somehow enable universal interoperability on a machine-to-machine basis, etc. It more or less makes the meanings manifest throughout the Web through some self-description capability.

You can look at semantic interoperability as being the global oceanic concern. Wouldn’t it be great if every single application, database, or file that was ever posted by anybody anywhere on the Internet somehow, magically, were able to declare its full structure, behavior, and expectations?

Then you can look at semantic interoperability in a well-contained way as being specific to a particular application environment within an intranet or within a B2B environment. ... The whole notion of a "semantic Web," to the extent that we can all agree on a definition, won’t really come to the fore until there is substantial deployment inside of enterprises.

Conceivably, the enterprise information integration (EII) vendors are providing a core piece of infrastructure that could be used to realize this notion of a Semantic Web, a way of harmonizing and providing a logical unified view of heterogeneous data sources.

Red Hat, one of the leading open source players, is very geared to SOA and building an SOA suite. Now, they are acquiring an EII vendor, which itself is very SOA focused. So, you’ve got SOA; you’ve got open source; you’ve got this notion of a semantic layer, and so forth. To me, it’s like, you’ve stirred it all together in the broth here.

That sounds like the beginnings of a Semantic Web that conceivably could be universal or “universalizable,” because as I said, it’s open source first and foremost.

If we build on this, it does solve a lot of key problems. You end up dealing with universal semantics, how that relates to B2B domains, and how that relates to the enterprise domains.

As I'm deploying and building SOAs out there in my client base, semantic mediation ultimately is a key problem we’re looking to solve.

The average developer is still focused on the functionality of the business solution that they're providing. They know that they may have data in two different formats and they view it in a point-to-point fashion. They do what they have to do to make it work, and then go back to focusing on the functionality, not really seeing the broader semantic issues that come up when you take that approach.

One thing that’s going to happen with the influence of something like Google, which has a ton of push in the business right now, is that ultimately these guys are exposing APIs as services. ... They're coming to the realization that the developers that leverage these APIs need to have a shared semantic understanding out on the Web. Once that starts to emerge, you're going to see a push down on the enterprise, if that becomes the de facto standard that Google is driving.

In fact, they may be in a unique position to create the first semantic clearing house for all these APIs and applications that are out there, and they are certainly willing to participate in that, as long as they can get the hits, and, therefore, get the advertising revenue that’s driving the model.

[Google] is in the API business and they are in the services business. When you're in for a penny, you're in for a pound. ... You start providing access to services, and rudimentary on-demand governance systems to account for the services and test for rogue services, and all those sorts of things. Then you ultimately get into semantics, security, and lots of other different areas they probably didn’t anticipate that they'd get into, but will be pushed into, based on the model they are moving into.

... Perhaps Google or others need to come into the market with a gateway appliance that would allow for policy, privilege, and governance. This would allow certain information from inside the organization that has been indexed in an appliance, say from Google, to then be accessed outside. Who is going to be in the best position to manage that gateway of content on a finely-grained basis? Google.
Listen to the entire podcast, or read the full transcript for more IT analysis and SOA insights. Produced as a courtesy of Interarbor Solutions: analysis, consulting and rich new-media content production.

Sunday, July 22, 2007

The macro economics side of new marketing efficiencies

The Wall Street Journal on July 2 contained an excellent story about how economists view U.S. businesses' prospects for the coming months. The story explains how businesses are facing higher energy, food, and labor costs while needing to hire more workers -- all of which mute general productivity.

The story says:
If they can't pull off a resurgence in productivity, businesses face a tough choice: Raise prices or live with reduced profit margins. Judging from their outlook for corporate profits, the forecasters believe that many ... will choose to split the difference.
That means raising their prices some and living with lower profits, too. This is quite different from the previous business climate, where consistently rising productivity and tame inflation allowed for ongoing record profits.

However, there is another aspect to this somewhat bleak assessment. The next business cycle will demand that many companies focus on efficient top-line growth so that they can maintain profits, even as productivity slackens.

Many businesses may be constrained in how they can grow revenues, perhaps because they only supply a static region or supply a shrinking customer base. Most businesses, however, can find new ways to sell their goods and services in more places -- especially by better use of the Internet. That's because the Internet is and will continue to be a marketer's most powerful tool.

If the past business climate was about productivity as a means to profits, and the Internet was a benefit, then the next business cycle is about revenue growth and finding new markets as the means to fiscal health. And that means that the Internet becomes indispensable. The better you know how to leverage the Internet for your business the better off you will be as an individual -- for your current employer or the next.

Interestingly, the need to find efficient ways to increase the market for goods and services comes just as advertising -- a traditional way to grow revenues -- is in transition. Many businesses are re-evaluating how they advertise, spurred on by Google, viral marketing on the Web, and better use of community outreach and online communication with customers, partners, and prospects.

In observing how IT vendors reach markets in novel ways, I now see four major thrusts (and a further diminishing role for traditional advertising). The four major go-to-market avenues are:
  • Traditional inside efforts. This means creating a compelling web site, a great sales force (inside, outside, direct and channel), strong ecommerce applications, downloads and other online distribution means, and super customer support. This will not change.
  • Traditional outside efforts. This means advertising through new and old media, marketing promotions and events, email and direct marketing, and PR/AR/IR. This section may well see resources shifted to the two newer categories ...
  • Viral. This means creating content and conversation, blogs/podcasts/videocasts, or reacting to content and conversation, such that online awareness and understanding are generated about your goods, services, and image, via the social networking effects on the Web. This is growing, and should continue to grow significantly, probably funded by the previous ad budgets.
  • Search. Across all of these efforts, a business should focus on making its values and knowledge easily accessible via the Web, and discretely searchable through the keywords and phrases that best bind it to its users and communities. This will be the more effective way to grow top-line revenues for many companies for some time. Again, look for traditional ad budgets to fuel this arena, too.
So if you are associated with a business that sees the landscape as the country's leading economists do, and you recognize you must grow both current and new markets -- to spur more revenue and business volume -- and that you must do it efficiently, do yourself a favor. Right now, pretend that you are a customer or prospect of what you sell.

Now, go to Google or another major web search engine. Search on some terms that might come to your mind as you begin a research or informational journey on what your business supplies. Focus on the problem that your online prospect will have, and the solution you bring to them. Does a search on one lead to the other? Does a question about how to fix what you fix actually point to how to evaluate and/or acquire that fix? Right now, online?

It should. What you should see there in the top search results on the left, or organic side, are the fruits of all your marketing efforts:
  • Your website, your product and service descriptions, pricing and how you beat the competition in value, the means to contact a sales rep, a click-through to purchase option, or more direct help.
  • The freely available trustworthy informational assets on the problem-solution set that defines your business value, including conversations, media write-ups, third-party endorsements, interviews, and blogs ... anything that your communities generate about you.
This is how new revenues and market opportunities will be born most productively. When the going gets tough, a business's online marketing gets going.

Like many, you will also buy search-based advertising, based on those essential keywords, to create more ways for those seeking you out as a business to reach your website, product information, sales and support, and solutions. But when you want the most bang for fewer bucks, the organic results pay best and longest. Invest in them now.

What's also interesting is that the investment, and -- more importantly -- the return on the investment you and your clients make in organic search results will remain strong in nearly any macro business environment. That's right, whether the business cycle is in profit growth mode, revenue growth mode, recession, depression, boom times or flat -- your best ticket to keeping the accountants happy is bringing in leads and sales organically, via search-inspired research and inquiry.

So my prognosis is that the ways in which the Internet can assist companies have evolved well during the past 10 years, but the coming business climate -- no matter what climate it is -- is where Internet marketing will be needed the most. And the future, no matter what the world economy is up to, will also deliver the most value and power through the reach of the World Wide Web.

Saturday, July 21, 2007

Optaros opens OS projects catalog, JBoss Rules 4.0 debuts

With more than 140,000 open-source projects floating around, finding the right one for your company is a daunting task. For IT managers, who usually don't have the vantage point that developers enjoy, the search can also be fraught with danger. Choosing or backing the wrong project can have far-reaching operational, legal and financial consequences.

Now, Optaros Inc. is lending a helping hand. They have morphed their popular Open Source Catalogue into an online version that provides a searchable and interactive source for finding the open-source projects that best meet a company's business and technology needs.

The Optaros Enterprise Open Source Directory (EOS) eliminates the need to download and print the hefty catalogue that Optaros introduced in November 2006. That version was downloaded over 10,000 times, however, indicating an industry need for this information.

The online directory gives viewers a short description of the project, includes an Optaros rating -- one to five stars -- as well as a community rating, comments from community members, and a link to the project Web site.

The ferment in the open-source community is one of the driving forces behind the move to an online, interactive product, according to Bruno von Rotz, Optaros's EOS Directory executive sponsor.

Von Rotz explained, "Since the launch of the Open Source Catalogue six months ago, some of the leaders have further expanded their competitive positioning. Additionally, the ratings of more than 70 percent of all the projects listed have changed, and we have added and removed approximately 45 enterprise-ready projects. The open source community is developing so rapidly that an online resource is the best approach to capture changes and stay relevant."

In other open-source news, business rules are moving into the mainstream with the release of JBoss Rules 4.0, which promises faster and leaner performance, a more powerful scripting language, point-and-click rules editing, and -- most important -- accessibility for non-programming IT workers. JBoss Rules is the Red Hat product that combines the Drools project with a JBoss subscription.
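For readers who haven't seen what "declarative" means here, below is a minimal sketch of a rule and the Java plumbing around it, written against the Drools 4.0-generation API as I understand it. The Order fact, the rule, and the class names are invented for illustration -- treat it as a sketch, not sample code from the release.

```java
package demo;

import java.io.StringReader;

import org.drools.RuleBase;
import org.drools.RuleBaseFactory;
import org.drools.StatefulSession;
import org.drools.compiler.PackageBuilder;

/** A hypothetical fact class for the rule to match on. */
class Order {
    private final double total;
    private boolean discounted;
    public Order(double total) { this.total = total; }
    public double getTotal() { return total; }
    public boolean isDiscounted() { return discounted; }
    public void setDiscounted(boolean d) { this.discounted = d; }
}

public class RulesSketch {
    // A rule in Drools' declarative language: when an order over $1,000
    // hasn't been discounted yet, flag it. The condition reads like a
    // predicate, not procedural code.
    private static final String DRL =
        "package demo\n" +
        "import demo.Order\n" +
        "rule \"Large order discount\"\n" +
        "when\n" +
        "    o : Order( total > 1000, discounted == false )\n" +
        "then\n" +
        "    o.setDiscounted( true );\n" +
        "end\n";

    public static void main(String[] args) throws Exception {
        PackageBuilder builder = new PackageBuilder();
        builder.addPackageFromDrl(new StringReader(DRL));  // compile the rule

        RuleBase ruleBase = RuleBaseFactory.newRuleBase();
        ruleBase.addPackage(builder.getPackage());

        StatefulSession session = ruleBase.newStatefulSession();
        Order order = new Order(2500.0);
        session.insert(order);     // assert the fact into working memory
        session.fireAllRules();    // let the engine match and act
        session.dispose();

        System.out.println("Discounted? " + order.isDiscounted()); // true
    }
}
```

The point of the exercise: the business logic lives in that rule text, which tools like the new Guided Rules Editor aim to put in front of non-programmers, while the Java plumbing stays the same.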

JBoss blogger Pierre Fricke says the new release "lays the foundation for bringing rules-based solutions into Simple, Open and Affordable SOA deployments. . ." Would that be SOASOA or just SOA-squared?

Tony Baer at Computer Business Review sees the new offering as a move by JBoss to position its new version "as a lightweight, more accessible alternative to spending tens or hundreds of thousands of dollars on more complex alternatives."

Some of the benefits of Rules 4.0 include:
  • More expressive and powerful declarative business action scripting language
  • New Guided Rules Editor with point-and-click functionality
  • Visual modeling technology to declaratively model execution paths of related rules
  • Multi-application support (for stateful and stateless processing)
  • Hibernate readiness, and
  • Business Rules Management System (Technology Preview), a web-based, AJAX-enhanced, collaborative rule authoring, versioning and management system to help non-programming IT workers interactively author and/or modify rules that are then automatically versioned.
In other JBoss community news, ICEsoft Technologies has announced what it's promoting as a "new and improved" version of ICEfaces, the company's flagship Ajax development environment.

Version 1.6.0 now offers deeper integration with JBoss Seam, the Web 2.0 application framework.

According to ICEsoft, ICEfaces extends JavaServer Faces (JSF) and eliminates the need for low-level JavaScript development. In the new release, SeamGen enhancements support the rapid generation of functional Seam + ICEfaces applications. It also contains source code examples.

An earlier e-newsletter from ICEsoft reported that the new version contained more than 180 bug fixes, as well as several enhancements.

ICEsoft claims a developer base of 12,000 and says ICEfaces has been downloaded more than 150,000 times. The latest version can be downloaded from icefaces.org.

Wednesday, July 11, 2007

Parsing search marketing, the 'content pyramid' and RSS strategies with Sam Whitmore

Read a full transcript of the podcast. Listen to the podcast.

In my work covering enterprise application development and deployment strategies, I often find myself also witnessing a sea change in how software providers market their values. Software has always been a challenge to market, and much of the most innovative thinking in online marketing has come from the software industry.

I'm now seeing four distinct legs of support under the software marketing bench: 1) traditional internal marketing (web sites, downloads, product literature), 2) traditional external marketing (advertising, events, webinars, lists, email, newsletters), 3) viral (blogs, podcasts, videocasts, community sites, social media), and 4) search (all of the above plus tagging, sharing, community, relevance).

I'm also seeing a hastening shift from the second leg to the third and fourth, in terms of investment and expected return. Companies are shifting their emphasis from traditional media to social media and search.

Creating and distributing good content is essential to all these activities, and accelerates the movement to social networking and community development. I recently had a podcast conversation with Sam Whitmore, editor and proprietor of Sam Whitmore's Media Survey, in which we discuss these themes along with the burgeoning role of RSS, community, conversations, and search.

Together we wonder whether the "public" relations community will soon gain a new cohort, the "search" relations person. It's a new way to reach the public, the right public, and on the public's terms. Their search terms. Search is the new media.
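As an aside for the mechanically inclined: "subscribing" to content -- or to search results -- via RSS boils down to polling a feed URL and parsing the XML. Here's a minimal sketch using the open source ROME library for Java; the feed URL is a placeholder, and the API names reflect ROME's packages as of this writing.

```java
import java.net.URL;
import java.util.List;

import com.sun.syndication.feed.synd.SyndEntry;
import com.sun.syndication.feed.synd.SyndFeed;
import com.sun.syndication.io.SyndFeedInput;
import com.sun.syndication.io.XmlReader;

public class FeedPoll {
    public static void main(String[] args) throws Exception {
        // Placeholder URL: many search engines expose query results
        // as RSS/Atom feeds at addresses along these lines.
        URL feedUrl = new URL("http://example.com/search?q=soa&format=rss");

        // ROME auto-detects the feed flavor (RSS 0.9x/1.0/2.0, Atom)
        // and normalizes it into one SyndFeed object model.
        SyndFeedInput input = new SyndFeedInput();
        SyndFeed feed = input.build(new XmlReader(feedUrl));

        System.out.println("Feed: " + feed.getTitle());
        List<?> entries = feed.getEntries();
        for (Object o : entries) {
            SyndEntry entry = (SyndEntry) o;
            // A real aggregator would track GUIDs and dates
            // so it could surface only the new items.
            System.out.println(entry.getTitle() + " -> " + entry.getLink());
        }
    }
}
```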

Here are some excerpts:
We're now getting people to understand the concept of "You don’t have to browse anymore." They still search, of course, probably more than ever before. But you think about the two ways ... that people get their information now, it's either through RSS syndication, or through search. And it’s almost quaint to think back about, "Yeah, I think I am going to go through my bookmarks and see what I haven’t visited in a while." I don’t know anybody who does that anymore.

The idea is to start thinking strategically about your content, instead of having thousands of people around your company each creating their own content without much interaction or coordination about it -- but with perhaps a lot of overlap and a lack of reuse, adding up to redundancy. And that goes for everything from mimeographs to RSS feeds, and all in between.

But when you think about content more strategically -- and can plan for and create core content that can be reused and extended across different uses, like marketing literature, the documentation you provide for your services and products, your advertising, as well as your communications with your investors, with analysts, with press -- you create more of a coordinated core set of messages, documents, and content. And we'll be seeing more audio and video in this mix.

If a company can create this content core and allow people to use it and make it accessible -- in the same way as with the development of software tools and components -- you can better control your costs, and you can better control your message, because more of your messaging will be in sync, all coming off of the same core.

Any company that has a strategic direction that they are taking their business to should say, “What are the keywords that relate to our future? What is the content we can create that will drive recognition from those keywords of our value, specifically as an individual company? And how can we create an ongoing process by which we’re feeding that algorithm machine over and over again to retain that high ranking?"

That to me is marketing 2.0.

I think that these IT trade titles and these people that are being rapidly disintermediated, they need to figure out how to get some of their content to rank well in generic search environments. And that brings us back to SEO and the fact that you can subscribe to RSS search results and these people really are getting hammered.

The way you go about a whitepaper is you do research, you get information and you do interviews -- primary research. And what is an interview? It’s a discussion. Why not just create a great discussion with the experts and put that up, instead of putting it into some sort of a turgid-prose, 80-page tome that people only read the executive summary of?

Why not give the long tail its due and put up a series of five key discussions with the experts you would have interviewed anyway for the whitepaper, and let people either read the transcript or glance at the executive summary of each individual interview or discussion, and then pick and choose? To me that’s just a better way to learn. And it also, by the way, is a lot easier for the experts as well as the authors. So it really is a discussion.
Read a full transcript of the podcast. Listen to the podcast.

If any of our listeners are interested in learning more about BriefingsDirect B2B informational podcasts, or in becoming a sponsor of this or other B2B podcasts, please feel free to contact Interarbor Solutions at 603-528-2435.

Produced as a courtesy of Interarbor Solutions: analysis, consulting and rich new-media content production. Dana Gardner’s Podcast on Marketing 2.0 with Sam Whitmore. Copyright Interarbor Solutions, LLC, 2005-2007. All rights reserved.