Monday, June 9, 2008

Serena's Mashup Composer ushers content and widgets into on-demand business mashups

Acting as a mashup matchmaker, Serena Software is bringing together content -- widgets, RSS feeds, and Flash components -- with enterprise data for on-demand business mashups, giving non-technical users access to powerful customized applications without burdening IT departments.

On Tuesday, June 10, Serena will announce the upcoming major iteration of the Redwood City, Calif. company's Mashup Composer service, which allows users to drag and drop a wide variety of consumer information and combine it with data from internal applications -- such as in salesforce.com, Siebel, and Oracle -- to create rich Internet mashups (RIMs).

Users will be able to leverage any kind of widget or rich Internet application, including Adobe Flash, Amazon search, Flickr, Microsoft Silverlight, RSS feeds, YouTube, any of the 30,000 Google gadgets, LinkedIn or Facebook profiles, or external newsfeeds. That's a lot of stuff, and there will soon be even more, especially as the fast-charging social networking space bears fruit.

Serena explains how this works:

"Imagine a scenario where a sales rep is preparing for a big meeting with a new customer. The rep might start with the customer’s record in salesforce.com, and have the mashup fetch related information like a photo and details from the customer’s LinkedIn or Facebook profile, external news feeds showing the company’s latest stock price, credit report information from a Dun & Bradstreet Web service, and widgets showing local weather and traffic in the customer’s location. Soon the rep has all the information needed for the meeting. It’s as easy as personalizing a Yahoo! home page."
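The composition Serena describes can be sketched in a few lines. This is a minimal illustration only, not Serena's Mashup Composer API; the source functions and fields below are hypothetical stand-ins for the CRM, social-profile, and widget feeds:

```python
# Illustrative sketch only -- the sources and fields are hypothetical,
# not Serena's Mashup Composer API.

def crm_record(customer_id):
    # Stand-in for a salesforce.com record lookup.
    return {"name": "Acme Corp", "contact": "Pat Lee", "city": "Boston"}

def social_profile(contact):
    # Stand-in for a LinkedIn or Facebook profile fetch.
    return {"photo_url": "http://example.com/pat.jpg", "title": "VP, Operations"}

def local_widgets(city):
    # Stand-in for weather and traffic widgets keyed to a location.
    return {"weather": "72F, clear", "traffic": "light"}

def build_briefing(customer_id):
    """Merge the feeds into a single pre-meeting briefing view."""
    record = crm_record(customer_id)
    briefing = dict(record)
    briefing.update(social_profile(record["contact"]))
    briefing.update(local_widgets(record["city"]))
    return briefing

print(build_briefing("0017"))
```

The point of the sketch is the shape of the work: each source stays authoritative, and the mashup is just a merge step the business user assembles visually rather than in code.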

While some IT folks may worry about putting this functionality in the hands of non-technical people, Serena says it has that worry covered, providing a "proven governance framework that provides the reliability, security, and compliance that IT requires."

I wrote about this issue last August when I blogged on Serena and what was then its upcoming "Project Vail:"

"The trick is how to allow non-developers to mashup business services and processes, but also make such activities ultimately okay with IT. Can there be a rogue services development and deployment ecology inside enterprises that IT can live with? How can we ignite 'innovation without permission' but not burn the house down?

"Serena believes they can define and maintain such balances, and offer business process mashups via purely visual tools either on-premises or in the cloud."

The new functionality in the Mashup Composer will be available as part of Serena's on-demand release in the third quarter. Word has it that the tool itself will be free, and that pricing will follow the cloud model, based on infrastructure use over time.

The Serena model augurs well for my earlier comments on the power of and need for WOA. Again, I'm not locked into the WOA nomenclature, but the goal of spurring on SOA use and methods by energizing users with Web content remains.

Serena defines its Mashup Composer process as one that enables "business mashups." I like the imagery that connotes. I'd take it a step further and join it with my WOA value comments, such that business mashups become a catalyst to broader SOA use and adoption, while also extending SOA value into the managed cloud.

Consider the power of combining and leveraging the best of SOA, the best of on-demand business mashups, and the powerful insights on users and their communities as defined by the social graph information now available from the social networks.

Effectively bringing together business assets, open web content and defined social relations will offer something quite new and very productive over the next few years. Those companies that jump on this early and master it will develop a broad advantage.

Thursday, June 5, 2008

Apache CXF: What the future holds for Web services frameworks and dynamic languages

Listen to the podcast. Read a full transcript. Sponsor: IONA Technologies.

More open source server components and frameworks continue to emerge from developer communities. One of the latest, Apache CXF, an open-source Web services framework, graduated from incubation recently to become a full Apache Foundation project.

The progeny of the previous merger of the ObjectWeb-managed Celtix project and the XFire Project at Codehaus, CXF joins a growing pool of Apache and other open source projects supporting service-oriented architecture (SOA) infrastructure. Many, like CXF, also enjoy commercial support and associated commercial products, such as IONA Technologies' FUSE.

CXF is on the cusp of broadening beyond conventional Web services, however, as users seek to align the framework with JavaScript, and perhaps more dynamic programming languages, such as Groovy and Ruby. Interoperability is the goal: both backward and forward messaging compatibility across an expanding set of supported technologies. Community-based open source development is adept at adding such breadth and depth to the benefit of all users, and CXF is no exception.

To learn more about CXF and the direction for SOA, middleware and open source development, I recently spoke with Dan Kulp, a principal engineer at IONA who has been deeply involved with CXF; Raven Zachary, the open-source research director at The 451 Group; and Benson Margulies, the CTO of Basis Technology.

Here are some excerpts:
If you are doing any type of SOA stuff, you really need some sort of Web-service stack. There are applications written for ServiceMix and JBI that don't do any type of SOAP calls or anything like that, but those are becoming fewer and farther between. Part of what our Web services bring is the ability to go outside of your little container and talk to other services that are available, or even within your company or maybe with a business partner or something like that.

The whole Apache model is mix and match, and not only when you are talking about licensing. The Apache license is a little easier for commercial vendors to digest, modify, and add to, compared to the GPL, but I think it's also the inherent nature of the underlying infrastructure technologies.

When you deploy an application, especially using open source, it tends to be several dozen distinct components that are being deployed. This is especially true in Java apps, where you have a lot of components or frameworks that are bundled into an application. So, you would certainly see CXF being deployed alongside other technologies to make that work. Things like ServiceMix or Camel, as you mentioned, ActiveMQ, Tomcat, certainly Apache Web Server -- these sorts of technologies are the instruments through which these services are exposed.

A lot of these projects, like Camel and ServiceMix, require some sort of Web-services stack, and they've basically come to CXF as a very easy-to-use and very embeddable stack to meet their Web-services needs. ... One of CXF's advantages shows when what you want to do is deliver to some third party a stack, containing your stuff, that interacts with all of their existing stuff in a nice, lightweight fashion. CXF is non-intrusive in that regard.

CXF gets a lot of attention because it is a full open-source framework, which is completely committed to standards. It gives easy-to-use, relatively speaking, support for them and, as in many other areas, focuses on what the people in the outside world seem to want to use the kit for -- as opposed to some particular theoretical idea ... about what to use it for.

Apache CXF takes a fairly different approach, making the code-first aspect primary. ... So, a lot of these more junior-level developers can pick up and start working with Web services very quickly and very easily, without having to learn a lot of the more technical details.

It's actually kind of fascinating, and one of the neatest things about working in an open-source project is seeing where it pops up. ... One of the examples of that is Groovy Web service. Groovy is another dynamic language built in Java that allows you to do dynamic things. I'm not a big Groovy user, but they actually had some requirements to be able to use Groovy to talk to some Web services, and they immediately started working with CXF.

They liked what they saw, and they hit a few bugs, which was expected, but they contributed back to the CXF community. I kept getting bug reports from people but was wondering what they were doing. It turns out that Groovy's Web-services stack is now based on CXF. That type of thing is very fascinating from my standpoint, just to see that type of stuff develop.

Apache CXF 2.1 was released about a week after we graduated, and it brought forth a whole bunch of new functionality. The JavaScript support was one piece of it. Whole new tooling was another, also a CORBA binding, and there is a whole bunch of new stuff, including some REST-based APIs. So, 2.1 was a major step forward.

Now that we're graduated, there are a lot more people looking at it, which is good. We're getting a lot more input from users. There are a lot of people submitting other ideas. So, there is a certain track of people just trying to get some bug fixes in and getting some support in place for those other people.

I like the fact that in CXF they are looking at a variety of protocols. It's not just one implementation of Web services. There's SOAP, REST, CORBA, other technologies, and then a number of transports, not just HTTP. The fact is that when you talk to enterprises, there's not a one-size-fits-all implementation for Web services. You need to really look at services, exposing them through a variety of technologies.

I like that approach. It really matches the needs of a larger variety of enterprise organizations than just a specific technology implementation of Web services would. I mean, that's the approach that you're going to see from open-source projects in the space. The ones that provide the greatest diversity of protocols and transports are going to do quite well.

There's a whole bunch of other ideas that we're working on and fleshing out. On the code-first stuff that I mentioned earlier, we have a bunch of other ideas about how to make code-first even better.

There are certain tool kits that you kind of have to delve down into either configuration or WSDL documents to accomplish what you want. It would be nice if you could just embed some annotations on your code, or something like that, to accomplish some of that stuff. We're going to be moving some of those ideas forward.
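The code-first idea Kulp describes can be illustrated generically: instead of starting from WSDL documents or configuration, you mark up ordinary code and derive the service contract from it. In CXF itself this is done with Java annotations; the Python decorator below is purely a language-neutral sketch of the concept, with the registry and "contract" format invented for illustration:

```python
import inspect

# Hypothetical registry standing in for a framework's annotation processor.
_REGISTRY = {}

def web_operation(func):
    """Illustrative 'annotation': register a plain function as a service operation."""
    _REGISTRY[func.__name__] = func
    return func

@web_operation
def get_quote(symbol: str) -> float:
    # Stand-in business logic; the developer writes only ordinary code.
    return 42.0

def describe_service():
    """Derive a minimal contract from the annotated code, the way a
    code-first toolkit generates WSDL from the classes it finds."""
    ops = {}
    for name, func in _REGISTRY.items():
        sig = inspect.signature(func)
        ops[name] = {
            "params": {p: a.annotation.__name__ for p, a in sig.parameters.items()},
            "returns": sig.return_annotation.__name__,
        }
    return ops

print(describe_service())
```

The appeal for junior developers is visible here: the contract is a byproduct of code they already know how to write, rather than a separate artifact to author and keep in sync.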

There's also a whole bunch of Web-services standards, such as WS-I and WS-SecureConversation, that we don't support today, but we are going to be working to make sure that they are supported.

When you look at growth opportunities, back in 2001, JBoss app server was a single-digit market share, compared to the leading technologies at the time, WebSphere from IBM and WebLogic from BEA. In the course of four years, that technology went from single-digit market share to actually being the number one deployed Java app server in the market. I think it doesn't take much time for a technology like CXF to capture the market opportunity.

So, watch this space. I think this technology and other technologies like it, have a very bright future.
Listen to the podcast. Read a full transcript. Sponsor: IONA Technologies.

Wednesday, June 4, 2008

JustSystems moves dynamic document management deeper into enterprise

Structured authoring -- it's not just for technical documents any more. JustSystems today announced XMetaL for Enterprise Content Management (ECM), which integrates with more than 20 commercial repositories and file systems.

This new offering provides seamless integration to all leading content management systems, including repositories from IBM FileNet, EMC Documentum, OpenText, Interwoven, and Microsoft. [Disclosure: JustSystems is a sponsor of BriefingsDirect podcasts.]

JustSystems has also announced an original equipment manufacturer (OEM) agreement with IBM, under which the company will embed and resell IBM WebSphere Information Integrator Content Edition (IICE) with the new XMetaL product. This is designed to allow companies to broaden XMetaL deployments and to leverage repositories they're currently using to store and manage content.

According to JustSystems, XMetaL for ECM will allow companies to start using structured authoring, no matter which repositories are already in place. Companies will also be able to deploy it across departments without disrupting current content management, as well as integrate and automate content creation and publishing across repositories.

Structured documents can be a valuable ally of service-oriented architecture (SOA) by providing data to workers in the document formats to which they are accustomed, and, at the same time, allowing them to focus on authoritative data and content, while eliminating the drudgery of validating and reconciling documents.

I recently wrote a white paper on the role of structured authoring, dynamic documents, and their connection to SOA. Read the whole paper here.

Back in April, I recorded a podcast with Jake Sorofman, senior vice president of marketing and business development for JustSystems North America. The sponsored podcast described the tactical benefits of recognizing the dynamic nature of documents, while identifying the strategic value of exposing documents and making them accessible through applications and composite services via SOA.

In the podcast, Sorofman explained the value of structured authoring in the enterprise:

"There are really a couple of different issues at work here. The first is the complexity of a document makes it very difficult to keep it up to date. It’s drawing from many different sources of record, both structured and unstructured, and the problem is that when one of the data elements changes, the whole document needs to be republished. You simply can’t keep it up-to-date.

"This notion of the dynamic documents ensures that what you’re presenting is always an authoritative reflection of the latest version of the truth within the enterprise. You never run the risk of introducing inaccurate, out-of-date, or stale information to field-based personnel."
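The dynamic-document idea Sorofman describes can be sketched simply: the document is a view over authoritative sources of record, regenerated at read time rather than republished by hand. The sources and template below are hypothetical, not XMetaL's actual mechanism:

```python
# Illustrative sketch of a 'dynamic document': the output is regenerated
# from sources of record on demand, so it cannot go stale. The data and
# template here are hypothetical.

SOURCES = {
    "part_number": "XR-200",
    "price_usd": 149.00,
    "torque_spec": "12 Nm",
}

TEMPLATE = ("Service bulletin: part {part_number} lists at "
            "${price_usd:.2f}; torque to {torque_spec}.")

def render():
    """Pull the latest values at render time instead of baking them in."""
    return TEMPLATE.format(**SOURCES)

print(render())
SOURCES["price_usd"] = 159.00   # a source of record changes...
print(render())                 # ...and the next render reflects it automatically
```

Contrast this with the republishing problem in the quote above: when the document embeds copied values, every data change forces a manual republish; when it embeds references, the change propagates for free.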

You can listen to the podcast here or read a full transcript here.

Tuesday, June 3, 2008

Spike in enterprise 'events' spurs debut of Event Processing Technical Society

The recent growth -- and expected spike -- in business event data in enterprises has led a group of IT industry leaders to form the Event Processing Technical Society (EPTS), designed to encourage adoption and effective use of event processing methods and technology in applications.

Among the founding members are such heavy hitters as IBM, Oracle, TIBCO Software, Inc., Gartner Research, Coral8 Inc., Progress Software, and StreamBase.

Event processing pioneer Dr. David Luckham, a founding member of EPTS, explained in a press release:

“We've had decades of development of event processing technology for simulation systems, networking, and operations management. Now, the explosion in the amount of business event data being generated in modern enterprises demands a new event processing technology foundation for business intelligence and enterprise management applications.”

EPTS has five initial goals:
  • Document usage scenarios where event processing brings business benefit

  • Develop a common event-processing glossary for its members and the community-at-large to use when dealing with event processing

  • Accelerate the development and dissemination of best practices for event processing

  • Encourage academic research to help establish event processing as a research discipline and encourage the funding of applied research

  • Work with existing standards development organizations such as the Object Management Group (OMG), OASIS, and the W3C to assist in developing standards in the areas of event formats, event processing interoperability, and event processing (meta) modeling and (meta) languages.
EPTS, which does not plan to develop standards itself, has already begun work on an initial draft of the proposed glossary. A use-case work group is generating templates around documentation and presentation of the use cases.
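The kind of processing the group aims to codify can be illustrated with a minimal pattern detector: fire when more than N events of a given type arrive within a sliding time window. This is a generic sketch of the technique, not any member vendor's engine or the EPTS glossary's formal model:

```python
from collections import deque

# Minimal complex-event-processing pattern: detect N events of one type
# inside a sliding time window. Generic illustration only.

def make_detector(event_type, threshold, window_seconds):
    timestamps = deque()

    def feed(event):
        """Feed one (type, timestamp) event; return True when the pattern fires."""
        etype, ts = event
        if etype != event_type:
            return False
        timestamps.append(ts)
        # Evict events that have slid out of the window.
        while timestamps and ts - timestamps[0] > window_seconds:
            timestamps.popleft()
        return len(timestamps) >= threshold

    return feed

detect = make_detector("order_failed", threshold=3, window_seconds=60)
events = [("order_failed", 0), ("order_ok", 10),
          ("order_failed", 20), ("order_failed", 45)]
print([detect(e) for e in events])  # → [False, False, False, True]
```

Real engines layer query languages, joins across streams, and persistence on top of this, but the core window-and-threshold mechanic is what distinguishes event processing from plain message handling.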

Event processing was a hot topic at the recent TIBCO user conference, TUCON. (Disclosure: TIBCO is a sponsor of BriefingsDirect podcasts.)

Fellow ZDNet blogger Joe McKendrick has some thoughts on event processing, too.

The new consortium plans three additional work groups. The first will focus on developing information on event processing architecture. Another will identify requirements for the interoperability among event processing applications and platforms. The third will collaborate with the academic community to develop courses in this area.

The advance in the scale and complexity of streams of events will place a greater burden on infrastructure and architects. But the ability to manage and harvest analysis from these events could be extremely powerful, and provide a lasting differentiator for expert practitioners.

While the processing of such events has its roots in financial companies and transactions, the engine for dealing with such throughputs and variable paths will find uses in many places. The vaulting commerce expected as always-on mobile Web, GPS location and social graph data collide is a prime example.

We hit on these types of transactions as the progeny of online advertising in a recent BriefingsDirect Analyst Insights roundtable podcast.

Consumers and end users should begin to enjoy what they may well perceive as "intelligent" services -- based on the fruits of complex events processing -- from their devices and providers. Harvesting and using more data from sensors and device meshes will also require the scale that event processing requires.

We should also chalk this up as yet another facet of the growing definition of cloud computing; event processing as a service, within a larger set of cloud-based services, will also build out in the coming years. The whole trend of event processing bears close monitoring.

EPTS will hold its next meeting Sept. 17-19 in Stamford, Conn. More information on the consortium can be found at the EPTS Web site.

Friday, May 23, 2008

Microsoft opens philosophical can of worms with Live Search Cashback

Talk is bubbling up daily across the blogosphere, the Gillmor Gang, and Techmeme about social graph personal information. This may be among the most important discussions and topics of our time. How the "social mesh" works out now will affect our lives and businesses for a long time. It may even impact how we define what "me" is online. We really need to get it right, ASAP.

Yet much of the talk focuses on technology, privacy, use rights, and still loosely defined standard approaches to protecting user control over data. It's still murky how the online social network services will own and control the user- and relationship-defining data inside their social networks, including Twitter. But there's a larger set of issues that has to do with how we want technology and the Internet to affect us as people, as businesses, as a society, as a market of markets, and as a species.

UPDATE: Many of these issues came up, especially toward the end of Friday's Gillmor Gang with Google Director of Engineering David Glazer. One takeaway is that, ironically, Microsoft should be among Google Friend Connect's best friends.

The discussion on social graph data portability gets to a philosophical level quickly, because the ways we have codified our personal relationships to each other -- and to larger organizations or power centers -- over eons do not necessarily apply adequately to the new virtual boundaries. It's hard to know on the Web what defines the rights of the individual, the family, tribe, community, company, village, town, state, nation, civilization, race, or species. Do accepted and proven cultural patterns offline fully translate into social patterns online?

The older established "contracts" -- from Codex Hammurabi to Magna Carta to Mayflower Compact to U.S. Constitution to the User Terms of Agreement -- do not seem to get the job fully done anymore. It's not clear what I am entitled to online, whereas I'm pretty sure I know what I'm entitled to offline, and I know what to do to enforce getting what I'm entitled to offline legally, ethically and politically.

In essence, we as online users and small businesses don't have any social-order contracts with the online providers, other than what their lawyers put in the small print when you "accept" their free or paid services. And, of course, they have made available their privacy policies for all to see. So there. Click away, users galore, while they store away the user data and relationships analytics.

As a person, you only retain the right not to click (as long as you pay through the two-year user subscription agreement, or suffer the penalty charge for leaving). If you're lucky, you'll be able to take your phone number with you if you walk, but not necessarily your email address, your contacts, or your social interaction definitions. Most of the data about whatever you did while nestled in the rosy social bosom of their servers remains with them, unless they volunteer to let it be open. So far.

Without belaboring the implications on the metaphysical scale, my point is that how our online social interactions are currently defined and controlled places us in uncharted territory. And as with any social contract, the implicit and explicit ramifications of where we find ourselves later on need to be taken very seriously.

We'll want the ability to back out, if the unforeseen future warrants it, without too much pain, with our open data intact. We should all want escape clauses for what we do online over the next several years, just to be safe. Who you gonna call if it's not fair?

If things don't go well for the user or individual business, what could be done? Because this is about the Web, there isn't a government to lobby, a religious doctrine to fall back on, a meta data justice code of conduct, nor an established global authority to take directives from. The older forms of social contract enforcement don't have a clue. There is only the User Terms of Agreement, the codex of our time. Read it and weep.

Because this is about the Web, the early adopters basically make it up as they go and hope for the best. It's been a great ride. The service providers try and keep up with the fast-changing use patterns, and then figure out a business model that has legs. They write up more User Terms of Agreement. Startups get funded based on their ability to get some skin in the game, even without a business model. They show the investors the User Terms of Agreement, and get their rounds. More work goes into the User Agreements than into the infrastructure to keep the thing working once the clicks come.

This laissez-faire attitude has worked pretty darn well for building out the Web as an industry, thankfully. But now we're talking about more than building out the no-holds-barred Web; we're talking about social contracts ... We're talking about what the user possesses from their role in building out the Web, in populating the social networks, in authoring the blogosphere. Is there any social collective ownership or rights for the participants in the Web? Or is it only really -- in the final analysis -- owned by those who control the means of production of the services?

There's the Web, and there's the blogosphere -- are they the same? What rights does the individual, the person, the blog entity have on the commercial Web? Does the offline me possess the same social powers online? I really don't know.

What's clear is that people like Mike Arrington, Marc Canter, Steve Gillmor, Robert Scoble, and Dave Winer (among many others) want as much freedom over what they do online as Western Civilization has endowed on them and their ancestors offline. Some circles, and some of these people, want even more social power online than what has been the norm offline. More power to them.

There is a power clash a brewin'. The U.S. has long struggled over states rights versus federal rights. The individual has looked to both -- and pitted them against each other -- to define and protect individual rights.

But what about online? When push comes to shove, how do individual rights assert themselves against what the service provider can perfectly legally assert? If the server farm says they own your online address book, they probably do legally (see the Use Terms). If they say they own the meta data from your click stream on their servers over the past three years, they probably do.

So far, user rights have been strictly voluntary on behalf of the providers. Some are built into agreements. The rising tide of online adoption and the essential need to generate traffic and clicks have protected users, to a point. Let's hope it continues. I hope voluntary is enough.

Folks, you should recognize that you already have a lot of power, given the fact that social networks are falling all over themselves to show how "open" they are. They fear that you can and will bolt, even if you lose some data (the first time). Data portability is recognized by the Googles and Microsofts as hugely important; shouldn't it be huge to all of us, too?

Because as we move to always-on social interactions across all we do on the Web, what we do socially online may begin to outweigh what we do socially offline. For some of us this is already true. What distinguishes us as online or offline is blurred, and I believe it will grow more so until any difference becomes irrelevant.

I am social, therefore I am social. It will not matter how or where. Yet online, the fabric of control over my social universe is more under the influence of the User Terms of Agreement than anything else. Will I lose any part at all of the personal freedoms won by my ancestors when I move my social activities online?

What defines any person by what they do online -- is this a business agreement based on User Terms of Agreement, or something more, defined by centuries-old social contracts and mores? Does freedom trump user agreements?

When would a concept like human freedom trump any user agreement, even if it is well documented in Delaware courts? Am I free to take my social graph data, that which defines me as me, with me anywhere online because it's an inalienable right? If so, I should not need any OpenSocial standards. It's self-frickin-evident! I should not need it in the User Terms of Agreement because it's long established as precedent.

But here's the rub that came to the surface this week when Microsoft crossed the Rubicon in the Web world with Live Search Cashback.

If users can and will assert that their social graph information is theirs by virtue of their culturally endowed freedom as humans, then what about their "commerce graph?" Who you are by what you buy is not too much different from who you are by whom you associate with. Is commerce social, or is being social commerce?

My social graph contains my personal meta data and my index of contacts, their context to me, and what actually defines me as a social creature. My commerce graph exists, too; it's on Amazon, Walmart.com, and dozens of other vendors that know me by how I shop, learn, peruse, compare, and perhaps buy. If I search as part of the shopping process, then my commerce graph is on Google, Yahoo! and Microsoft (mostly on Google). I do commerce through my social activities, and I may want a social network with those I buy from and sell to.

All this user intention and activity information is related and should not be separated. I should be able to mix and match my data regardless of the server. I reached those servers through my own device and browser; I made those clicks and punched those keys on my machine before they showed up on someone else's. I own my actions as a free human.

Microsoft is now finding ways to build out a business model via Live Search Cashback (with more to come, no doubt) that takes your commerce graph and, in essence, sells or barters it to the sellers of goods and services. I'm not saying this is in any way bad, or unproductive. It seems a logical outcome of all that has preceded it online. I expect others to follow suit.

But it does have me wondering. Who owns my commerce graph? Isn't it connected to my social graph? And if Microsoft can make money off of it, why can't I? Can I only make money off of my commerce graph when I use a certain provider's services and only through its partners? If so, then it's not really my commerce graph. I'm only as free as the User Terms of Agreement say.

If my social graph is mine, and I can move and use it freely, then I surely will want the same to be true for my commerce graph (or any other user pattern graph). This is an essential inalienable right, but I think I want it in writing.

So, please, in order for any of us progeny of Western Civilization to use any of these burgeoning online services, can we have all of this freedom business spelled out clearly in the User Terms of Agreement?

Let's make it the first line item for all online agreements from now on: "Dear User, You are a human and you are free and so that also pertains to everything you do on our Web sites and services."

Until we have technical standards or neutral agencies to route and offer our control over our own use data, then we should all insist on better User Terms of Agreement, those that spell out the obvious. We are free, our data is ours, we should be able to control it.

Wednesday, May 21, 2008

ZoomInfo spins off 'bizographic' platform for controlled circulation online advertising play

Business information provider ZoomInfo has spun off its advertising business units in a new company, Bizo, offering a targeted B2B advertising platform, or what it calls "bizographic" advertising.

Privately held and venture-backed ZoomInfo, Waltham, Mass., announced a new set of business segments last fall, but has now taken the additional step of spinning the unit out. Former general manager and senior vice president Russell Glass will serve as CEO of the new company, which is expected to launch later this year. [Disclosure: ZoomInfo has been a sponsor of some BriefingsDirect B2B podcasts and videocasts that I have produced.]

Bizographic advertising, as ZoomInfo explains it, provides highly targeted demographic and behavioral advertising, allowing marketers to target their online advertising based on the audience of a site instead of the content.

For example, if a company wants to reach technology decision makers for an IT product offering or high-income individuals for a platinum credit card offer, it could use bizographic advertising to target directors of IT or CEOs respectively.
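The targeting step in that example reduces to a simple predicate: match a campaign's required audience attributes against the visitor's business profile, ignoring page content entirely. The profile fields and campaigns below are hypothetical, invented for illustration rather than drawn from Bizo's platform:

```python
# Illustrative sketch of audience-based ('bizographic') ad targeting.
# Campaigns and profile fields are hypothetical, not Bizo's actual schema.

CAMPAIGNS = [
    {"name": "it-product",
     "match": {"functional_area": "IT", "title_level": "director"}},
    {"name": "platinum-card",
     "match": {"income_band": "high"}},
]

def eligible_campaigns(profile):
    """Return campaigns whose every targeting attribute matches the profile."""
    return [c["name"] for c in CAMPAIGNS
            if all(profile.get(k) == v for k, v in c["match"].items())]

print(eligible_campaigns({"functional_area": "IT", "title_level": "director"}))
# → ['it-product']
```

The contrast with contextual advertising is the lookup key: the decision hinges on who is reading, not what page they happen to be on.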

The field has heated up recently as CBS intends to acquire CNET (parent company of this blog's host, ZDNet) and its BNET division, which also slices and dices audiences by work and functional definitions for the benefit of advertising targeting. Could Bizo also be on the block?

According to ZoomInfo officials, Bizo will continue to leverage the company’s understanding of business people and companies to allow marketers to target business users based on thousands of segmenting possibilities, including combinations of title, company, industry, functional area, company size, education, location, etc. The company expects over 20 million targetable business users in its network when it launches.

Bryan Burdick, ZoomInfo's president explained the move:

"While B2B advertising is complementary to ZoomInfo’s business, the market has been starved for the ability to target business professionals online. Creating a new business in order to meet that need was an ideal solution for us."

I gave my readers a heads-up on what I called "controlled circulation advertising" last December, referring specifically to ZoomInfo:

ZoomInfo is but scratching the surface of what can be an auspicious third (but robust) leg on the B2B web knowledge access stool. By satisfying both seekers and providers of B2B information on business needs, ZoomInfo can generate web page real estate that is sold at the high premiums we used to see in the magazine controlled circulation days. Occupational-based searches for goods, information, insights and ongoing buying activities are creating the new B2B controlled circulation model.

ZoomInfo, a business information search engine, finds information about industries, companies, people, products and services. The company’s semantic search engine continually crawls millions of company Websites, news feeds and other online sources to identify company and people information, which is then organized into profiles.

ZoomInfo currently has profiles on nearly 40 million people and over 4 million companies, and its search engine adds more than 20,000 new profiles every day.

Splunk goes virtual, unveils broad IT search capabilities for Citrix XenServer

Splunk, which provides indexing and search technology for IT infrastructures, this week made its move into the virtual realm with the announcement of Splunk for Citrix XenServer Management.

The San Francisco company says this is just its first foray into search support services for virtualization and that it will release similar applications for each of the leading server virtualization platforms in the near future. [Disclosure: Splunk is a sponsor of BriefingsDirect podcasts.]

The Splunk announcement comes during a Citrix cavalcade of news and developments, including the expected delivery of its desktop as a service portfolio.

While server virtualization provides significant efficiency and utilization improvement benefits to datacenters, it also brings complexity in troubleshooting glitches. Performance and capacity issues can arise when applications share the same physical host. With multiple virtual machines (VMs) sharing a pool of server, storage and network resources, changes to any one layer or VM could potentially affect others – and the applications they contain. Root cause analysis is even more of a challenge when instances of virtualized containers and runtimes pop in and out of use via dynamic provisioning.

Splunk's indexing and search approach aims to provide a full view of IT-generated data, not only from the hypervisor and VM, but from the server, guest operating system, applications, and the network. Splunk’s technology indexes data across all tiers of the infrastructure in near real-time. This allows operators and administrators to maintain a large, dynamic IT environment with fewer people, higher automation, and easier service performance management.
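The core idea behind this kind of IT search can be sketched in a few lines. The following toy example (my own illustration, not Splunk's actual engine) builds an inverted index over timestamp-ordered log lines arriving from different tiers, so a single keyword query spans the hypervisor, guest OS, and application logs at once:

```python
# Minimal sketch of IT search: index log lines from many sources into
# an inverted index, then answer keyword queries across all of them.
from collections import defaultdict

class LogIndex:
    def __init__(self):
        self.events = []               # (source, line) in arrival order
        self.index = defaultdict(set)  # term -> set of event ids

    def ingest(self, source, line):
        """Index one log line as it arrives (near real-time)."""
        event_id = len(self.events)
        self.events.append((source, line))
        for term in line.lower().split():
            self.index[term].add(event_id)

    def search(self, *terms):
        """Return events containing every query term, across all sources."""
        ids = set.intersection(*(self.index[t.lower()] for t in terms))
        return [self.events[i] for i in sorted(ids)]

idx = LogIndex()
idx.ingest("hypervisor", "vm42 migrated to host7")
idx.ingest("guest-os",   "kernel: eth0 link down")
idx.ingest("app",        "ERROR timeout talking to db on vm42")
print(idx.search("vm42"))
# a single query surfaces vm42 events from both the hypervisor
# and the application tiers, which is what aids root cause analysis
```

A production system adds time-range filters, field extraction, and distributed indexing, but the payoff shown here is the same: when a VM pops in and out of existence, one search correlates its traces across every layer it touched.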

Splunk for Server Virtualization Management supports virtualization planning, workload optimization, performance monitoring, root cause analysis and log management, says the company.

The new product is available immediately. Users can download a free 30-day trial from the company's Web site.

Splunk has been in the news lately, and on Monday announced that communications provider BT has agreed to license Splunk's IT search platform technology to build a managed-security product that will allow customers to preserve 100 percent of the logs on a network.

Three weeks ago, the company unveiled Splunk for Change Management, an application to audit and detect configuration changes, and Splunk for Windows, which indexes all data generated by Windows servers and applications.

Tuesday, May 20, 2008

IBM executive defines next generation of enterprise datacenters through cloud computing

IBM Vice President for Enterprise Systems Rich Lechner took the stage at the Forrester Research IT Forum on Tuesday to explore the definition of new enterprise datacenters that will enable new levels of business innovation.

Factors buffeting the definition of the new class of datacenter include globalization, a rising tide of information, and the need for expanded flexibility and adaptability in business models.

To compete, companies need to operate without borders and become a globally integrated enterprise. "There are huge resource pools emerging around the world ... with new ideas and creativity," said Lechner. It's more than outsourcing, he said; it's about integrating these resources.

The tide of data and devices, of resources, and assets will continue to explode. How can you best use the data that flows all around you?

New business models will evolve, said Lechner. The impact of social networking and peer influences on buying decisions is just beginning to be felt.

Virtualization will remake the IT landscape, as will cloud computing, virtual worlds, and new levels of scaling when it comes to compute power, said Lechner.

Cloud computing allows an unbounded aspiration toward the best user experiences. "It provides anytime, anywhere access to IT resources delivered dynamically as a service," he said. Cloud computing expands capacity almost indefinitely.

IBM's cloud initiatives are allowing technology incubation, data-intense workloads, government-led initiatives and new types of software development support.

IT plus cloud computing can enable change. How to get started? Simplify using virtualization, share infrastructures via SOA, and create a dynamic ability to access data and knowledge, said Lechner.

The world is changing to enterprises without borders, unbounded IT infrastructure, ever-larger data sets, and a need for collaboration that increasingly crosses many organizational and sourcing types, he said.

Additionally IBM is learning a lot from Google and vice versa when it comes to cloud computing, said Lechner. Cloud computing allows its practitioners to isolate compute units and make their use far more efficient economically via dynamic provisioning.

For data security, users can physically isolate data using partitioning. IBM for years has been hosting multiple companies on single mainframes with no data protection or privacy issues. The technology exists to leverage the economics of cloud computing while protecting data, said Lechner.

"It's about removing IT as an inhibitor," he said.