Monday, August 24, 2009

IT and log search as SaaS gives operators fast, affordable and deep access to system behaviors

Listen to the podcast. Find it on iTunes/iPod. View a full transcript or download the transcript. Learn more. Sponsor: Paglo.

Automatically discover your IT data and make it accessible and useful. Get started for free.

Complexity of data centers escalates. Managed service providers face daunting performance obligations. And the budget to support the operations of these critical endeavors suffers downward pressure.

In this podcast, we explore how IT search and systems log management as a service provides low-cost IT analytics that harness complexity to improve performance at radically reduced costs. We'll examine how network management, systems analytics, and log search come together, so that IT operators can gain easy access to identify and fix problems deep inside complex distributed environments.

Here to help better understand how systems log management and search work together are Dr. Chris Waters, co-founder and chief technology officer at Paglo, and Jignesh Ruparel, system engineer at Infobond, a value-added reseller (VAR). The discussion is moderated by me, BriefingsDirect's Dana Gardner.

Here are some excerpts:
Waters: [Today] there’s just more information flowing, and more information about the IT environment. Search is a great technology for quickly drilling through a lot of noise to get to the exact piece of data that you want, as more and more data flows at you as an IT professional.

One of the other challenges is the distribution of these applications across increasingly distributed companies and applications that are now running out of remote data centers and out of the cloud as well.

When you're trying to monitor applications outside of a data center, you can no longer use software systems that you have installed on your local premises. You have to have something that can reach into that data center. That’s where being able to deliver your IT solution as software-as-a-service (SaaS) or a cloud-based application itself is really important.

You've got this heterogeneity in your IT environments, where you want to bring together solutions from traditional software vendors like Microsoft and cloud providers like Amazon, whose EC2 lets you run things out of the cloud, along with software from open-source providers.

All of the software in these systems and this hardware is generating completely disparate types of information. Being able to pull all that together and use an engine that can suck up all that data in there and help you quickly get to answers is really the only way to be able to have a single system that gives you visibility across every aspect of your IT environment.

And "inventory" here means not just the computers connected to the network, but the structure of the network itself -- the users, the groups that they belong to, and, of course, all of the software and systems that are running on all those machines.

Search allows us to take information from every aspect of IT, from the log files that you have mentioned, but also from information about the structure of the network, the operation of the machines on the network, information about all the users, and every aspect of IT.

We put that into a search index, and then use a familiar paradigm, just as you'd search with Google. You can search in Paglo to find information about the particular error messages, or information about particular machines, or find which machines have certain software installed on them.
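The search paradigm Waters describes can be pictured with a toy inverted index over log lines. This is purely an illustrative sketch, not Paglo's implementation; the log lines, function names, and AND-query semantics are invented for the example:

```python
from collections import defaultdict

# Sample log lines, invented for illustration.
logs = [
    "ERROR disk full on server-01",
    "INFO backup completed on server-02",
    "ERROR connection timeout on server-01",
]

# Map each token to the set of log lines that contain it.
index = defaultdict(set)
for line_no, line in enumerate(logs):
    for token in line.lower().split():
        index[token].add(line_no)

def search(query):
    """Return log lines containing every term in the query (AND semantics)."""
    terms = query.lower().split()
    hits = set.intersection(*(index.get(t, set()) for t in terms))
    return [logs[i] for i in sorted(hits)]

print(search("error server-01"))
```

Running `search("error server-01")` returns both ERROR lines for that host, which is the Google-style experience described above: type a few terms, get the matching slices of IT data back.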

We deliver the solution as a SaaS offering. This means that you get to take advantage of our expertise in running our software on our service, and you get to leverage the power of our data centers for the storage and constant monitoring of the IT system itself.

The [open source] Paglo Crawler is a small piece of software that you download and install onto one server in your network. From that one server, the Paglo Crawler then discovers the structure of the rest of the network and all the other computers connected to that network. It logs onto those computers and gathers rich information about the software and operating environment.

That information is then securely sent to the Paglo data center, where it's indexed and stored on the search index. You can then log in to the Paglo service with your Web browser from anywhere in your office, from your iPhone, or from your home and gain visibility into what's happening in real time in the IT environment.

This allows people who are responsible for networks, servers, and workstations to focus on their expertise, which is not maintaining the IT management system, but maintaining those networks, servers, and workstations.

The Crawler needs some access to what’s going on in the network, but any credentials that you provide to the Crawler to log in never leave the network itself. That’s why we have a piece of software that sits inside the network. So, there are no special firewall holes that need to be opened, and no security is compromised.

There is another aspect, which is very counterintuitive and which people don't expect when they think about SaaS. Here at Paglo, we are focused on one thing: securely and reliably operating the Paglo service. So, the expertise that we put into that is much more focused than you would expect within an IT department, where you are focused on solving many, many different challenges.

Ruparel: For 15 years, we [at Infobond] have been primarily a break-fix organization, and we are moving into managed services and monitoring services. We needed visibility into the networks of the customers we service. For that, we needed a tool that would be compatible with the various protocols that are out there to manage networks -- namely SNMP, WMI, and Syslog. We needed to have all of them go into a single tool and be able to quickly search for various things.
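To give a feel for the kind of normalization such a tool performs before indexing, here is a minimal parser for classic BSD-style syslog lines. This is an illustrative sketch, not Infobond's or Paglo's code; the regex and field names are assumptions that cover only the common single-line format:

```python
import re

# Matches the classic BSD syslog shape:
#   "Aug 24 10:15:32 web01 sshd[2211]: Failed password for root"
# Field names (timestamp, host, tag, pid, message) are our own choice.
SYSLOG_RE = re.compile(
    r"^(?P<timestamp>\w{3}\s+\d{1,2}\s[\d:]{8})\s"
    r"(?P<host>\S+)\s"
    r"(?P<tag>[^:\[]+)(?:\[(?P<pid>\d+)\])?:\s"
    r"(?P<message>.*)$"
)

def parse_syslog(line):
    """Return a dict of named fields, or None if the line doesn't match."""
    m = SYSLOG_RE.match(line)
    return m.groupdict() if m else None

record = parse_syslog("Aug 24 10:15:32 web01 sshd[2211]: Failed password for root")
print(record["host"], record["tag"], record["message"])
```

Once syslog, SNMP, and WMI events are flattened into records like this, they can all land in the same search index and be queried uniformly.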

We found that the technology that Paglo is using is very, very advanced. They aggregate the information and make it very easy for you to search.

You can very quickly create customized dashboards and customized reports based on that data for the end customer, thus providing more of a personal and customized approach to the monitoring for the customers.

Some of the dashboards are a common denominator to various sorts of customers. An example would be a Microsoft Exchange dashboard. Customers would love to have that dashboard up on the screen.

These are some things that are a common denominator to almost all customers that are moving with the technology, implementing new technologies, such as VMware, the latest Exchange versions, Linux environments for development, and Windows for their end users.

The number of pieces of software and the number of technologies that IT implements is far more than it used to be, and it’s going to get more and more complex as time progresses. With that, you need something like Paglo, where it pulls all the information in one place, and then you can create customized uses for the end customers.

If I go and set things up without Paglo, it would require me to place a server at the customer site. We would have to worry about not only maintenance of the hardware, but maintenance of the software at the customer site as well, and we would have to shoulder all of that effort.

We would then have to make sure that the systems those servers communicate with are also maintained and steady 24/7. We would have to have multiple data centers, so that in case one data center dies, another takes over. As an MSP, we would bear all of that infrastructure cost.

At the end of the day, I look at it very simply as collecting information in one place, and then being able to extract that easily for various situations and environments.

Now, if you were to look at it from a customer's perspective, it's the same situation. You have a software piece that you install on a server. You would probably need a person dedicated for approximately two to three months to get the information into the system and presentable to the point where it's useful. With Paglo, I can do that within four hours.

Waters: We have a lot of users who are from small and medium-sized businesses. We also see departments within some very large enterprises, as well, using Paglo, and often that's for managing not just on-premise equipment, but also managing equipment out of their own data centers.

Paglo is ideal for managing data-center environments, because, in that case, the IT people and the hardware are already remote from each other. So, the benefits of SaaS are double there. We also see a lot of MSPs and IT consultants who use Paglo to deliver their own service to their users.

Ruparel: As far as cost is concerned, right now Paglo charges $1.00 a device. That is unheard of in the industry right now. The cheapest that I have gotten from other vendors -- where you would install a big piece of hardware and the software that goes along with it -- is approximately $4-5 per device, and that doesn't include delivering a central source of information that is accessible from anywhere.

As far as infrastructure cost goes, we save a ton of money. Manpower-wise, in the number of hours that I have to have engineers working on it, we save tons of time. And number three, after all of that, what I pay to Paglo is still a lot less than it would cost me to do it myself.

Sunday, August 23, 2009

ITIL 3 leads way in helping IT transform into mature business units amid the 'reset economy'

Listen to the podcast. Find it on iTunes/iPod. View a full transcript, or download the transcript. Learn more. Sponsor: Hewlett Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard.

Running IT departments as mature business units has clearly become more pressing. Recessionary budget pressures and the need to compare existing IT costs to newer options and investments means IT and business leaders need to understand how IT operates from an IT services management (ITSM) perspective.

The "reset economy" has moved the business and operations maturity process of IT from "nice to have" to "must have," if costs are going to be cut without undermining operational integrity. IT financial management (ITFM) must be pervasive and transparent if true costs are to be compared to alternative sourcing options like data center modernization, SaaS, virtualization, and cloud computing models.

Fortunately, there is a template and tried-and-true advice on moving IT operations to such business unit maturity. The standards and methods around ITIL Version 3 provide a pattern for better IT efficiency, operational accountability and ITSM. Yet there are some common misunderstandings about ITIL and how it can be best used.

To help unlock the secrets behind ITIL v3 and debunk some of the confusion around it, I recently gathered three experts on ITIL for a sponsored podcast discussion on how IT leaders can best leverage ITSM.

Please welcome David Cannon, co-author of the Service Operation Book for the latest version of ITIL, and an ITSM practice principal at HP; Stuart Rance, service management expert at HP, as well as co-author of ITIL Version 3 Glossary; and Ashley Hanna, business development manager at HP and also a co-author of ITIL Version 3 Glossary.

Here are some excerpts of our discussion:

Cannon: IT needs to save costs. In fact, the business puts a lot of pressure on IT to bring their costs down. But, in this economy, what we're seeing is that IT plays a way more important role than simply saving money.

Business has to change the way in which it works. It has to come up with different services. The business has to reduce its cost. It has to find better ways of doing business. It has to consolidate services. In every single one of those decisions, IT is going to play an instrumental role. The way in which the business moves itself toward the current economy has to be supported by the way in which IT works.

Now, if there is no linkage between the business and the way in which IT is managed, then it's going to be really, really difficult for the business to get that kind of value out of IT. So, ITSM provides a way in which IT and the business can communicate and design new ways of doing business in the current economy.

IT is going to drive these changes to the business. What we're seeing in the reset is that businesses have to change their operating models.

Part of an operating model within any business is their IT environments and the way in which IT works and is integrated into the business processes and services. So, when we talk about a reset, what we're really talking about is just a re-gearing of the operating models in the business -- and that includes IT.

Rance: A lot of people don't really get what we're talking about, when we talk about service management.

The point is that there are lots of different service providers out there offering services. Everybody has some kind of competition, whether it's internal competition, some sort of outsourcing, or alternate ways of providing service.

All of those service providers have access to the same sorts of resources. They can all buy the same servers, network components, and software licenses, and they can all build data centers to the same standards. So, the difference between service providers isn't in the resources they bring to bear. Those are all the same.

Service management is the set of capabilities a service provider brings to bear to deploy, control, and manage those resources to create value for its customers. It includes your processes for managing changes and incidents, your organizational designs with their roles and responsibilities, and lots of other things that you develop over time as an organization. It's how you create value from your resources and distinguish yourself from alternative service providers.

... What I've seen recently is that organizations that have already achieved a level of ITSM maturity are really building on that now to improve their efficiency, their effectiveness, and their cost-effectiveness.

Maybe a year or two years ago, other organizations that were less mature and a bit less effective were managing to keep up, because things weren't so tight and there was plenty of fat left. What I'm seeing now is that those organizations that implemented ITSM are getting further and further ahead of their competition.

For organizations that are not managing their IT services effectively, it's going to be really difficult toward the end of the slump. Some organizations will start to grow fast and pick up business, while the less effective ones carry on shrinking.

Hanna: If ITIL has been implemented correctly, then it is not an overhead. As times get tough, it's not something you turn off. It becomes part of what you do day-to-day, and you gain those improvements and efficiencies over time. You don't need to stop doing it. In fact, it's just part of what you do.

... We've gone from managing technology processes, which was certainly an improvement, to managing end-to-end IT service and its lifecycle and focusing on the business outcome. It's not just which technology we are supporting and what silos we might be in. We need to worry about what the outcome is on the business. The starting point should be the outcome, and everything we do should be designed to achieve what's wanted.

Cannon: In terms of trends like cloud, what you're seeing is a focus on exactly what it is that I get out of IT, as opposed to a focus from the business on the internal workings of IT.

... What things like cloud tend to do is to provide business with a way of relating to IT as a set of services, without needing to worry about what's going on underneath the surface.

They still have to worry about managing the technology. These issues don't go away. It really is just a different way of dealing with the sourcing and resourcing of how you provide services.

... Businesses need to be able to react quickly and ... to be very flexible within a rapidly changing, volatile economy. So, business is going to look for clear solutions that meet their needs and can change with their needs in very short times.

Hanna: An issue that comes up quite a lot is that ITIL Version 3 appears to have gotten much bigger and more complex. Some people look at it and wonder where the old service delivery and service support areas have gone, and they've been taken by surprise by the size of V3 and the number of core books.

When Version 3 came out, it launched with a much bigger perspective right from the beginning. Instead of having just two things to focus on, there are five core books. I think that has made it look much bigger and more complex than Version 2.

It is true that if you go through education, you do need to get your head around the new service life-cycle concept and the concept called "business outcomes," as we've already mentioned. And, you need to have an appreciation of what's unique to the five core books. But, these changes are long awaited and they're very useful additions to ITIL, and complementary to what we've already learned before.

Rance: If you look at financial management in ITIL Version 3, it says you really have to understand the cost of supplying each service that you supply and you have to understand the value that each of those services delivers to your customers.

Now, that's a very simple concept. If you think of it in a broader context, you can't imagine, say, a car manufacturer that didn't know the cost of producing a car or the value of that car in the market. But, huge numbers of IT service providers really don't understand the cost of supplying their services or the value of those services in the market.

ITIL V3 very much focuses on that sort of idea -- really understanding what we are doing in terms of value and in terms of cost-effectiveness of that level, rather than that procedural level.

Cannon: Financial management really hasn't changed in the essence of what it is. Financial management is a set of very well defined disciplines. Within Version 3, the financial management questions become more strategic. How do we calculate value? How do we align the cost of a service with the actual outcome that the business is trying to achieve? How do we account for changing finances over time?

Rance: A lot of businesses are in the service business themselves. It might not be IT service, but many of the customers we're dealing with are in some kind of service business, whether it's a logistics business or a transport business. Even a retailer is in the service business, and they provide goods as well.

In order to run any kind of a service you need to have service management. You need to manage your incidents, problems, changes, finances, and all those other things. What I'm starting to see is that things that started within the IT organization -- incident management, problem management and change management -- some of my better customers are now starting to pick up within their business operations.

They're doing something very much like ITIL incident management, ITIL change management, or ITIL problem management within the business of delivering the service to their customers.

Hanna: If you're running yourself as a business, you need to understand the business or businesses you serve, and you need to behave in the same way.

Thursday, August 20, 2009

Compuware weighs in on portfolio management that rationalizes IT budgets in tough economy

Listen to the podcast. Download or read a full transcript. Find it on iTunes/iPod. Learn more. Sponsor: Compuware.

The current economic downturn highlights how drastically businesses and their IT operations need to change -- whether through growth, reductions, or transformation (or all three).

As IT budgets react to such change, leaders need to better understand how to manage such change holistically, and not have change manage them (or worse).

One strong way to be on top of change is by employing IT portfolio management techniques, products, and processes. To learn more about helping enterprises better manage their IT costs and priorities while preparing for flexible growth when the economic tide turns, I recently interviewed Lori Ellsworth, vice president of Changepoint Solutions at Compuware, and David A. Kelly, senior analyst at Upside Research.

Here are some excerpts:
Kelly: It's really hard to improve, if you don't have a way to measure how you're doing, or a way to set goals for where you want to be. That's the idea behind IT portfolio management, as well as project portfolio management (PPM). ... [Leaders need to] take the same type of metrics and measurements that organizations have had in the financial area around their financial processes and try to apply that in the IT area and around the projects they have going on.

[IT portfolio management] measures the projects, as well as helps try to define a way to communicate between the business side of an organization that's setting the goals for what these projects or applications are going to be used for, and the IT side of the organization, which is trying to implement these. And, it makes sure that there are some metrics, measurements, and ways to correlate between the business and IT side.

Ellsworth: IT organizations now are moving toward acting in a more strategic role. Things are changing rapidly in the business environment, which means the organizations that they're serving need to change quickly and they are depending on, or insisting on, IT changing and being responsive with them.

It's essential that IT watch what's going on, participate in the business, and move quickly to respond to competitive opportunities or economic challenges. They need to understand everything that's under way in their organization to serve the business and what they have available to them in terms of resources -- and they need to be able to collaborate and interact with the business on a regular basis to adjust and make change and continue to serve the business.

If IT wants to engage in a conversation about moving investments, about stopping something they're working on so they can respond to a market opportunity, for example, they need to understand who are the people, what is the cost, and where can we make changes to respond to the business. ... This isn't about IT deciding on different projects they could work on and what benefit it might deliver to the business. The business is at the table, collaborating, looking at all the potential opportunities for investment, and reaching agreement as a business on what are the top priorities.

Kelly: The other thing that's needed is consistency. When you're making these kinds of decisions, for a lot of IT organizations and organizations in general, if times are good, you can make a lot of decisions in an ad hoc fashion and still be pretty successful.

But, in dynamic and more challenging economic times, you want the decisions that you or other people on the IT team, as well as the business, are making to be consistent. You want them to have some basis in reality and in an accepted process. We talked about metrics here, and about what kind of metrics you can provide to the chief operating officer.

You need consistency in these dynamic times and also you need a way to collaborate.

Ellsworth: There are a couple of problems with manual processes. They're very labor-intensive. We've talked about responsiveness. We need information to drive decision-making. So, the moment we rely on individual efforts or on people who have to go out and sit through meetings and collect data, we're not getting data that we can necessarily trust. We're not getting data that is timely, to your point, and we're not able to make those decisions to be responsive.

You end up with a situation where very definitely your resources are busy and fully deployed, but they're not necessarily doing the right things that matter the most to the business. That data needs to be real-time, so that, at multiple levels in the organization, we can be constantly assessing the health and the alignment in terms of what IT is doing to deliver to the business, and we have the information to make a change.

Kelly: To me, it's analogous to what we saw maybe 10 years ago in software development, when a whole bunch of automated testing tools became available, and organizations started to put a lot of emphasis in that area.

As you're developing an application, you can certainly test it manually and have people sitting there testing it, but when you can automate those processes they become more consistent. They become thorough, and they become something that can be done automatically in the background.

We're seeing the same thing when it comes to managing IT applications and projects, and the whole situation that's going on in the IT area.

When you start looking at IT portfolio management, that provides the same kind of automation, controls, and structure by which you can not only increase the quality of the decisions that are being made, but you can also do it in a way that almost results in less overhead and less manual work from an organization.

... Areas such as legacy transformation or modernization are good for this, because you do have to make a lot of decisions ... where you need to gain consensus. [IT portfolio management] can certainly help deliver that return on investment (ROI) much faster.

Ellsworth: It's also an opportunity to reduce the total number of applications, and the follow-on is an approach to being more efficient or investing in the applications that are strategic to the business.

It sounds pretty basic, but the moment an organization starts to inventory all of the projects that are under way and all of the applications deployed in production serving the business, even that simple exercise of putting them in a single view -- and maybe categorizing them very simply with one or two criteria -- quite quickly allows organizations to identify rogue projects that are under way.

... They will quickly learn, "We thought we had 100 applications, and we've now discovered there are 300." They'll also quickly identify those applications that no one is using. There is some opportunity to start pulling back the effort or the cost they're investing in those activities and either reducing the cost out of the business or reinvesting in something that's more important to the business.

... I'm also seeing an increased interest in participation, from a finance perspective, outside the IT organization. Often, the Chief Information Officer (CIO) and the executive in the finance area are working together.

The line-of-business executives -- the customers, if you will, of the CIO -- are starting to be more mature, if I can use that expression, in terms of their understanding of technology and of how they should be working with technology and driving that collaboration. So, there is some increased executive involvement even from outside IT, from the CIO's peers.

... IT needs to recognize that there are competitive alternatives, and certainly, if IT isn't delivering, the business will go and look elsewhere. In some simple examples, you can see line-of-business customers going out and engaging with a software-as-a-service (SaaS) solution in a particular area, because they can do that and bypass IT.

If they're not making the right decisions and doing the things that have the highest return to the business or if they are delivering poorly, it's really about missed opportunity and lower ROI.

Kelly: If you can do some application consolidation, you may be able to consider new deployment opportunities and cloud-based solutions. It will make the decision-making process within IT more nimble and more flexible, as well as enable them to respond more quickly to the line of business owners and be able to almost empower them with the right information and a structured decision-making process.

SpringSource Enterprise Java Cloud Foundry mixes best of open source with PaaS for application lifecycle efficiency

Take the BriefingsDirect middleware/ESB survey now.

SpringSource made headlines last week when VMware scooped up the Java infrastructure and management firm for $420 million in a move to breed easier cloud migration. Now, the spotlight is on the San Mateo, Calif. company once again as it leverages one of its own recent cloud industry acquisitions.

On Wednesday, SpringSource rolled out a beta of Cloud Foundry, an enterprise Java cloud offering that lets developers deploy and manage Spring, Grails and Java applications in a public cloud environment.

SpringSource is essentially offering a self-service, pay-as-you-go, public cloud deployment platform on which to build, run and manage the entire Java Web application lifecycle. Nice! Cloud Foundry promises to launch and automatically scale Java Web applications in the cloud with a few clicks of a mouse.

This is the clear path for open source and Java developers to the cloud. Microsoft will have its hands full just keeping the .NET developers and operators on the farm, so to speak.

The ability to develop Java applications in the cloud quickly with quality only further eases the deployment of Java applications into cloud containers, either internal, external or both. This must be VMware's thinking ... get the developers on board, and the operators will follow. It's worked before. Only this time it's the virtualized container that's the target -- the cloud OS, rather than the platform OS. And it's the cloud container that now benefits from the tools-to-target synergy.

This also makes moot the rip-and-replace argument against changing from installed platforms (like Windows). When you're moving the runtime up into a cloud, you don't care what the underlying platform is. You want to be able to develop well, and then get your operations requirements met on performance, security and cost.

Because these are Java applications, this will appeal to the mission-critical apps set along those requirements. When enterprise CIOs begin to gain insights into the IT financial management of their traditional development and deployment strategies -- and then compare and contrast those to these cloud lifecycle methods and costs -- the worm then turns.

The vision we're seeing from VMware and others speaks to dramatically cutting the total and ongoing cost of IT when the full development and deployment equation is factored. It's about Moore's Law moving off of the silicon and up and into the clouds.

Rod Johnson, CEO of SpringSource, is bragging about the benefits of Cloud Foundry:
“Unlike competitive offerings, our cloud service does not come with compromises; companies can deploy full-feature Java Web applications, built using SpringSource tools. C-level technology executives can seamlessly add cloud computing as a strategic option as part of their development roadmap.”

SpringSource is once again demonstrating the power of open source in the cloud by adding another piece of the “Java in the cloud” puzzle. Cloud Foundry plays off the strength of SpringSource core technologies. But SpringSource is also leveraging technologies from other developers to flesh out the big picture.

For example, SpringSource will rely on Hyperic CloudStatus to gain cloud health monitoring data. SpringSource will also tap Hyperic HQ-powered functionality to offer insights into application performance and service levels. Hyperic HQ works with Cloud Foundry’s technology to automatically scale cloud deployments by understanding how applications are working and interacting with other IT resources.
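The behavior described here -- monitoring data driving a scale-out or scale-in decision -- can be sketched as a simple threshold rule. This is a hypothetical illustration in Java; the class, method names, and thresholds are invented for this sketch and are not Hyperic's or Cloud Foundry's actual API:

```java
// Hypothetical sketch of a monitoring-driven scaling rule of the kind
// Hyperic HQ metrics could feed. Names and thresholds are illustrative.
public class ScalingRule {
    /** Decide a new instance count from average CPU utilization (0.0-1.0). */
    static int desiredInstances(int current, double avgCpu,
                                double highWater, double lowWater) {
        if (avgCpu > highWater) {
            return current + 1;              // scale out under load
        }
        if (avgCpu < lowWater && current > 1) {
            return current - 1;              // scale in when idle, keep one
        }
        return current;                      // steady state
    }

    public static void main(String[] args) {
        System.out.println(desiredInstances(3, 0.92, 0.80, 0.30)); // busy: grow
        System.out.println(desiredInstances(3, 0.10, 0.80, 0.30)); // idle: shrink
    }
}
```

A real autoscaler would smooth the metric over a time window and rate-limit changes, but the decision at its core looks much like this.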

The VMware Connection

Of course, SpringSource holds several pieces of the “Java in the cloud” puzzle internally. Beyond Cloud Foundry, there's SpringSource’s tc Server. Based on Apache Tomcat, it provides a lightweight container for deploying Java Web applications in the cloud. SpringSource is also ramping up quickly to make its Tool Suite available within the next 90 days. The Tool Suite will offer direct deployment of Java applications—through Cloud Foundry—into the public cloud.
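To ground the idea, here is a minimal sketch of the kind of lightweight Java Web endpoint such a container hosts. To keep the sketch self-contained, it uses the JDK's built-in com.sun.net.httpserver package rather than the Servlet API that Tomcat and tc Server actually implement; the names are illustrative:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class HelloCloudApp {
    /** The response body, factored out so the logic is testable. */
    static String greeting() {
        return "Hello from a lightweight Java Web container";
    }

    public static void main(String[] args) throws Exception {
        // Bind a tiny HTTP server and register one handler, the way a
        // servlet container maps a URL pattern to application code.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/hello", exchange -> {
            byte[] body = greeting().getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        System.out.println("Serving on http://localhost:8080/hello");
    }
}
```

In a container like tc Server, the same handler logic would live in a servlet or Spring MVC controller packaged as a WAR; the container supplies the HTTP plumbing shown here.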

How does this fit into VMware? SpringSource plans to bring Cloud Foundry's capabilities to VMware's vCloud service provider partners and internal VMware vSphere environments to offer infrastructure choice, deployment flexibility and enterprise services.

SpringSource will offer the same capabilities to Amazon Web Services, and plans to enrich Cloud Foundry’s capabilities with enhanced cloud management features and new services in the coming months.

Take the BriefingsDirect middleware/ESB survey now.

BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached here and here.

Tuesday, August 18, 2009

BriefingsDirect analysts discuss Software AG-IDS Scheer acquisition and lackluster prospects for Google Chrome OS

Listen to the podcast. Download or view a full transcript. Find it on iTunes/iPod. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at

Take the BriefingsDirect middleware/ESB survey now.

Welcome to the latest BriefingsDirect Analyst Insights Edition, Volume 44, for the week of July 13, 2009. Our topic this week centers on Software AG's bid to acquire IDS Scheer for about $320 million. We'll look into why this could be a big business process management (BPM) deal, not only for Software AG, but also for the fast-moving, competitive service-oriented architecture (SOA) landscape, as we saw from Oracle's recent acquisition of Sun Microsystems.

Another topic for our panel this week is the seemingly inevitable trend toward Web-oriented architecture (WOA), most notably supported by Google's announcement of the Google Chrome operating system (OS).

Will the popularity of devices like netbooks and smartphones accelerate the obsolescence of full-fledged fat clients, and what can Google hope to do further to move the market away from powerhouse Microsoft? Who is the David and who is the Goliath in this transition from software plus services to software for services?

Here to help us better understand Software AG's latest acquisition bid and the impact of the Google Chrome OS are our analysts this week. We are here with Jim Kobielus, senior analyst at Forrester Research; Tony Baer, senior analyst at Ovum; Brad Shimmin, principal analyst at Current Analysis; Jason Bloomberg, managing partner at ZapThink; JP Morgenthal, independent analyst and IT consultant; and Joe McKendrick, independent analyst and ZDNet and SOA blogger. The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Morgenthal: The acquisition seems to be focused heavily on IDS Scheer's association with SAP, and the move seems to be driven more by a business relationship than a technical one. If you look at the platforms, there is some overlap between the webMethods platform and the ARIS platform.

So, it would make sense that, if they were going after something, it wouldn't be just more design functionality. There has to be something deeper there for them to grow that business even larger, and certainly SAP is a good target for going after additional business.

SAP probably doesn't believe that it needs an SOA partner, but I think that the fish are starting to nip around the outer boundaries. SAP customers are at the point now where they are looking for something more immediate, and obviously the redevelopment of SAP as a complete SOA architecture is a long-term endeavor.

So, how do you start moving there in an incremental fashion? A lot of SOA platform vendors are starting to identify that there is a place for them on the outer edges, until SAP gets to make its full transformation.

The combined effort of Software AG with webMethods and IDS Scheer actually becomes one of the feeders on the outer edges of the SAP market. While SAP is in its cocoon, needing to turn from caterpillar into SOA butterfly, heaven knows whether it will actually survive that transformation.

There are a lot of SOA platforms starting to eat at the outer edges of the cocoon, feeding off of that, and hoping the transformation either fails or that there will be a place for them when the SOA butterfly emerges.

Kobielus: What's really interesting here is that, clearly Software AG is on a tear now to build up their whole SOA stack. ... People didn't realize that IDS Scheer is actually now a business intelligence (BI) vendor. They've got a self-service mashup BI product called ARIS MashZone, in addition to the complex event processing (CEP) product and an in-memory analytics product.

IDS Scheer, prior to this acquisition, has been increasingly positioning themselves in the new generation of BI solutions. That's been the one area where Software AG/webMethods has been deficient, from my point of view. In these SOA wars, they're lacking any strong BI or CEP capabilities.

Now, IDS Scheer's BI, CEP, and in-memory analytics are all tied to business activity monitoring (BAM), and all tied to BPM. So, it's not clear whether or when Software AG, with IDS Scheer on board, might start adapting all of that technology to be more of a general-purpose BI and CEP capability. But, you know what, if they choose to do that, I think they've got some very strong technologies to build upon.

Baer: You can't separate the technology from the strategic implications of this deal. ... There are other dimensions to this deal, one of which is that Software AG's webMethods business gets a much deeper process-modeling path. I don't know how redundant it is with the existing modeling. I don't think there are many BPM modeling languages that are deeper than ARIS, and it's selling pretty awesomely. As a matter of fact, you can look at Oracle, which uses it as one of the paths to modeling business processes, along with the technology it picked up from BEA.

For Software AG, [the acquisition gives them] immediate access to the SAP base, and that's huge. It also basically lays down a gauntlet to IBM and Oracle, especially Oracle, which has an OEM agreement [with IDS Scheer]. All of a sudden they have an OEM agreement with a major rival, as they're trying to ramp up their Fusion middleware business and their SOA governance story.

Shimmin: Look to the governance. About two years ago, most of the vendors were OEMing it. That has certainly turned around, such that these vendors are now very much providing in-house stacks. That's why I think this is such a big deal, and, as Tony was saying, why it's so disruptive.

It's not just that they have a fuller stack now, but that there is a more complete stack for SAP customers. NetWeaver has been hanging in there. SAP definitely thinks it is middleware, but then why else would there be so many players on the outside providing integration services for SAP applications not running on NetWeaver?

It's now a class society, where you have the big players -- the IBMs, Oracles, SAPs, and now Software AGs of the world -- and then you have the rogue players in the open-source space that are coming up, that have room to play. ... When you have this really bifurcated environment, it gives you fewer acquisitions and more competition, and that's what's going to be great for the industry. I don't see this as leading to further consolidation at the top end. It's going to be more activity on the bottom end.

Bloomberg: This IDS Scheer announcement really doesn't have anything to do with SOA. That is surprising, in a way, but also consistent with some of the fundamental disconnect we see within Software AG, between the integration folks on one hand and the BPM folks on the other.

There are some people within Software AG -- typically the CentraSite team, Miko Matsumura and his strategy team -- who really understand the connection between SOA and BPM. But, for the most part, the old guard, the German staff, just doesn't see the connection.

If you read the BPM for Dummies book that Software AG put together, for example, they don't even understand that SOA has any connection to BPM. Software AG issued a press release a few weeks ago that described SOA as a technology. Whoever wrote the press release doesn't understand that SOA is an architecture. It makes you wonder where the disconnect is.

With the IDS Scheer acquisition, if you read through what Software AG is saying about this, they're not connecting it with their SOA story. This is part of their BPM story. This is a way for them to build their vertical BPM expertise. That's the missing piece.

Kobielus: Let me butt in for a second, because at Forrester we've been discussing this. We don't think that Software AG fully understands who they are acquiring, because they don't really understand what IDS Scheer has on the SOA side. They don't understand the BI and CEP stuff.

So, I agree wholeheartedly with what Jason is saying. They're acquiring them just for the BPM, but that in many ways really understates what IDS Scheer can potentially offer Software AG.

Baer: There has always been a huge cultural divide between the business folks, who feel that they own BPM, and the IT folks, who own the architecture, or the technology architecture, which would be SOA. What's really interesting, and what's going to stir up the pot some more -- this is still on the horizon -- is BPMN 2.0, which is supposed to support direct execution.

Bloomberg: You're right that a lot of organizations still see SOA as technical architecture, as something distinct from BPM, and those are the organizations that are failing with SOA. Part of the "SOA is dead" straw man is that misconception of SOA as being about technology. That's what's not working well in many organizations.

On the plus side, there are a number of enterprises that do understand this point, are connecting business process with SOA, and understand that you really need a process-driven SOA approach to enterprise architecture.

Kobielus: What gives me hope on the Software AG-IDS Scheer merger is that, from what I heard on the briefing, Software AG realizes it needs to shift from a technology- and sales-driven model toward more of a solution- and consulting-driven business model. That's the way you lock in the customer, in terms of a partnership or an ongoing relationship, to help the customer optimize their business and achieve differentiation in their business.

What I found most valuable about the briefing on the acquisition that we got from them the other day was how IDS Scheer adds significant value to Software AG. Software AG pointed to the business process tools under ARIS. That's a given. They focused even more on the EA modeling capabilities that IDS Scheer has, and even more on the professional services on the vertical-solution side and the BPA consulting side -- consulting, consulting, consulting, relationship building, solution marketing.

On Google Chrome OS ...

Shimmin: I just think it's reflective of the shift that's already under way. When you look at Google Chrome OS, it's Linux, which is a well-established OS, but certainly not something you would call a web-oriented OS. Chrome OS is really something akin to GNOME or KDE running on top of it. So, technologically, this is nothing spectacularly new.

I think that what Google is doing, and what is brilliant about what they're doing, is that they're saying, "We are the architectural providers of the web, people who make the pipes go, and make all of you able to get to the places you want to go in the web through our index. We're going to build an OS that's geared toward you folks. We're going OEM and through vendors that are building netbooks, that are definitely making a point of contention with Microsoft. Because Microsoft, as we know, is really not pleased with the netbook vendors, because they can't run Vista or eventually Windows 7."

Morgenthal: I have a differing opinion, and of course an opportunity to tick off the entire Slashdot audience. Everyone thinks this is an attack on Microsoft. I'm looking at it as a Mac user and see a huge hole in the market. I've got to pay almost $2,000 for a really good high-powered Macintosh today. All Apple did was take BSD Unix and really soup it up so that your basic user can use it.

People on the Linux side are like, "Oh, Linux is great now. It's really usable." I've got news for you. It's nowhere near as usable as Windows or the Mac. As far as usability goes, Linux is still growing out of the proverbial slime.

But, if you take that concept of what Apple did with BSD and you say, "Hmm, I'm going to do that. I'm going to take Linux as my base and I'm going to really soup up the UI. I'm going to make it really oriented around the network, which I already did, and I have a lot of my apps in the Cloud, I don't necessarily need to build everything large scale. I still need to have the ability to do video, tie things in, and make that usable, but I'm also going to be able to sell it on a $400 netbook computer."

Now, you're right down the middle of the entire open market, because people can't stand Windows XP running on these netbooks. As was previously said, you can't yet run Windows 7 or Vista. We don't know what Windows 7 is going to look like, as far as usability, and the Mac costs way too much.

There is a huge home run to be had right through the middle. You just run right up the center and you've got yourself a massive win. It doesn't have to be about going after the enemy. It's not about hurting the enemy. It's about going after your competitors.

... If you can deliver the equivalent of an Apple-based set of functionality and the usability of the Mac on a $400 netbook, or a bigger machine if you want, you hurt Apple. You don't hurt Windows.

Kobielus: People keep expecting the big "Google hegemony" to evolve or to burst out, so everybody keeps latching onto these kinds of announcements as the harbinger of the coming Google hegemony over all components of the distributed, internetworked Web 2.0 world. I just don't see that happening.

They've got all these kinds of projects going, but none of them has even begun to deliver for Google anything even approximating the revenue share that they get from search-driven advertising.

So, this is interesting, but a lot of Google projects are interesting. Google Fusion Tables is interesting for analytics, but I just can't generate big interest in this project until I see something concrete.

Shimmin: I'm sorry to interrupt you, but Apple has a netbook coming out in October too, so they're trying for that market as well.

Baer: I'll grant you that point. The important thing mostly is that it does point to a new diversity of clients. Some may need netbooks. Some may want smartphones. Some, like myself, still deal with regular brick computers. It's just a diversity.


SOA and EA Training, Certification,
and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at

BriefingsDirect user survey helps define latest ESB trends, middleware use patterns

Take the BriefingsDirect middleware/ESB survey now.

Forgive my harping on this, but I keep hearing about how powerful social media is for gathering insights from the IT communities and users. Yet I rarely see actual market research conducted via the social media milieu.

So now's the time to fully test the process. I'm hoping that you users and specifiers of enterprise software middleware, SOA infrastructure, integration middleware, and enterprise service buses (ESBs) will take 5 minutes and fill out my BriefingsDirect survey. We'll share the results via this blog in a few weeks.

We're seeking to uncover the latest trends in actual usage and perceptions around these technologies -- both open source and commercial.

How middleware products -- like ESBs -- are used is not supposed to change rapidly. Enterprises typically choose and deploy integration software infrastructure slowly and deliberately, and they don't often change course without good reason.

But the last few years have proven an exception. Middleware products and brands have shifted more rapidly than ever before. Vendors have consolidated, product lines have merged. Users have had to grapple with new and dynamic requirements.

Open source offerings have swiftly matured, and in many cases advanced capabilities beyond the commercial space. Interest in SOA is now shared with anticipation of cloud computing approaches and needs.

So how do enterprise IT leaders and planners view the middleware and SOA landscape after a period of adjustment -- including the roughest global recession in more than 60 years?

This brief survey, distributed by BriefingsDirect for Interarbor Solutions, is designed to gauge the latest perceptions and patterns of use and updated requirements for middleware products and capabilities. Please take a few moments and share your preferences on enterprise middleware software. Thank you.


Monday, August 17, 2009

Open Group forms Cloud Work Group to spur enterprise cloud adoption and security via open standards

This guest blog comes courtesy of Dave Lounsbury, vice president of government programs and managing director of research and technology at The Open Group, where he leads activities related to government research, adaptive and real-time system software, and cloud computing. He can be reached here.

Take the BriefingsDirect middleware/ESB survey now.

By Dave Lounsbury

Like so many others, The Open Group has been busy for the past year figuring out our place in the cloud. With the great work already being done by industry groups like the Cloud Computing Interoperability Forum, CloudCamp and the Cloud Security Alliance, we have given great thought and consideration to how we can best add value to this evolving area. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

The growth in cloud computing has resulted in a diverse array of technical capabilities, and companies of all sizes are trying to understand how to take advantage of them in their business operations. We saw this as an opportunity to bring both vendors and end-users together with an eye toward providing guidance for adopting and implementing cloud computing in a way that helps ensure that organizations get the business benefits promised by these new capabilities.

Over the past year, The Open Group’s members have engaged in focused work to identify end-user requirements for cloud computing: needs in security and identity, standards to prevent lock-in, skills in managing cloud outsourcing, and enterprise architecture models for cloud. As a culmination of this work, I am pleased to announce that we have officially formed our own Cloud Work Group.

We have taken what we’ve learned from our London and Toronto conferences to create a group that we believe truly reflects the importance of cloud computing to The Open Group members and industry at large. Our main goal is to ensure the effective and secure use of cloud computing in enterprise architectures, given The Open Group’s experience driving vendor-neutral standards and certification programs in and around enterprise architecture.

The Cloud Work Group is in a unique position to develop a common understanding between buyers and suppliers of how companies can use cloud products and services in a flexible and secure way to realize the cloud's full potential. By focusing on customer input and drawing on the diverse views of our global members, we intend to bring a somewhat understated perspective to the discussion – that of the end-user.

Our first deliverable will be to publish a Business Scenario for Enterprise Cloud Computing, based on end-user requirements discussed at The Open Group’s latest Enterprise Architecture Conference in Toronto. During a business scenario workshop, led by MITRE’s Terry Blevins, we brainstormed and discussed the cloud’s most critical business requirements, as well as “pain points”. As Sandy Kemsley summarizes in her blog post, The Enterprise Cloud Business Scenario will help companies identify and understand business needs relative to cloud computing and thereby derive the requirements that the architecture development must address.

This is an exciting time for us as we collaborate with some of the industry’s leading cloud providers and end-user organizations to ensure both sides are in sync and able to reap the rewards as a result. The direction of the group is determined by Open Group members, but participation is welcomed from all organizations that wish to understand or contribute to the development of best practices for enterprise use of cloud computing.

To get involved or for more information, please visit: We hope you will join us!



Understanding the value of reference architectures in the SOA story

This guest post comes courtesy of ZapThink. Ron Schmelzer is a senior analyst at ZapThink. You can reach him here.

Take the BriefingsDirect middleware/ESB survey now.

By Ron Schmelzer

There's nothing more that architects love to do than argue about definitions. If you ever find yourself with idle time in a room of architects, try asking for a definition of "service" or "architecture" and see what sort of creative melee you can start.

That being said, definitions are indeed very important so that we can have a common language to communicate the intent and benefit of the very things we are trying to convince business to invest in. From that perspective, a number of concepts have emerged in the past decade or so that have become top of mind for self-styled enterprise architects: architecture frameworks and reference architectures.

In previous ZapFlashes, we discussed architecture frameworks, which left the topic of reference architectures untouched by ZapThink. Since we can't leave a good argument behind, we're going to use this ZapFlash to explore what reference architectures are all about and what value they add to the Service-Oriented Architecture (SOA) story.

What is a reference architecture?

One commonly accepted definition of a reference architecture is that it provides a methodology and/or set of practices and templates, based on the generalization of a set of successful solutions, for a particular category of solutions. Reference architectures provide guidance on how to apply specific patterns and/or practices to solve particular classes of problems. In this way, a reference architecture serves as a "reference" for the specific architectures that companies will implement to solve their own problems. A reference architecture is never intended to be implemented as-is; rather, it is used either as a point of comparison or as a starting point for an individual company's architectural efforts.

Others refine the definition of a reference architecture as a description of how to build a class of artifacts. These artifacts can be embodied in many forms, including design patterns, methodologies, standards, metadata, and documents of all sorts. Long story short, if you need guidance on how to develop a specific architecture based on best practices or authoritative sets of potential artifacts, you should look to a reference architecture that covers the scope of the architecture you're looking to build.

One of the most popular examples of reference architectures in IT is the Java Platform Enterprise Edition (Java EE) architecture, which provides a layered reference architecture and templates addressing a range of technology and business issues that have guided many Java-based enterprise systems.
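As a rough sketch of the layering such a reference architecture generalizes -- presentation calling business logic, business logic calling a data tier through an abstraction -- consider the following. The names are invented for illustration and are not part of any Java EE specification:

```java
// Data tier: abstracted behind an interface so implementations can vary.
interface OrderRepository {
    double totalFor(String customerId);
}

// Business tier: depends only on the repository abstraction, not on
// any particular database or presentation technology.
class OrderService {
    private final OrderRepository repository;

    OrderService(OrderRepository repository) {
        this.repository = repository;
    }

    String summary(String customerId) {
        return "Total for " + customerId + ": " + repository.totalFor(customerId);
    }
}

// Presentation-tier stand-in: wires the layers together.
public class LayeredSketch {
    public static void main(String[] args) {
        OrderService service = new OrderService(id -> 42.0); // stubbed data tier
        System.out.println(service.summary("acme"));
    }
}
```

The value of the reference architecture is exactly this kind of prescribed separation: teams that follow it produce systems whose tiers can be understood, tested, and replaced independently.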

Reference architectures vs. architecture frameworks

While the above definition(s) may seem fairly cut and dried, there is a lot in common between the concepts of reference architectures and architecture frameworks. For some, this is where things get dicey and definitions get blurry. Architecture frameworks, such as the Zachman Framework, the Open Group Architecture Framework (TOGAF), and Department of Defense Architecture Framework (DoDAF) provide approaches to describe and identify necessary inputs to a particular architecture as well as means to describe that architecture. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

If a particular architecture is a cookbook that provides guidance on how to go about solving a particular set of problems with a particular approach, an architecture framework is a book about how to write cookbooks. So, architecture frameworks give enterprise architects the tools they need to adequately describe and collect requirements, without mandating any specific architecture type. More specifically, architecture frameworks describe an example taxonomy of the kinds of architectural "views" that an architect might consider developing, and why, and provide guidelines for choosing which particular views to develop.

This differs from the above concept of a reference architecture in that a reference architecture goes one step further by accelerating the process for a particular architecture type, helping to identify which architectural approaches will satisfy particular requirements, and figuring out what minimally acceptable set of architectural artifacts is needed to meet the "best practices" requirements for a particular architecture. To continue our analogy with cookbooks, if an architecture framework is a book on how to write cookbooks, then a reference architecture is a book that provides guidance and best practices on how to write cookbooks focused on weight loss, for example. This would then mean that the particular architecture you develop for your organization would be a specific cookbook that provides weight-loss recipes targeted to your organization. Indeed, if you get puzzled by the definitions, replacing the term "architecture" with "cookbook" is helpful: cookbook frameworks, reference cookbooks, and your particular cookbook.

Furthermore, most reference architectures emphasize the "template" part of the definition. Both frameworks and RAs provide best practices, and while it might be argued that RAs provide more of a methodology than a framework does, RAs are still not really characterized by their methodology component. Most can be characterized by their template component, however; patterns, in this context, are instances of templates. In fact, multiple reference architectures for the same domain are allowable and quite useful. Reference architectures can be complementary, providing guidance for a single architecture, such as SOA, from multiple viewpoints.

The value of a SOA reference architecture

In many ways, SOA projects are in desperate need of well-thought-out reference architectures. ZapThink sees a high degree of variability in SOA projects. Some flourish and succeed, while others flounder and fail. Many times the reason for failure can be traced to bad architectural practices, premature infrastructure purchasing, and inadequate governance and management. Other times the failure is primarily organizational. What is common in most successes, however, is well-documented and well-communicated architectural practices, along with a systematic method for learning from one's mistakes and keeping the cost of failure low.

Furthermore, we find that many architects spend a significant amount of their time researching, investigating, (re-)defining, contemplating, and arguing architectural decisions. In many cases, these architects are reinventing the wheel as their peers in other companies, or even the same company, have already spent that time and effort defining their own architectural practices. This extra effort is not only inefficient, but also prevents the company from learning from its own experiences and applying that knowledge for increased effectiveness.

From this perspective, SOA reference architectures can provide some help to those struggling with their SOA efforts or thinking about launching new ones. SOA reference architectures allow organizations to learn from other architects' successes and failures and inherit proven best practices. Reference architectures can provide missing architectural information to project team members in advance, enabling consistent architectural best practices. In this way, the SOA reference architecture provides a base of assets that SOA efforts can draw from throughout the project lifecycle.

Indeed, in order to gain the promised SOA benefits of reuse, reduced redundancy, reduced cost of integration, and increased visibility and governance, companies need to apply their SOA efforts in a consistent manner. This means more than buying and establishing some vendor's infrastructure as a corporate standard or adhering to the latest WS-* standards stack. SOA reference architectures can serve as the basis for disparate SOA efforts throughout the organization, even if they use different tools and technologies. Good SOA reference architectures provide SOA best practices and approaches in a vendor-, technology-, and standards-independent way. Therefore, don't go hunting for one from your vendor of choice. In fact, if you got your SOA reference architecture from that vendor, you might want to consider dropping it in favor of something more vendor-neutral.

In particular, OASIS offers a SOA Reference Architecture (RA) that "models the abstract architectural elements for a SOA independent of the technologies, protocols, and products that are used to implement a SOA. Some sections of the RA will use common abstracted elements derived from several standards." Their approach uses the concept of "patterns" to identify different methods and approaches for implementing different parts of the architectural picture. While the OASIS SOA Reference Architecture is certainly not the only valid one on the block, it certainly makes a good starting point for those looking for a vendor-neutral SOA reference architecture on which to base their own architectural efforts.

The ZapThink take

Enterprise architects need all the help they can get to ensure they deliver reliable, agile, resilient, vendor-neutral architectures that meet the continuously changing requirements of the business. While the art and practice of enterprise architecture certainly continues to mature, companies should borrow as many best practices as they can and learn from others who have already gone down the EA and SOA path. If you plan to learn SOA, or any form of EA for that matter, as you go along, or even worse, from a vendor, you risk the success of your entire SOA effort. Rather, leverage (for free) SOA reference architectures so that you can advance at a faster pace and with lower risk.

Bernard of Chartres put it best in the well-known saying: "We are like dwarfs on the shoulders of giants, so that we can see more than they, and things at a greater distance, not by virtue of any sharpness of sight on our part, or any physical distinction, but because we are carried high and raised up by their giant size." Stand on the shoulders of other enterprise architecture giants and let them increase your vision and success.

This guest post comes courtesy of ZapThink. Ron Schmelzer, a senior analyst at ZapThink, can be reached here.

Take the BriefingsDirect middleware/ESB survey now.


SOA and EA Training, Certification,
and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at