Thursday, March 12, 2009

BriefingsDirect analysts discuss solutions for bringing human interactions into business process workflows

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Charter Sponsor: Active Endpoints. Additional underwriting by TIBCO Software.

Read a full transcript of the discussion.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.

Welcome to the latest BriefingsDirect Analyst Insights Edition, Vol. 37, a periodic discussion and dissection of software, services, SOA and compute cloud-related news and events with a panel of IT analysts.

In this episode, recorded Feb. 13, 2009, our guests examine the essential topic of bringing human activity into alignment with standards-based, IT-supported business processes. We revisit the topic of BPEL4People, an OASIS specification.

The need to automate and extend complex processes is obvious. What's less obvious is the need to join the physical world of people -- their habits, needs, and perceptions -- with the artificial world of service-oriented architecture (SOA) and business process management (BPM).

This intersection will become all the more important as cloud-based services become more common.

Our discussion, moderated by me, includes noted IT industry analysts and experts Michael Rowley, director of technology and strategy at Active Endpoints; Jim Kobielus, senior analyst at Forrester Research; and JP Morgenthal, independent analyst and IT consultant.

Here are some excerpts:
Rowley: [With BPEL4People] you can automate the way people work with their computers and interact with other people by pulling tasks off of a worklist and then having a central system, the BPM engine, keep track of who should do the next thing, look at the results of what they have done, and based on the data, send things for approval.

It basically captures the business process, the actual functioning of a business, in software in a way that you can change over time. It's flexible, but you can also track things, and that kind of thing is basic.

... One of the hardest questions is what you standardize and how you divvy up the standards. One thing that has slowed down this whole vision of automating business process is the adoption of standards. ... The reason [BPM] isn't at that level of adoption yet is because the standards are new and just being developed. People have to be quite comfortable that, if they're going to invest in a technology that's running their organization, this is not just some proprietary technology.

The big insight behind BPEL4People is that there's a separate standard, WS-Human Task. It basically covers the worklist aspect of a business process, versus the control flow that you get on the BPEL4People side. So, there's BPEL4People as one standard and WS-Human Task as another, closely related standard.

By having this dichotomy you can have your worklist system completely standards based, but not necessarily tied to your workflow system or BPM engine. We've had customers actually use that. We've had at least one customer that decided to implement its own human-task worklist system, rather than using the one that comes out of the box, and it knows that what it has created is standards compliant.
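
To make that split concrete, here is a minimal, illustrative sketch of the worklist side of the dichotomy: a task-client contract that a custom worklist system (or the out-of-the-box one) could implement, while the BPM engine only ever sees tasks being listed, claimed, and completed through it. The interface and method names below are hypothetical stand-ins, not the actual WS-Human Task operations.

    import java.util.List;

    // Hypothetical worklist contract; the real WS-Human Task operations differ in detail.
    interface HumanTaskClient {
        List<TaskSummary> getMyTasks(String userId);        // pull the user's worklist
        void claim(String taskId, String userId);           // take ownership of a task
        void complete(String taskId, String resultPayload); // hand the outcome back to the process
    }

    // Minimal view of a task as it appears on a worklist.
    record TaskSummary(String taskId, String subject, String status) {}

Because the engine depends only on that contract, the worklist implementation behind it can be swapped out -- exactly the scenario Rowley describes with the customer that built its own.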

All of the companies involved -- Oracle, IBM, SAP, Microsoft, and TIBCO, as well as Active Endpoints -- seem to be very interested in this. One interesting participant is Microsoft. They are also putting in some special effort here.

One value of a BPM engine is that you should be able to have a software system, where the overall control flow, what's happening, how the business is being run can be at the very least read by a nontechnical user. They can see that and say, "You know, we're going through too many steps here. We really can skip this step. When the amount of money being dealt with is less than $500, we should take this shortcut."

That's something that can at least be described by a lay person, and it should be conveyed with very little effort to a technical person, who will make the change so that the shortcut happens.
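
As a rough illustration of the kind of rule a lay person can read and a developer can implement, here is a hedged sketch of that $500 shortcut expressed as routing logic. In practice the decision would live in the process definition itself; the class and step names here are hypothetical.

    import java.math.BigDecimal;

    // Illustrative routing rule: small requests skip the manual approval step.
    final class ApprovalRouting {
        static final BigDecimal APPROVAL_THRESHOLD = new BigDecimal("500");

        static String nextStep(BigDecimal amount) {
            if (amount.compareTo(APPROVAL_THRESHOLD) < 0) {
                return "fulfillment";        // take the shortcut
            }
            return "manager-approval";       // route to a human task on someone's worklist
        }
    }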

Kobielus: It's critically important that the leading BPM and workflow vendors get on board with this standard. ... This is critically important for SOA, where human workflows are at the very core of the application.

... BPEL4People, by providing an interoperability framework for worklisting capabilities of human workflow systems, offers the promise of allowing organizations to help users have a single view of all of their tasks and all the workflows in which they are participating. That will be a huge productivity gain for the average information worker, if that ever comes to pass.

... One thing that users are challenged with all the time in business is the fact that they are participating in so many workflows, so many business processes. They have to multi-task, and they have to have multiple worklists and to-do lists that they are checking all the time. It's just a bear to keep up with.

Morgenthal: Humans interact with humans, humans interact with machines, and data is changing everywhere. How do we keep everything on track, how do we keep everything coordinated, when you have a whole bunch of ad-hoc processes hitting this standardized process? That requires some unique features. It requires the ability to aggregate different content types together into a single place.

One key term that applies here industry-wide I have found only in government. They call this "suspense tracking." That's a way of saying that something leaves the process and goes into "ad hoc land." We don't know what happens in there, but we control when it leaves and we control when it comes back.

I've actually extended this concept quite a bit and I am working on getting some papers and reports written around something I am terming "business activity coordination," which is a way to control what's in the black hole.

So, you have these ongoing ad hoc processes that occur in business every day and are difficult to automate. I've been analyzing solutions to this, and business activity coordination is that overlap, the Venn diagram, if you will, of process-centric and collaborative actions. For a human to contribute back, and for a machine to recognize that the dataset has changed, move forward, and take the appropriate actions from a process-centric standpoint after a collaborative activity has taken place, is possible today, but it is very difficult.
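
As a sketch of what "suspense tracking" might look like in code, here is a small, hypothetical tracker that records when a work item leaves the structured process for ad-hoc handling and when it returns. The class and method names are illustrative, not from any standard or product.

    import java.time.Duration;
    import java.time.Instant;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative suspense tracker: we don't control what happens in "ad hoc land,"
    // but we record when work leaves the process and when it comes back.
    final class SuspenseTracker {
        private final Map<String, Instant> suspended = new ConcurrentHashMap<>();

        void suspend(String workItemId) {               // item leaves the controlled process
            suspended.put(workItemId, Instant.now());
        }

        void resume(String workItemId) {                // item returns from ad-hoc handling
            Instant left = suspended.remove(workItemId);
            if (left != null) {
                System.out.println(workItemId + " was in suspense for "
                        + Duration.between(left, Instant.now()));
            }
        }
    }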

One thing I'm looking at is how SharePoint, more specifically Windows SharePoint Services, acts as a solid foundation that allows humans and machines to interact nicely. It comes with a core portal that allows humans to visualize and change the data, but the behavioral connections to actually notify workflows that it's time to go to the next step, based on those human activities, are really critical functions. I don't see them widely available through today's workflow and BPM tools. In fact, those tools fall short, because of their inability to recognize these datasets.

... I don't necessarily agree with the statement earlier that we need to have tight control of this. A lot of this can be managed by the users themselves, using common tools. ... Neither WS-Human Task nor BPEL4People addresses how I control what's happening inside the black hole.

Rowley: Actually, it does. The WS-Human Task does talk about how you control what's in the black hole -- what happens to a task and what kinds of things can happen to a task while it's being handled by a user. One of the things about Microsoft's involvement in the standards committee is that they have been sharing a lot with us about SharePoint, and we have been discussing it. This is all public. The nice thing about OASIS is that everything we do is in public, along with the meeting notes.

The Microsoft people are giving us demonstrations of SharePoint, and we can envision as an industry, as a bunch of vendors, the possibility of interoperability with a BPEL4People business process engine like the ActiveVOS server. Maybe somebody doesn't want to use our worklist system and wants to use SharePoint, and some future version of SharePoint will have an implementation of WS-Human Task, or possibly somebody else will do an implementation of WS-Human Task.

Until you get the standard, that vision that JP mentioned about having somebody use SharePoint and having some BPM engine be able to coordinate it, isn't possible. We need these standards to accomplish that.

A workflow system or a business process is essentially an event-based system. Complex Event Processing (CEP) is real-time business intelligence. You put those two together and you discover that the events that are in your business process are inherently valuable events.

You need to be able to discover, over a wide variety of business processes, a wide variety of documents, or a wide variety of sources, and be able to look for averages, aggregations, and sums, and the joining over these various things, to discover a situation where you need to automatically kick off new work. New work is a task or a business process.

What you don't want to have is for somebody to have to go in and monitor or discover by hand that something needs to be reacted to. If you have something like what we have with ActiveVOS, which is a CEP engine embedded with your BPM, then the events that are naturally business relevant, that are in your BPM, can be fed into your CEP, and then you can have intelligent reaction to everyday business.
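
To make the CEP-plus-BPM pairing concrete, here is a small, hedged sketch of a rule that watches process events over a sliding window and kicks off new work when a threshold is crossed. It is plain Java for illustration only -- not the ActiveVOS API or any vendor's event language -- and the event names and thresholds are assumptions.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Illustrative CEP-style rule: too many approval escalations in one hour
    // automatically starts a review process (i.e., kicks off new work).
    final class OverdueApprovalRule {
        private static final int THRESHOLD = 5;
        private static final long WINDOW_MS = 60 * 60 * 1000L;   // one hour

        private final Deque<Long> recentEscalations = new ArrayDeque<>();

        void onProcessEvent(String eventType, long timestampMs) {
            if (!"approval-escalated".equals(eventType)) {
                return;                                           // ignore other business events
            }
            recentEscalations.addLast(timestampMs);
            // Drop events that have fallen outside the sliding window.
            while (!recentEscalations.isEmpty()
                    && timestampMs - recentEscalations.peekFirst() > WINDOW_MS) {
                recentEscalations.removeFirst();
            }
            if (recentEscalations.size() >= THRESHOLD) {
                startReviewProcess();                             // kick off new work automatically
            }
        }

        private void startReviewProcess() {
            System.out.println("Escalation spike detected -- starting a review process");
        }
    }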

... Tying event processing to social networks makes sense, because what you need to have when you're in a social network is visibility, visibility into what's going on in the business and what's going on with other people. BPM is all about providing visibility. ... If humans are involved in discovering something, looking something up, or watching something, I think of it more as either monitoring or reporting, but that's just a terminology. Either way, events and visibility are really critical.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Charter Sponsor: Active Endpoints. Additional underwriting by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.

Monday, March 9, 2009

Survey says: Cloud computing proving to be a two-edged sword in a down economy

Cloud computing seems to be trapped between the rock of great expectations and the hard place of low confidence. While most enterprise and IT decision makers view cloud as a way to lower capital and operational costs, the way to more aggressive cloud adoption is blocked by concerns about security and control.

This is the finding of a recent survey commissioned by IT consultancy Avanade, Inc., of Seattle, Wash., and conducted by Kelton Research of Culver City, Calif.

The good news is that 54 percent of people surveyed used technology to cut costs, a boon for IT providers in these turbulent economic times. According to the survey, for every two companies that cut back on technology to save money, five will adopt new technology as a way of reducing expenses.

Also encouraging is the fact that most people -- 9 out of 10 C-level executives -- know what cloud computing is and what it can do. More than 60 percent know that it can reduce costs, make the company more flexible, help the company concentrate on its core business, and react more quickly to market conditions.

The bad news is that 61 percent of those surveyed aren't using cloud technologies at this time, and of those who now rely solely on internal systems, 84 percent say they have no plans to switch to cloud in the next 12 months.

Something like Garrison Keillor's mythical hometown of Lake Wobegon, where "all the children are above average," nearly two thirds of US companies surveyed consider themselves "early adopters," which raises the question of how you can be an early adopter when almost everyone else is doing it. Whether early adopter or not, the fact remains that most people are shying away from cloud, though it's a hot topic at the Chitchat Cafe.

The main concern? Fears of security threats and loss of control over systems. Ironically, these were the same concerns we heard when email, the Internet, web services, and instant messaging appeared on the scene. None of those concerns were without merit, but enterprises seem to have adjusted and benefited.

The companies surveyed that had overcome their resistance reported business benefits and are accelerating their use of cloud technologies. Among companies that have adopted cloud, the leading business applications are:
  • Customer relationship management (CRM) -- 50 percent
  • Data storage -- 46 percent
  • Human resources -- 44 percent
Only five percent of companies rely solely on cloud computing. However, of those who do use it at all, more than one third have increased their use of cloud since the economic downturn began in July of 2008.

I expect that trend to continue and accelerate, especially for new companies born in the recession, where survival is the mother of invention (and the father of low or no up-front capital costs).

Tuesday, February 24, 2009

Enterprise IT architecture advocacy groups merge to promote wider standards adoption and global member services reach

Enterprise architecture and the goal of aligning business goals with standardized IT best practices took a major step forward with the announcement this week that the Association of Open Group Enterprise Architects (AOGEA) will merge with the Global Enterprise Architects Organization (GEAO).

The two groups will operate under their own names for the time being, but their combined efforts will be administered by The Open Group, a vendor- and technology-neutral consortium that recently published the latest version of its architectural framework, TOGAF 9. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

The goal of the merger is to offer the 9,000 combined members opportunities for certification and to establish standards for excellence. The Open Group currently offers its IT Architect Certification (ITAC), along with ongoing advocacy and education services and peer networking opportunities.

I've long been a believer that architecture is destiny, and that aligning business goals with IT initiatives is made more critical by the current economic situation. Adherence to good architectural principles pays the greatest dividends when IT organizations need to support the business through turbulent times. The ability to react swiftly and securely, and to use IT as a business differentiator, can make or break many companies.

According to The Open Group, the combined organization will deliver expanded value to current AOGEA and GEAO members by providing them with access to an increased range of programs and services. For example, AOGEA members will benefit from the GEAO’s programs and content focused on business skills, whereas GEAO members will benefit from the AOGEA’s distinct focus on professional standards and technical excellence.

Allen Brown, The Open Group's president and CEO explained:
“The GEAO’s proven track record in furthering business skills for its members and AOGEA’s emphasis on professional standards and technical excellence will provide expanded value for our joint members, as well as their employers and clients.”
I recently had a series of wide-ranging interviews with officials and members of The Open Group at their 21st Enterprise Architecture Practitioners Conference in San Diego, in which we discussed cloud computing, security, and the effects of the economic decline on the need for proper enterprise architecture.

Thursday, February 19, 2009

Cloud computing aligns with enterprise architecture to make each more useful, say experts

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: The Open Group.

Read a full transcript of the discussion.

A panel of experts was assembled earlier this month at The Open Group's Enterprise Cloud Computing Conference in San Diego to examine how cloud computing aligns with enterprise architecture.

The discussion raised the question: What will real enterprises need to do to gain savings and productivity in the coming years to exploit cloud computing resources and methods? In essence, this becomes a discussion about real-world cloud computing.

To gain deeper insights into how IT architects can bring cloud computing benefits to their businesses, I queried panelists Lauren States, vice president in IBM's Software Group; Russ Daniels, vice president and CTO of Cloud Services Strategy at Hewlett-Packard; and David Linthicum, founder of Blue Mountain Labs.

Here are some excerpts:
Linthicum: You need to assess your existing architecture. Cloud computing is not going to be a mechanism to fix architecture. It's a mechanism that serves as a solution pattern within an architecture. So, you need to do a self-assessment as to what's working, and what's not working, within your own enterprise, before you start tossing things outside of the firewall onto the platform in the cloud.

Once you do that, you need to have a good data-level understanding, process-level understanding, and service-level understanding of the domain. Then, try to figure out exactly which processes, services, and information are good candidates for cloud computing.

... Not everything is applicable for cloud computing. In fact, 50 percent of the applications that I look at are not good candidates for cloud. You need to consider that in the context of the hype.

States: ... The other aspect that's really important is the organizational governance and culture part of it, which is true for anything. It's particularly true for us in IT, because sometimes we see the promise of the technology, but we forget about people.

In clients I've been working with, there have been discussions around, "How does this affect operations? Can we change processes? What about the workflows? Will people accept the changes in their jobs? Will the organization be able to absorb the technology?"

Enterprise architecture is robust enough to combine not only the technology but the business processes, the best practices, and methodologies required to make this further journey to take advantage of what technology has to offer.

Daniels: It's very easy to start with technology and then try to view the technology itself as a solution. It's probably not the best place to start. It's a whole lot more useful if you start with the business concerns. What are you trying to accomplish for the business? Then, select from the various models the best way to meet those kinds of needs.

When you think about the concept of, "I want to be able to get the economies of the cloud -- there is this new model that allows me to deliver compute capacity at much lower cost," we think that it's important to understand where those economics really come from and what underlies them. It's not simply that you can pay for infrastructure on demand, but it has a lot to do with the way the software workload itself is designed.

There's a huge economic value ... if the software can take advantage of horizontal scaling -- if you can add compute capacity easily in a commodity environment to be able to meet demand, and then remove the capacity and use it for another purpose when the demand subsides.
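
The arithmetic behind that point is simple enough to sketch. The sizing numbers and method below are illustrative assumptions, not figures from the discussion; the idea is only that capacity is added in commodity-sized units to cover demand and released when demand subsides.

    // Back-of-the-envelope capacity rule for a horizontally scalable workload.
    final class CapacityPlanner {
        static final int REQUESTS_PER_INSTANCE = 200;   // assumed throughput of one commodity node

        static int instancesNeeded(int currentRequestRate) {
            // Round up so demand is always covered; never drop below one instance.
            return Math.max(1, (currentRequestRate + REQUESTS_PER_INSTANCE - 1) / REQUESTS_PER_INSTANCE);
        }
    }

At 1,050 requests per second this yields six instances; when demand falls to 150, it yields one, and the other five go back into the shared pool for other workloads -- which is where the economics come from.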

... There's a particular class of services, needs for the business, that when you try to address them in the traditional application-centric models, many of those projects are too expensive to start or they tend to be so complex that they fail. Those are the ones where [cloud computing] is particularly worthwhile to consider, "Could I do these more effectively, with a higher value to the business and with better results, if I were to shift to a cloud-based approach, rather than a traditional IT delivery model?"

It's really a question of whether there are things that the business needs that, every time we try to do them in the traditional way, fail, underdeliver, are too slow, or don't satisfy the real business needs. Those are the ones where it's worthwhile taking a look and saying, "What if we were to use cloud to do them?"

Linthicum: Lots of my clients are building what I call rogue clouds. In other words, without any kind of sponsorship from the IT department, they're going out there to Google App Engine. They're building these huge Python applications and deploying them as a mechanism to solve some kind of a tactical business need that they have.

Well, they didn't factor in maintenance, and right now, they're going back to the IT group asking for forgiveness and trying to incorporate that application into the infrastructure. Of course, they don't do Python in IT. They have security issues around all kinds of things, and the application ends up going away. All that effort was for naught.

You need to work with your corporate infrastructure and you need to work under the domain of corporate governance. You need to understand the common policy and the common strategy that the corporation has and adhere to it. That's how you move to cloud computing.

States: The ROI so far on one of our internal clouds -- our technology adoption program, which provides compute resources and services to our technical community so that they can innovate -- has actually been unbelievable: an 83 percent reduction in cost and a payback of less than 90 days.

We're now calibrating this with other clients who are typically starting with their application test and development workloads, which are good environments because there is a lot of efficiency to be had there. They can experiment with elasticity of capacity, and it's not production, so it doesn't carry the same risk.

Daniels: Our view is that there are real benefits, real and significant cost savings, to be gained. If you simply apply virtualization and automation technologies, you can get a significant reduction in cost. Again, self-service delivery can have a huge internal impact. But a much larger savings can be had if you can restructure the software itself so that it can be delivered and amortized across a much larger user base.

There is a class of workloads where you can see orders-of-magnitude decreases in cost, but it requires competencies, and it first requires ownership of the intellectual property. If you depend upon some third party for the capability, then you can't get those benefits until that third party goes through the work to realize them for you.

Very simply, the cloud represents new design opportunities, and the reason that enterprise architecture is so fundamental to the success of enterprises is the role that design plays in the success of the enterprise.

The cloud adds a new expressiveness, but imagining that the technology just makes it all better is silly. You really have to think about what problems you're trying to solve, and where a design approach exploiting the cloud generates real benefits.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: The Open Group.

View more podcasts and resources from The Open Group's recent conferences and TOGAF 9 launch:

The Open Group's CEO Allen Brown interview

Live panel discussion on enterprise architecture trends

Deep dive into TOGAF 9 use benefits

Reporting on the TOGAF 9 launch

Panel discussion on security trends and needs

Access the conference proceedings

General TOGAF 9 information

Introduction to TOGAF 9 whitepaper

Whitepaper on migrating from TOGAF 8.1.1 to version 9

TOGAF 9 certification information


TOGAF 9 Commercial Licensing program information

Tuesday, February 17, 2009

LogLogic delivers integrated suite for securely managing enterprise-wide log data

Companies faced with a tsunami of regulations and compliance requirements could soon find themselves drowning in a sea of log data from their IT systems. LogLogic, the log management provider, today threw these companies a lifeline with a suite of products that form an integrated solution for dealing with audits, compliance, and threats.

The San Jose, Calif. company announced the current and upcoming availability of LogLogic Compliance Manager, LogLogic Security Event Manager, and LogLogic Database Security Manager. [Disclosure: LogLogic is a sponsor of BriefingsDirect podcasts.]

A typical data center nowadays generates more than a terabyte of log data per day, according to LogLogic. With requirements to archive this data for seven years, a printed version could stretch to the moon and back 10 times. LogLogic's new offerings are designed to aid companies in collecting, storing, and analyzing this growing trove of systems operational data.

Compliance Manager helps automate compliance-approval workflows and review tracking, translating "compliance speak" into more plain language. It also maps compliance reports to specific regulatory control objectives, helps automate the business process associated with compliance review and provides a dashboard overview with an at-a-glance scorecard of an organization's current position.

Security Event Manager, powered by LogLogic partner Exaprotect, performs complex event correlation, threat detection, and security incident management workflow, either across a department or the entire enterprise.

LogLogic's partner Exaprotect, Mountain View, Calif., is a provider of enterprise security management for organizations with large-scale, heterogeneous infrastructures.

The LogLogic combined solution analyzes thousands of events in near real time from security devices, operating systems, databases, and applications and can uncover and prioritize mission-critical security events.

Database Security Manager monitors privileged-user activities and protected data stored within database systems. With granular, policy-based detection, integrated prevention, and real-time virtual patch capabilities, security analysts can independently monitor privileged users and enforce segregation of duties without impacting database performance.

Because of the integrated nature of the products, information can be shared across the log management system. For example, database security events can be sent to Compliance Manager for review or to the Security Event Manager for prioritization and escalation.

What intrigues me about log data management is the increased role it will play in governance of services, workflow and business processes -- both inside and outside of an organization's boundaries. Precious few resources exist to correlate the behavior of business services with underlying systems.

The more log data is made available to the players in a distributed business process, the easier it is to detect faults and provide root-cause analysis. The governance benefit works in both directions, too. As SLAs and other higher-order governance capabilities point to a need for infrastructure adjustments, the log data trail offers insight and verification.

In short, managed log data is an essential ingredient in any services lifecycle management and governance capability. The lifecycle approach becomes more critical as cloud computing, virtualization, SOA, and CEP grow more common and important.

Lastly, thanks to such technologies as MapReduce, the ability to scour huge quantities of systems log data quickly and deeply -- "BI for IT" -- at a managed cost becomes attainable. I expect to see these "BI for IT" benefits applied to more problems of complexity and governance over the coming years. The cost-benefit analysis is a no-brainer.
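
As a miniature of that "BI for IT" idea, here is a hedged sketch of a MapReduce-style aggregation that counts error events per source system from raw log lines. Plain Java streams stand in for a real MapReduce framework, and the log format is an assumption for illustration.

    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    // Illustrative log aggregation. Assumed line format: "<source> <severity> <message...>".
    final class LogAggregation {
        static Map<String, Long> errorsBySource(List<String> logLines) {
            return logLines.stream()
                    .map(line -> line.split(" ", 3))                  // "map": parse each line
                    .filter(parts -> parts.length >= 2 && "ERROR".equals(parts[1]))
                    .collect(Collectors.groupingBy(parts -> parts[0], // "shuffle": group by source
                            Collectors.counting()));                  // "reduce": count per source
        }
    }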

Security Event Manager is available immediately. Compliance Manager is available to early adopters immediately and will be generally available in March. Database Security Manager will be available in the second quarter of this year.

More information on the new products is available in LogLogic's screencasts at http://www.loglogic.com/logpower.

Saturday, February 14, 2009

Effective enterprise security begins and ends with architectural best practices approach

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: The Open Group.

Read a full transcript of the discussion.

The Open Group, a vendor- and technology-neutral consortium, in February held its first Security Practitioners Conference in San Diego. A panel of experts was assembled at the conference to examine how enterprise security intersects with enterprise architecture.

Aligning the two deepens the security protection across more planning- and architectural-level activities, to make security pervasive -- and certainly not an afterthought.

To gain deeper insights into how IT architects can bring security and reduced risk to businesses, I queried panelists Chenxi Wang, principal analyst for security and risk management at Forrester Research; Kristin Lovejoy, director of corporate security strategy at IBM; Nils Puhlmann, chief security officer and vice president of risk management of Qualys; and Jim Hietala, vice president of security for The Open Group.

Here are some excerpts:
In a down economy, like we have today, a lot of organizations are adopting new technologies, such as Web 2.0, service-oriented architecture (SOA)-style applications, and virtualization.

... They are doing it because of the economies of scale that you can get from those technologies. The problem is that these new technologies don't necessarily have the same security constructs built in.

Take Web 2.0 and SOA-style composite applications, for example. The problem with composite applications is that, as we're building these composite applications, we don't know the source of the widget. We don't know whether these applications have been built with good, secure design. In the long term, that becomes problematic for the organizations that use them.

It's the same with virtualization. There hasn't been a lot of thought put to what it means to secure a virtual system. There are not a lot of best practices out there. There are not a lot of industry standards we can adhere to. The IT general control frameworks don't even point to what you need to do from a virtualization perspective.

In a down economy, it's not simply the fact that we have to worry about privileged users and our employees ... We also have to worry about these new technologies that we're adopting to become more agile as a business.

There's a whole set of security issues related to cloud computing -- things like compliance and regulation, for example. If you're an organization that is subject to things like the payment card industry data security standard (PCI DSS) or some of the banking regulations in the United States, are there certain applications and certain kinds of data that you will be able to put in a cloud? Maybe. Are there ones that you probably can't put in the cloud today, because you can't get visibility into the control environment that the cloud service provider has? Probably.

There's a whole set of issues related to security compliance and risk management that have to do with cloud services.

We need to shift the way we think about cloud computing. There is a lot of fear out there. It reminds me of 10 years back, when we talked about remote access into companies, VPN, and things like that. People were very fearful and said, "No way. We won't allow this." Now is the time for us to think about cloud computing. If it's done right and by a provider doing all the right things around security, would it be better or worse than it is today?

I'd argue it would be better, because you deal with somebody whose business relies on doing the right thing, versus a lot of processes and a lot of system issues.

Organizations want, at all costs, to avoid plowing ahead with architectures without considering security upfront and then dealing with the consequences. You could probably point to some of the recent breaches and draw the conclusion that maybe that's what happened.

Security to me is always a part of quality. When the quality falls down in IT operations, you normally see security issues popping up. We have to realize that the malicious potential and the effort put in by some of the groups behind these recent breaches are going up. It has to do with resources becoming cheaper, with the knowledge being freely available in the market. This is now on a large scale.

In order to keep up with this we need at least minimum best practices. Somebody mentioned earlier the worm outbreak, which really was enabled by a vulnerability that was quite old. That just points out that a lot of companies are not doing what they could do easily.

Enterprise architecture is the cornerstone of making security simpler and therefore more effective. The more you can plan, simplify structures, and build in security from the get-go, the more bang you get for the buck.

It's just like building a house. If you don't think about security, you have to add it later, and that will be very expensive. If it's part of the original design, then the things you need to do to secure it at the end will be very minimal. Plus, any changes down the road will also be easier from a security point of view, because you built for it, designed for it, and most important, you're aware of what you have.

Most large enterprises today struggle even to know what architecture they have. In many cases, they don't even know what they have. The trend we see here with architecture and security moving closer together is a trend we have seen in software development as well. It was always an afterthought, and eventually somebody made a calculation and said, "This is really expensive, and we need to build it in."

What we're seeing from a macro perspective is that the IT function within large enterprises is changing. It's undergoing this radical transformation, where the CSO/CISO is becoming a consultant to the business. The CSO/CISO is recognizing, from an operational risk perspective, what could potentially happen to the business, then designing the policies, the processes, and the architectural principles that need to be baked in, pushing them into the operational organization.

From an IT perspective, it's the individuals who are managing the software development release process, the people who are managing the change and configuration management process. Those are the guys who really now hold the keys to the kingdom, so to speak.

... My hope is that security and operations become much more aligned. It's hard to distinguish today between operations and security. So many of the functions overlap. I'll ask you again: change and configuration management, software development and release -- why is that not security? From my perspective, I'd like to see those two functions melding.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: The Open Group.

View more podcasts and resources from The Open Group's recent conferences and TOGAF 9 launch:

The Open Group's CEO Allen Brown interview

Live panel discussion on enterprise architecture trends

Deep dive into TOGAF 9 use benefits

Reporting on the TOGAF 9 launch

Panel discussion on cloud computing and enterprise architecture


Access the conference proceedings

General TOGAF 9 information

Introduction to TOGAF 9 whitepaper

Whitepaper on migrating from TOGAF 8.1.1 to version 9

TOGAF 9 certification information


TOGAF 9 Commercial Licensing program information

Friday, February 13, 2009

Interview: Guillaume Nodet and Adrian Trenaman on Apache ServiceMix and role of OSGi in OSS clouds

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: Progress Software.

Read a full transcript of the discussion.

Apache Software Foundation open source projects, OSGi, service-oriented architecture (SOA) developments, and cloud computing trends are converging. The do-more-for-less mandate of the day is accelerating interest in how these open source technologies and deployment models can work well together.

As SOA and open-source projects have already collided, and as OSGi is gaining favor as the container model du jour, it makes a great deal of sense to apply these software advances to the need for higher productivity and lower total costs on the business side. Open source on-premises enterprise clouds that can interact well with open standards-oriented third-party clouds may well become the de facto boundaryless services fabric approach. [Access other FUSE community podcasts.]

To discern how open source infrastructure trends saddle up to the private cloud hubbub, I recently talked with some thought leaders and community development leaders to assess the possible patterns of adoption. I interviewed Guillaume Nodet, software architect at Progress Software and vice president of Apache ServiceMix at Apache, and Adrian Trenaman, distinguished consultant at Progress Software.

Here are some excerpts:
Trenaman: I think open source becomes a very natural and desirable approach in terms of the technologies you use to access the cloud and actually implement services on the cloud. Then, in order to get those services there in the first place, SOA is pivotal. The best practices and designs that we got from the years we have been doing SOA certainly come into play there.

Certainly, you could always see the ESBs being sort of on the periphery of the cloud, getting data in and out. That's a clear use case. There is something a little sweeter, though, about Apache ServiceMix, particularly ServiceMix 4.0, because it's absolutely geared for dynamic provisioning.

You can imagine having an instance of ServiceMix 4.0 that you know is maybe just an image that you are running on several virtual machines. The first thing it does is contact a grid controller and says, “Well, okay, what bundles do you want me to deploy?” That means we can actually have the grid controller farming out particular applications to the containers that are available.

If a container goes down, then the grid controller will restart applications or bundles on different computing resources. With OSGi at the core of ServiceMix, at the core of the ESB, that's a step forward now in terms of dynamic provisioning and really toward an autonomous computing infrastructure.
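
For readers who want to picture that provisioning step, here is a hedged sketch using the standard OSGi framework API: on startup, a container asks a controller which bundles it should run, then installs and starts them. The GridController class and the bundle location are hypothetical placeholders for illustration; only the BundleActivator, BundleContext, and Bundle calls are standard OSGi.

    import java.util.List;

    import org.osgi.framework.Bundle;
    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.BundleException;

    // Illustrative provisioning activator: pull this node's assignment and start it.
    public class ProvisioningActivator implements BundleActivator {

        @Override
        public void start(BundleContext context) throws BundleException {
            for (String bundleLocation : GridController.assignedBundles()) {
                Bundle bundle = context.installBundle(bundleLocation); // standard OSGi install
                bundle.start();                                        // standard OSGi lifecycle call
            }
        }

        @Override
        public void stop(BundleContext context) {
            // Bundles installed above are stopped by the framework on shutdown.
        }
    }

    // Hypothetical stand-in for whatever hands out assignments in a real grid.
    class GridController {
        static List<String> assignedBundles() {
            return List.of("file:/opt/apps/orders-service-1.0.jar");
        }
    }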

... For me, what OSGi gives us is clearly a much better plug-in framework, into which we can drop value-added services and which we can extend. I think the OSGi framework is great for that, as well as in terms of management, maybe moving toward grid computing. The stuff that we get from OSGi allows us to be far more dynamic in the way we provision services.

Nodet: Another thing I just want to add about ServiceMix 4.0, complementing what Adrian just said, is that ServiceMix has split into several sub-projects. One of them is ServiceMix Kernel, which is an OSGi-enhanced runtime that can be used for provisioning, and this container is able to deploy virtually any kind of artifact. So, it can support Web applications, and it can support JBI artifacts, because the JBI container reuses it, but you can really deploy anything that you want.

So, this piece of software can really be leveraged in a cloud infrastructure by deploying virtually any application that you want. It could be plain Web services, without using an ESB, if you don't have such a need. So it's really pervasive.

... ServiceMix has long been a way that you can distribute your SOA artifacts. ServiceMix is an ESB and by nature, it can be distributed, so it's really easy to start several instances of ServiceMix and make them seamlessly talk together in a high availability way.

The thing that you do not really see yet is all the management and all the monitoring stuff that is needed when you deploy in such an architecture. So ServiceMix can readily be used to provide the core infrastructure.

ServiceMix itself does not aim at providing all the management tools that you could find from either commercial vendors or even open-source. So, on this particular topic, ServiceMix, backed by Progress, is bringing a lot of value to our customers. Progress now has the ability to provide such software.

Trenaman: We recently finished a project in mobile health, where we used ServiceMix to take information from a government health backbone, using HL7-formatted messages, and get that information onto the PDAs of health-care officials like doctors and nurses. So this is a really, really interesting use case in the healthcare arena, where we've got ServiceMix in deployment.

It’s used in a number of cases as well for financial messaging. Recently, I was working with a customer, who hoped to use ServiceMix to route messages between central securities depositories, so they were using SWIFT messages over ServiceMix. We’re getting to see a really nice uptake of new users in new areas, but we also have lots of battle-hardened deployments now in production.

... OSGi is the state of the art in terms of deployment. It really is what we've all wanted for years. I've lost enough follicles on my head fixing class-path issues and that kind of class-path hell.

OSGi gives us a badly needed packaging system and a component-based modular deployment system for Java. It piles in some really neat features in terms of life cycle -- being able to start and shut down services, define dependencies between services and between deployment bundles, and also then to do versioning as well.

The ability to have multiple versions of the same service in the same JVM with no class-path conflicts is a massive success. What OSGi really does is clean up the air in terms of Java deployment and Java modularity. So, for me, it's an absolute no-brainer, and I have seen customers who have led the charge on this. This modular framework is not necessarily something that the industry is pushing on the consumers. The consumers are actually pulling us along.

I have worked with customers who have been using OSGi for the last year-and-a-half or two years, and they are making great strides in terms of making their application architecture clean and modular and very easy and flexible to deploy. So, I’ve seen a lot of goodness come out of OSGi and the enterprise.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: Progress Software.

Thursday, February 12, 2009

WSO2 announces componentized framework for expansive SOA deployment and integration

A full and componentized service-oriented architecture (SOA) framework is the latest offering from WSO2, the open-source SOA platform provider.

The Mountain View, Calif. company has announced the general availability of WSO2 Carbon, which will allow users to deploy only the components they need and simplify middleware integration. [Disclosure: WSO2 is a sponsor of BriefingsDirect podcasts.]

It's amazing to me that this amount of SOA and web development and deployment technology is available in open source. It's really an impressive feat, with many parties around the world responsible, to produce so much code in a fairly brief time. Congrats to the effort, and to the whole Apache model.

Built on the increasingly popular OSGi specification, the framework is accompanied by four related products:
  • WSO2 Web Services Application Server (WSAS) 3.0
  • WSO2 Enterprise Service Bus (ESB 2.0)
  • WSO2 Registry
  • WSO2 Business Process Server (BPS)
The Carbon framework provides such enterprise capabilities as management, security, clustering, logging, statistics, and tracing. Also included is a "try it" testing function. Developers can deploy, manage, and view services from a graphical unified management console.

The componentized OSS platform changes the way developers implement SOA middleware. They no longer need to download both the WSAS and ESB as separate products. They can, for example, start with the ESB, which includes the framework, and then add the other functionality as components.

The components of the Carbon platform are based on Apache Software Foundation projects, including Apache ODE, Axis2, Synapse, Tomcat, and Axiom, among many other core libraries. Other key features include:
  • Full registry/repository integration that allows a complete distributed Carbon fabric to be driven from a central WSO2 Registry instance.
  • Eventing support, including a WS-Eventing Broker, to support event driven architectures (EDA).
  • WS-Policy Editor for defining Web service dependencies and other attributes.
  • Transactional support for JMS and JDBC, facilitating error handling for services and ESB flows.
  • Transport management control for all services.
  • Active Directory and LDAP support across all products, providing integration into existing user stores including Microsoft environments.
WSAS 3.0 offers enhanced flexibility for configuring SOAs. Developers can separate the administration console logic from the service-hosting engine of WSAS 3.0, making it possible to use a single front-end server to administer several back-end servers simultaneously.

Other enhancements in the WSAS 3.0 are:
  • XSLT-to-XQuery transformation for Java and Data Services.
  • Enhanced administration user interface.
  • WS-Policy Editor to configure services using the W3C standard.
  • Improved support for Microsoft Active Directory allowing administrators to integrate WSAS into existing user management infrastructure.
ESB 2.0 allows developers to plug in extra components to handle tasks like service hosting, business process management and SOA governance without disrupting existing flows and configuration. Developers can also separate the management console logic from the ESB routing and transformation engine of the ESB 2.0, making it possible to use a single front-end management console to administer several back-end ESB instances simultaneously.

Other key features of the WSO2 ESB 2.0 include:
  • Enhanced sequence designer, which lets users develop ESB flow logic using a wide variety of built-in mediators, as well as customer-provided code.
  • An enhanced proxy service wizard, which provides the ability to create a robust proxy service using simple editors to configure the behavior.
  • Support for events
  • A new security management wizard.
Registry 2.0 includes significant improvements to the publication and management of WSDL-based services. It lets users define custom lifecycles with conditional state transitions. Additionally, it offers well-defined extension points for a flexible, plug-in approach to linking resources and allows users to encode their own governance rules and policies.

WSO2 BPS, powered by the Apache Ode BPEL engine, provides a full BPEL runtime, deploys business processes written following the WS-BPEL 2.0 and BPEL4WS 1.1 standards, and manages BPEL packages, processes and instances. Other key features include:
  • Eclipse BPEL support, including the ability to work with Eclipse BPEL tooling and the availability of a plug-in to deploy Eclipse-developed processes in WSO2 BPS.
  • Caching and throttling support for business processes to ensure optimal performance and availability.
  • Shutdown/restart support, which allows the administrator to suspend, resume and terminate processes.
  • Transport management allowing simple configuration of JMS, Mail, File and HTTP transports.
  • Full security via the core Carbon framework, including authentication and authorization, with full support for WS-Trust, WS-Security and WS-SecureConversation.
Four products based on Carbon are available for download today from http://wso2.com: the WSAS 3.0, ESB 2.0, WSO2 Registry 2.0, and the new WSO2 Business Process Server 1.0. Developers need to download one of the four products in order to get the core Carbon framework and unified management console that drive all of the components.

Individual components will be available within one month, allowing developers to simply add new capabilities to any of the core products as needed. Componentized versions of the WSO2 Mashup Server and WSO2 Data Services are expected to roll out in mid-2009.

Incidentally, in October, a new data services offering arrived from WSO2 that allows a database administrator (DBA) or anyone with a knowledge of SQL to access enterprise data and expose it to services and operations through a Web services application-programming interface (API).