Tuesday, February 24, 2009

Enterprise IT architecture advocacy groups merge to promote wider standards adoption and global member services reach

Enterprise architecture, and the goal of aligning business objectives with standardized IT best practices, took a major step forward with this week's announcement that the Association of Open Group Enterprise Architects (AOGEA) will merge with the Global Enterprise Architects Organization (GEAO).

The two groups will operate under their own names for the time being, but their combined efforts will be administered by The Open Group, a vendor- and technology-neutral consortium that recently published the latest version of its architectural framework, TOGAF 9. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

The goal of the merger is to offer the 9,000 combined members opportunities for certification and to establish standards for excellence. The Open Group currently offers its IT Architect Certification (ITAC), along with ongoing advocacy and education services and peer networking opportunities.

I've long been a believer that architecture is destiny, and that aligning business goals with IT initiatives is made more critical by the current economic situation. Adherence to good architectural principles pays the greatest dividends when IT organizations need to support the business through turbulent times. The ability to react swiftly and securely, and to use IT as a business differentiator, can make or break many companies.

According to The Open Group, the combined organization will deliver expanded value to current AOGEA and GEAO members by providing them with access to an increased range of programs and services. For example, AOGEA members will benefit from the GEAO’s programs and content focused on business skills, whereas GEAO members will benefit from the AOGEA’s distinct focus on professional standards and technical excellence.

Allen Brown, The Open Group's president and CEO, explained:
“The GEAO’s proven track record in furthering business skills for its members and AOGEA’s emphasis on professional standards and technical excellence will provide expanded value for our joint members, as well as their employers and clients.”
I recently had a series of wide-ranging interviews with officials and members of The Open Group at their 21st Enterprise Architecture Practitioners Conference in San Diego, in which we discussed cloud computing, security, and the effects of the economic decline on the need for proper enterprise architecture.

Thursday, February 19, 2009

Cloud computing aligns with enterprise architecture to make each more useful, say experts

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: The Open Group.

Read a full transcript of the discussion.

A panel of experts was assembled earlier this month at The Open Group's Enterprise Cloud Computing Conference in San Diego to examine how cloud computing aligns with enterprise architecture.

The discussion raised the question: What will real enterprises need to do in the coming years to gain savings and productivity by exploiting cloud computing resources and methods? In essence, this becomes a discussion about real-world cloud computing.

To gain deeper insights into how IT architects can bring cloud computing benefits to their businesses, I queried panelists Lauren States, vice president in IBM's Software Group; Russ Daniels, vice president and CTO of Cloud Services Strategy at Hewlett-Packard; and David Linthicum, founder of Blue Mountain Labs.

Here are some excerpts:
Linthicum: You need to assess your existing architecture. Cloud computing is not going to be a mechanism to fix architecture. It’s a mechanism as a solution pattern for architecture. So, you need to do a self-assessment as to what's working, and what's not working within your own enterprise, before you start tossing things outside of the firewall onto the platform in the cloud.

Once you do that, you need to have a good data-level understanding, process-level understanding, and service-level understanding of the domain. Then, try to figure out exactly which processes, services, and information are good candidates for cloud computing.

... Not everything is applicable for cloud computing. In fact, 50 percent of the applications that I look at are not good candidates for cloud. You need to consider that in the context of the hype.

States: ... The other aspect that's really important is the organizational governance and culture part of it, which is true for anything. It's particularly true for us in IT, because sometimes we see the promise of the technology, but we forget about people.

Among the clients I've been working with, there have been discussions around, "How does this affect operations? Can we change processes? What about the workflows? Will people accept the changes in their jobs? Will the organization be able to absorb the technology?"

Enterprise architecture is robust enough to combine not only the technology but also the business processes, best practices, and methodologies required to make this journey and take advantage of what technology has to offer.

Daniels: It's very easy to start with technology and then try to view the technology itself as a solution. It's probably not the best place to start. It's a whole lot more useful if you start with the business concerns. What are you trying to accomplish for the business? Then, select from the various models the best way to meet those kinds of needs.

When you think about the concept of, "I want to be able to get the economies of the cloud -- there is this new model that allows me to deliver compute capacity at much lower cost," we think that it's important to understand where those economics really come from and what underlies them. It's not simply that you can pay for infrastructure on demand, but it has a lot to do with the way the software workload itself is designed.

There's a huge economic value ... if the software can take advantage of horizontal scaling -- if you can add compute capacity easily in a commodity environment to be able to meet demand, and then remove the capacity and use it for another purpose when the demand subsides.
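
[As an illustrative aside: here is a toy sketch of the scale-out/scale-in loop Daniels describes, adding commodity capacity as demand rises and releasing it when demand subsides. The CloudApi interface is a hypothetical stand-in, not any particular provider's API.]

// Toy sketch of elastic, horizontal scaling: grow the pool to meet demand,
// shrink it when demand subsides so the capacity can be reused elsewhere.
public class ElasticPool {
    interface CloudApi {                 // hypothetical provisioning interface
        String provisionInstance();      // returns an instance id
        void releaseInstance(String id);
    }

    private final CloudApi cloud;
    private final java.util.Deque<String> instances = new java.util.ArrayDeque<String>();

    ElasticPool(CloudApi cloud) { this.cloud = cloud; }

    // Called periodically with the observed load and the capacity one instance can handle.
    void rebalance(double currentLoad, double perInstanceCapacity) {
        int needed = Math.max(1, (int) Math.ceil(currentLoad / perInstanceCapacity));
        while (instances.size() < needed) {            // demand rising: scale out
            instances.push(cloud.provisionInstance());
        }
        while (instances.size() > needed) {            // demand falling: scale in
            cloud.releaseInstance(instances.pop());
        }
    }
}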

... There's a particular class of services, needs for the business, that when you try to address them in the traditional application-centric models, many of those projects are too expensive to start or they tend to be so complex that they fail. Those are the ones where [cloud computing] is particularly worthwhile to consider, "Could I do these more effectively, with a higher value to the business and with better results, if I were to shift to a cloud-based approach, rather than a traditional IT delivery model?"

It's really a question of whether there are things the business needs that, every time we try to do them in the traditional way, fail, underdeliver, are too slow, or don't satisfy the real business needs. Those are the ones where it's worthwhile taking a look and saying, "What if we were to use cloud to do them?"

Linthicum: Lots of my clients are building what I call rogue clouds. In other words, without any kind of sponsorship from the IT department, they're going out there to Google App Engine. They're building these huge Python applications and deploying them as a mechanism to solve some kind of a tactical business need that they have.

Well, they didn't factor in maintenance, and right now, they're going back to the IT group asking for forgiveness and trying to incorporate that application into the infrastructure. Of course, they don't do Python in IT. They have security issues around all kinds of things, and the application ends up going away. All that effort was for naught.

You need to work with your corporate infrastructure and you need to work under the domain of corporate governance. You need to understand the common policy and the common strategy that the corporation has and adhere to it. That's how you move to cloud computing.

States: One of our internal clouds -- our technology adoption program, which provides compute resources and services to our technical community so that they can innovate -- has actually delivered unbelievable ROI so far: an 83 percent reduction in cost and a payback of less than 90 days.

We're now calibrating this with other clients who are typically starting with their application test and development workloads, which are good environments because there is a lot of efficiency to be had there. They can experiment with elasticity of capacity, and it's not production, so it doesn't carry the same risk.

Daniels: Our view is about where the real benefits, the really significant cost savings, can be gained. If you simply apply virtualization and automation technologies, you can get a significant reduction in cost. Self-service delivery, too, can have a huge internal impact. But a much larger savings can be had if you can restructure the software itself so that it can be delivered and amortized across a much larger user base.

There is a class of workloads where you can see orders-of-magnitude decreases in cost, but it requires competencies, and it first requires ownership of the intellectual property. If you depend upon some third party for the capability, then you can't get those benefits until that third party goes through the work to realize it for you.

Very simply, the cloud represents new design opportunities, and the reason that enterprise architecture is so fundamental to the success of enterprises is the role that design plays in the success of the enterprise.

The cloud adds a new expressiveness, but imagining that the technology just makes it all better is silly. You really have to think about what problems you're trying to solve, and where a design approach that exploits the cloud generates real benefits.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: The Open Group.

View more podcasts and resources from The Open Group's recent conferences and TOGAF 9 launch:

The Open Group's CEO Allen Brown interview

Live panel discussion on enterprise architecture trends

Deep dive into TOGAF 9 use benefits

Reporting on the TOGAF 9 launch

Panel discussion on security trends and needs

Access the conference proceedings

General TOGAF 9 information

Introduction to TOGAF 9 whitepaper

Whitepaper on migrating from TOGAF 8.1.1 to version 9

TOGAF 9 certification information


TOGAF 9 Commercial Licensing program information

Tuesday, February 17, 2009

LogLogic delivers integrated suite for securely managing enterprise-wide log data

Companies faced with a tsunami of regulations and compliance requirements could soon find themselves drowning in a sea of log data from their IT systems. LogLogic, the log management provider, today threw these companies a lifeline with a suite of products that form an integrated solution for dealing with audits, compliance, and threats.

The San Jose, Calif., company announced the current and upcoming availability of LogLogic Compliance Manager, LogLogic Security Event Manager, and LogLogic Database Security Manager. [Disclosure: LogLogic is a sponsor of BriefingsDirect podcasts.]

A typical data center nowadays generates more than a terabyte of log data per day, according to LogLogic. With requirements to archive this data for seven years, a printed version could stretch to the moon and back 10 times. LogLogic's new offerings are designed to aid companies in collecting, storing, and analyzing this growing trove of systems operational data.
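
A quick back-of-the-envelope check of those figures, assuming a flat one terabyte per day and a seven-year retention horizon (a sketch; real volumes vary and compress):

// Rough arithmetic behind the retention claim above: 1 TB/day kept for 7 years.
public class LogRetentionEstimate {
    public static void main(String[] args) {
        double terabytesPerDay = 1.0;   // LogLogic's cited figure for a typical data center
        int retentionYears = 7;         // common regulatory retention horizon
        double totalTb = terabytesPerDay * 365 * retentionYears;
        // Prints roughly 2555 TB, i.e. about 2.5 PB per data center before compression.
        System.out.printf("Raw log data retained: %.0f TB (about %.1f PB)%n",
                totalTb, totalTb / 1024);
    }
}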

Compliance Manager helps automate compliance-approval workflows and review tracking, translating "compliance speak" into plainer language. It also maps compliance reports to specific regulatory control objectives, helps automate the business processes associated with compliance review, and provides a dashboard overview with an at-a-glance scorecard of an organization's current position.

Security Event Manager, powered by LogLogic partner Exaprotect, performs complex event correlation, threat detection, and security incident management workflow, either across a department or the entire enterprise.

LogLogic's partner Exaprotect, Mountain View, Calif., is a provider of enterprise security management for organizations with large-scale, heterogeneous infrastructures.

The LogLogic combined solution analyzes thousands of events in near real time from security devices, operating systems, databases, and applications and can uncover and prioritize mission-critical security events.

Database Security Manager monitors privileged-user activities and protected data stored within database systems. With granular, policy-based detection, integrated prevention, and real-time virtual patch capabilities, security analysts can independently monitor privileged users and enforce segregation of duties without impacting database performance.

Because of the integrated nature of the products, information can be shared across the log management system. For example, database security events can be sent to Compliance Manager for review or to Security Event Manager for prioritization and escalation.

What intrigues me about log data management is the increased role it will play in governance of services, workflow and business processes -- both inside and outside of an organization's boundaries. Precious few resources exist to correlate the behavior of business services with underlying systems.

By making certain log data available to more players in a distributed business process, it becomes easier to detect faults and provide root-cause analysis. The governance benefit works as a two-way street, too. As SLAs and other higher-order governance capabilities point to a need for infrastructure adjustments, the log data trail offers insight and verification.

In short, managed log data is an essential ingredient of any services lifecycle management and governance capability. The lifecycle approach becomes more critical as cloud computing, virtualization, SOA, and CEP grow more common and important.

Lastly, thanks to such technologies as MapReduce, the ability to scour huge quantities of systems log data quickly, with "BI for IT" depth -- and at a managed cost -- becomes attainable. I expect to see these "BI for IT" benefits applied to more problems of complexity and governance over the coming years. The cost-benefit analysis is a no-brainer.
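
For a sense of what that "BI for IT" pattern looks like, here is a minimal single-machine sketch of the map/reduce shape of such a job -- mapping each log line to a key and reducing by counting -- which a framework such as Hadoop would distribute across many nodes. The sample log lines are invented for illustration.

import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the map/reduce pattern behind log analytics:
// "map" each log line to a key (here, the source host), then "reduce" by
// counting events per key. A distributed framework runs this same shape of
// job across many nodes; this single-JVM version only illustrates the idea.
public class LogEventCounts {
    public static void main(String[] args) {
        String[] logLines = {
            "2009-02-17T10:01:00 host-a sshd failed-login",
            "2009-02-17T10:01:02 host-b httpd 500",
            "2009-02-17T10:01:05 host-a sshd failed-login"
        };

        Map<String, Integer> eventsPerHost = new HashMap<String, Integer>();
        for (String line : logLines) {
            String host = line.split("\\s+")[1];                      // map: extract the key
            Integer count = eventsPerHost.get(host);
            eventsPerHost.put(host, count == null ? 1 : count + 1);   // reduce: count per key
        }

        for (Map.Entry<String, Integer> e : eventsPerHost.entrySet()) {
            System.out.println(e.getKey() + " -> " + e.getValue() + " events");
        }
    }
}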

Security Event Manager is available immediately. Compliance Manager is available to early adopters immediately and will be generally available in March. Database Security Manager will be available in the second quarter of this year.

More information on the new products is available in LogLogic's screencasts at http://www.loglogic.com/logpower.

Saturday, February 14, 2009

Effective enterprise security begins and ends with architectural best practices approach

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: The Open Group.

Read a full transcript of the discussion.

The Open Group, a vendor- and technology-neutral consortium, in February held its first Security Practitioners Conference in San Diego. A panel of experts was assembled at the conference to examine how enterprise security intersects with enterprise architecture.

Aligning the two deepens the security protection across more planning- and architectural-level activities, to make security pervasive -- and certainly not an afterthought.

To gain deeper insights into how IT architects can bring security and reduced risk to businesses, I queried panelists Chenxi Wang, principal analyst for security and risk management at Forrester Research; Kristin Lovejoy, director of corporate security strategy at IBM; Nils Puhlmann, chief security officer and vice president of risk management at Qualys; and Jim Hietala, vice president of security for The Open Group.

Here are some excerpts:
In a down economy, like we have today, a lot of organizations are adopting new technologies, such as Web 2.0, service-oriented architecture (SOA)-style applications, and virtualization.

... They are doing it because of the economy of scale that you can get from those technologies. The problem is that these new technologies don't necessarily have the same security constructs built in.

Take Web 2.0 and SOA-style composite applications, for example. The problem with composite applications is that, as we're building them, we don't know the source of the widget. We don't know whether these applications have been built with good, secure design. In the long term, that becomes problematic for the organizations that use them.

It's the same with virtualization. There hasn't been a lot of thought put into what it means to secure a virtual system. There are not a lot of best practices out there. There are not a lot of industry standards we can adhere to. The IT general control frameworks don't even point to what you need to do from a virtualization perspective.

In a down economy, it's not simply the fact that we have to worry about privileged users and our employees ... We also have to worry about these new technologies that we're adopting to become more agile as a business.

There's a whole set of security issues related to cloud computing -- things like compliance and regulation, for example. If you're an organization that is subject to things like the payment card industry data security standard (PCI DSS) or some of the banking regulations in the United States, are there certain applications and certain kinds of data that you will be able to put in a cloud? Maybe. Are there ones that you probably can't put in the cloud today, because you can't get visibility into the control environment that the cloud service provider has? Probably.

There's a whole set of issues related to security compliance and risk management that have to do with cloud services.

We need to shift the way we think about cloud computing. There is a lot of fear out there. It reminds me of 10 years back, when we talked about remote access into companies, VPN, and things like that. People were very fearful and said, "No way. We won't allow this." Now is the time for us to think about cloud computing. If it's done right and by a provider doing all the right things around security, would it be better or worse than it is today?

I'd argue it would be better, because you deal with somebody whose business relies on doing the right thing, versus a lot of processes and a lot of system issues.

Organizations want, at all costs, to avoid plowing ahead with architectures without considering security upfront, and then dealing with the consequences. You could probably point to some of the recent breaches and draw the conclusion that maybe that's what happened.

Security to me is always a part of quality. When the quality falls down in IT operations, you normally see security issues popping up. We have to realize that the malicious potential and the effort put in by some of the groups behind these recent breaches are going up. It has to do with resources becoming cheaper, with the knowledge being freely available in the market. This is now on a large scale.

In order to keep up with this, we need at least minimum best practices. Somebody mentioned earlier the worm outbreak, which really was enabled by a vulnerability that was quite old. That just points out that a lot of companies are not doing what they could do easily.

Enterprise architecture is the cornerstone of making security simpler and therefore more effective. The more you can plan, simplify structures, and build in security from the get-go, the more bang you get for the buck.

It's just like building a house. If you don't think about security, you have to add it later, and that will be very expensive. If it's part of the original design, then the things you need to do to secure it at the end will be very minimal. Plus, any changes down the road will also be easier from a security point of view, because you built for it, designed for it, and most important, you're aware of what you have.

Most large enterprises today struggle even to know what architecture they have. In many cases, they don't even know what they have. The trend we see here with architecture and security moving closer together is a trend we have seen in software development as well. It was always an afterthought, and eventually somebody made a calculation and said, "This is really expensive, and we need to build it in."

What we're seeing from a macro perspective is that the IT function within large enterprises is changing. It's undergoing this radical transformation, where the CSO/CISO is becoming a consultant to the business. The CSO/CISO is recognizing, from an operational risk perspective, what could potentially happen to the business, then designing the policies, the processes, and the architectural principles that need to be baked in, pushing them into the operational organization.

From an IT perspective, it's the individuals who are managing the software development and release process, and the people who are managing the change and configuration management process. Those are the guys who really now hold the keys to the kingdom, so to speak.

... My hope is that security and operations become much more aligned. It's hard to distinguish today between operations and security. So many of the functions overlap. I'll ask you again: change and configuration management, software development and release -- why is that not security? From my perspective, I'd like to see those two functions melding.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: The Open Group.

View more podcasts and resources from The Open Group's recent conferences and TOGAF 9 launch:

The Open Group's CEO Allen Brown interview

Live panel discussion on enterprise architecture trends

Deep dive into TOGAF 9 use benefits

Reporting on the TOGAF 9 launch

Panel discussion on cloud computing and enterprise architecture


Access the conference proceedings

General TOGAF 9 information

Introduction to TOGAF 9 whitepaper

Whitepaper on migrating from TOGAF 8.1.1 to version 9

TOGAF 9 certification information


TOGAF 9 Commercial Licensing program information

Friday, February 13, 2009

Interview: Guillaume Nodet and Adrian Trenaman on Apache ServiceMix and role of OSGi in OSS clouds

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: Progress Software.

Read a full transcript of the discussion.

Apache Software Foundation open source projects, OSGi, service-oriented architecture (SOA) developments, and cloud computing trends are converging. The do-more-for-less mandate of the day is accelerating interest in how these open source technologies and deployment models can work well together.

As SOA and open-source projects have already collided, and as OSGi gains favor as the container model du jour, it makes a great deal of sense to apply these software advances to the need for higher productivity and lower total costs on the business side. Open source, on-premises enterprise clouds that can interact well with open, standards-oriented third-party clouds may well become the de facto boundaryless services fabric approach. [Access other FUSE community podcasts.]

To discern how open source infrastructure trends saddle up to the private cloud hubbub, I recently talked with some thought leaders and community development leaders to assess the possible patterns of adoption. I interviewed Guillaume Nodet, software architect at Progress Software and vice president of Apache ServiceMix at Apache, and Adrian Trenaman, distinguished consultant at Progress Software.

Here are some excerpts:
Trenaman: I think open source becomes a very natural and desirable approach in terms of the technologies you're going to use to access the cloud and actually implement services on the cloud. Then, in order to get those services there in the first place, SOA is pivotal. The best practices and designs that we got from the years we have been doing SOA certainly come into play there.

Certainly, you could always see the ESBs being sort of on the periphery of the cloud, getting data in and out. That's a clear use case. There is something a little sweeter, though, about Apache ServiceMix, particularly ServiceMix 4.0, because it's absolutely geared for dynamic provisioning.

You can imagine having an instance of ServiceMix 4.0 that you know is maybe just an image that you are running on several virtual machines. The first thing it does is contact a grid controller and says, “Well, okay, what bundles do you want me to deploy?” That means we can actually have the grid controller farming out particular applications to the containers that are available.

If a container goes down, then the grid controller will restart applications or bundles on different computing resources. With OSGi at the core of ServiceMix, at the core of the ESB, that's a step forward now in terms of dynamic provisioning and really something like an autonomous computing infrastructure.
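
[As a rough illustration of that provisioning pattern: a sketch using the standard OSGi calls (installBundle and start). The GridController interface and its registration are hypothetical stand-ins, not ServiceMix's actual provisioning service.]

import org.osgi.framework.Bundle;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

// Sketch: on startup a node asks a controller which bundles it should run,
// then installs and starts them through the standard OSGi API.
public class ProvisioningActivator implements BundleActivator {

    /** Hypothetical controller contract: returns bundle locations (URLs) for a node. */
    public interface GridController {
        String[] bundlesToDeploy(String nodeId);
    }

    public void start(BundleContext context) throws Exception {
        ServiceReference ref = context.getServiceReference(GridController.class.getName());
        if (ref == null) {
            return; // no controller registered; nothing to provision
        }
        GridController controller = (GridController) context.getService(ref);

        // Ask the controller what this node should host, then install and start it.
        for (String location : controller.bundlesToDeploy("node-1")) {
            Bundle bundle = context.installBundle(location);
            bundle.start();
        }
    }

    public void stop(BundleContext context) {
        // If this node goes down, the controller can restart its bundles elsewhere.
    }
}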

... For me, what OSGi gives us is clearly a much better plug-in framework, into which we can drop value-added services and which we can extend. I think the OSGi framework is great for that, as well as in terms of management, maybe moving toward grid computing. The stuff that we get from OSGi has allowed us to be far more dynamic in the way we provision services.

Nodet: Another thing I just want to add about ServiceMix 4.0, complementing what Adrian just said, is that ServiceMix has split into several sub-projects. One of them is ServiceMix Kernel, which is an OSGi-enhanced runtime that can be used for provisioning, and this container is able to deploy virtually any kind of artifact. So, it can support Web applications, and it can support JBI artifacts, because the JBI container reuses it, but you can really deploy anything that you want.

So, this piece of software can really be leveraged in a cloud infrastructure to deploy virtually any application that you want. It could be plain Web services without using an ESB, if you don't have such a need. So it's really pervasive.

... ServiceMix has long offered a way to distribute your SOA artifacts. ServiceMix is an ESB and, by nature, it can be distributed, so it's really easy to start several instances of ServiceMix and make them seamlessly talk together in a highly available way.

The thing that you do not really see yet is all the management and monitoring that's needed when you deploy in such an architecture. So ServiceMix can readily be used to fulfill the core infrastructure.

ServiceMix itself does not aim at providing all the management tools that you could find from either commercial vendors or even open-source. So, on this particular topic, ServiceMix, backed by Progress, is bringing a lot of value to our customers. Progress now has the ability to provide such software.

Trenaman: We recently finished a project in mobile health, where we used ServiceMix to take information from a government health backbone, using HL7-formatted messages, and get that information onto the PDAs of health-care officials such as doctors and nurses. So this is a really interesting use case in the healthcare arena, where we've got ServiceMix in deployment.

It’s used in a number of cases as well for financial messaging. Recently, I was working with a customer, who hoped to use ServiceMix to route messages between central securities depositories, so they were using SWIFT messages over ServiceMix. We’re getting to see a really nice uptake of new users in new areas, but we also have lots of battle-hardened deployments now in production.

... OSGi is the state of the art in terms of deployment. It really is what we've all wanted for years. I've lost enough follicles on my head fixing class-path issues and that kind of class-path hell.

OSGi gives us a badly needed packaging system and a component-based modular deployment system for Java. It piles in some really neat features in terms of life cycle -- being able to start and shut down services, define dependencies between services and between deployment bundles, and also then to do versioning as well.

The ability to have multiple versions of the same service in the same JVM with no class-path conflicts is a massive success. What OSGi really does is clean up the air in terms of Java deployment and Java modularity. So, for me, it's an absolute no-brainer, and I have seen customers who have led the charge on this. This modular framework is not necessarily something that the industry is pushing on the consumers. The consumers are actually pulling us along.

I have worked with customers who have been using OSGi for the last year-and-a-half or two years, and they are making great strides in terms of making their application architecture clean, modular, and very easy and flexible to deploy. So I've seen a lot of goodness come out of OSGi in the enterprise.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: Progress Software.

Thursday, February 12, 2009

WSO2 announces componentized framework for expansive SOA deployment and integration

A full and componentized service-oriented architecture (SOA) framework is the latest offering from WSO2, the open-source SOA platform provider.

The Mountain View, Calif. company has announced the general availability of WSO2 Carbon, which will allow users to deploy only the components they need and simplify middleware integration. [Disclosure: WSO2 is a sponsor of BriefingsDirect podcasts.]

It's amazing to me that this amount of SOA and Web development and deployment technology is available in open source. It's really an impressive feat, with many parties around the world responsible, to produce so much code in a fairly brief time. Congrats to the effort, and to the whole Apache model.

Built on the increasingly popular OSGi specification, the framework is accompanied by four related products:
  • WSO2 Web Services Application Server (WSAS) 3.0
  • WSO2 Enterprise Service Bus (ESB 2.0)
  • WSO2 Registry
  • WSO2 Business Process Server (BPS)
The Carbon framework provides such enterprise capabilities as management, security, clustering, logging, statistics, and tracing. Also included is a "try it" testing function. Developers can deploy, manage, and view services from a graphical unified management console.

The componentized OSS platform changes the way developers implement SOA middleware. They no longer need to download both the WSAS and ESB as separate products. They can, for example, start with the ESB, which includes the framework, and then add the other functionality as components.

The components of the Carbon platform are based on Apache Software Foundation projects, including Apache ODE, Axis2, Synapse, Tomcat, and Axiom, among many other core libraries. Other key features include:
  • Full registry/repository integration that allows a complete distributed Carbon fabric to be driven from a central WSO2 Registry instance.
  • Eventing support, including a WS-Eventing Broker, to support event driven architectures (EDA).
  • WS-Policy Editor for defining Web service dependencies and other attributes.
  • Transactional support for JMS and JDBC, facilitating error handling for services and ESB flows.
  • Transport management control for all services.
  • Active Directory and LDAP support across all products, providing integration into existing user stores including Microsoft environments.
WSAS 3.0 offers enhanced flexibility for configuring SOAs. Developers can separate the administration console logic from the service-hosting engine of WSAS 3.0, making it possible to use a single front-end server to administer several back-end servers simultaneously.

Other enhancements in the WSAS 3.0 are:
  • XSLT-to-XQuery transformation for Java and Data Services.
  • Enhanced administration user interface.
  • WS-Policy Editor to configure services using the W3C standard.
  • Improved support for Microsoft Active Directory, allowing administrators to integrate WSAS into existing user management infrastructure.
ESB 2.0 allows developers to plug in extra components to handle tasks like service hosting, business process management and SOA governance without disrupting existing flows and configuration. Developers can also separate the management console logic from the ESB routing and transformation engine of the ESB 2.0, making it possible to use a single front-end management console to administer several back-end ESB instances simultaneously.

Other key features of the WSO2 ESB 2.0 include:
  • Enhanced sequence designer, which lets users develop ESB flow logic using a wide variety of built-in mediators, as well as customer-provided code.
  • An enhanced proxy service wizard, which provides the ability to create a robust proxy service using simple editors to configure the behavior.
  • Support for events.
  • A new security management wizard.
Registry 2.0 includes significant improvements to the publication and management of WSDL-based services. It lets users define custom lifecycles with conditional state transitions. Additionally, it offers well-defined extension points for a flexible, plug-in approach to linking resources and allows users to encode their own governance rules and policies.

WSO2 BPS, powered by the Apache ODE BPEL engine, provides a full BPEL runtime, deploys business processes written to the WS-BPEL 2.0 and BPEL4WS 1.1 standards, and manages BPEL packages, processes, and instances. Other key features include:
  • Eclipse BPEL support, including the ability to work with Eclipse BPEL tooling and the availability of a plug-in to deploy Eclipse-developed processes in WSO2 BPS.
  • Caching and throttling support for business processes to ensure optimal performance and availability.
  • Shutdown/restart support, which allows the administrator to suspend, resume and terminate processes.
  • Transport management allowing simple configuration of JMS, Mail, File and HTTP transports.
  • Full security via the core Carbon framework, including authentication and authorization, with full support for WS-Trust, WS-Security and WS-SecureConversation.
Four products based on Carbon are available for download today from http://wso2.com: the WSAS 3.0, ESB 2.0, WSO2 Registry 2.0, and the new WSO2 Business Process Server 1.0. Developers need to download one of the four products in order to get the core Carbon framework and unified management console that drive all of the components.

Individual components will be available within one month, allowing developers to simply add new capabilities to any of the core products as needed. Componentized versions of the WSO2 Mashup Server and WSO2 Data Services are expected to roll out in mid-2009.

Incidentally, in October, a new data services offering arrived from WSO2 that allows a database administrator (DBA) or anyone with a knowledge of SQL to access enterprise data and expose it to services and operations through a Web services application-programming interface (API).

TOGAF 9 advances IT maturity while offering more paths to architecture-level IT improvement

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: The Open Group.

Read a full transcript of the discussion.

The Open Group, a vendor- and technology-neutral consortium, in early February delivered TOGAF 9, an enterprise architecture framework. TOGAF 9 represents a departure for enterprise architecture frameworks in general.

It's larger, more mature, and modular to allow folks to enter it from a variety of perspectives. It takes on a much more significant business services and accomplishments perspective. [Read more on a panel discussion about the importance of enterprise architecture.]

While IT practitioners and architects will be looking over TOGAF 9 in depth, it's also going to be of interest to the business side of the enterprise, offering a way for them to understand more about how IT can serve their business needs. [I also spoke last week to Allen Brown, the CEO of The Open Group, on trends in IT.]

To gain deeper insights into how IT architects can bring value to businesses, I recently interviewed two TOGAF experts, Robert Weisman, CEO and principal consultant for Build The Vision, and Mike Turner, an enterprise architect at Capgemini.

Here are some excerpts:
I can see architecture being an integral part of the business planning process. It structures the business plans and makes sure that the objectives are realizable. In other words, we can use the acronym SMART, specific, measurable, actionable, realizable, and time-bound. What TOGAF 9 does is provide an overarching vision and capability with which to cooperate.

I see architecture as a set of tools and techniques that can help you achieve what you want to do as a business. Taking architecture in isolation is not necessarily going to achieve the right things for your organization, because you actually need to have the direction as an input for architecture to support achievement of a particular outcome.

Architecture is really a vital tool for being able to assure that the correct business outcome is achieved. You need to have a structured approach to how you define the problem space that businesses are facing, then define the solution space, and define how you move from where you are right now to where you want to be.

TOGAF 9, first of all, is more business focused. Before that it was definitely in the IT realm, and IT was essentially defined as hardware and software. The definition of IT in TOGAF 9 is the lifecycle management of information and related technology within an organization. It puts much more emphasis on the actual information, its access, presentation, and quality, so that it can provide not only transaction processing support, but analytical processing support for critical business decisions.

The gestation took five years. I've been part of the forum for five years, working on TOGAF 9. Part of the challenge was that we had such an incredible uptake of TOGAF 8. Once a standard has been taken up, you can't change it on a dime. You don't want to change it on a dime, but you want to keep it dynamic, update it, and incorporate best practices. That would explain some of the gestation period. TOGAF 8 was very successful, and to get TOGAF 9 right, it was a little longer cycle, but I think it's been well worth the wait.

If you look at the industry in general, we're going through a process where the IT industry is maturing and becoming more stable, and change is becoming more incremental in the industry. What you see in architecture frameworks is a cycle of discovery, invention, and then consolidation that follows, as consensus is reached.

One thing that’s really key about TOGAF 9 is that it takes a lot of ideas and practices that exist within individual organizations or proprietary frameworks, building a consensus around it, and releasing it into a public-domain context.

Once that happens, the value you can get from that approach increases exponentially. Now, you're not talking about going to one vendor and having to deal with one particular set of concepts, and then going to a different vendor and having to deal with another set of concepts, and dealing with the interoperability between those.

You're in a situation where the industry agrees this is the way to do things. Suddenly, the economies of scale that you can get from that, as all the participants in the industry start to converge on that consensus, mean that you get a whole set of new opportunities for how you can use architecture.

TOGAF 9 is, in certain ways, an evolutionary change and in certain other ways a revolutionary change. The architecture development methodology has basically remained similar. However, transforming the architecture from concept into a reality has basically been expanded pretty dramatically, with a great many lessons learned. So, architecture transformation is a large one. Various architectural frameworks have been incorporated into it.

A great many concepts that allow enterprise architecture to be melded with operations management, system design, portfolio management, business planning, the Governance Institute's COBIT guidelines, and other industry standards have also been incorporated into TOGAF.

Also, there's been a major contribution by such companies as Capgemini, with respect to artifacts and structure. The content meta model is a huge contribution.

The term SOA is old wine in new bottles. It's been around for a long time. If you just have a service catalog, if you have duplicate services, it becomes very evident. That’s one of the advantages of the repositories -- you can have an insight into what you actually have.

TOGAF, from its outset in the early 1990s, has been service oriented for that. Just by applying TOGAF, you have a chance of doing your Gap Analysis, of having the visibility into what you have, which makes it not only efficient, but effective from a business perspective.

TOGAF allows you to understand what makes your business good and then identify what your services are in a way that considers all the different angles. Once that’s defined, you can then put the right technology underneath that to realize what the business is actually looking for. That’s something that can have an absolutely transformational effect on your business.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: The Open Group.

View more podcasts and resources from The Open Group's recent conferences and TOGAF 9 launch:

The Open Group's CEO Allen Brown interview

Live panel discussion on enterprise architecture trends

Reporting on the TOGAF 9 launch

Panel discussion on security trends and needs

Panel discussion on cloud computing and enterprise architecture


Access the conference proceedings

General TOGAF 9 information

Introduction to TOGAF 9 whitepaper

Whitepaper on migrating from TOGAF 8.1.1 to version 9

TOGAF 9 certification information


TOGAF 9 Commercial Licensing program information

Who makes most rain from IBM-Amazon cloud deal? Oracle.

It just goes to show how perplexing the gathering cloud marketplace is that the punditry are hesitant to meaningfully analyze this important development.

But Amazon -- with a history of bold and long-term bets -- and IBM -- with a history of making markets whether it's right about the future or not -- have made the best of bedfellows in the deal announced late Wednesday.

Why? Because IBM has taken the cloud bait ... big time. And Amazon has just partnered with the best enterprise IT channel on Earth.

Together they form the irresistible gravitational black hole from which Microsoft cannot escape. And Google is building another black hole right next door. And so is Salesforce.com.

Can Microsoft change universal physics? Not likely.

What this deal means is that Microsoft will need to adopt the cloud model all the more quickly and comprehensively -- across its software lines, not just a few. It's going to be Live Stack, not just Live Mesh. It's going to be buy once, run any which way.

It's build your on-premises cloud on IBM and insure against peaks and troughs with the elastic AWS hand-off. No more 20 percent utilization on umpteen licensed servers to guarantee reliability (I'm talking to you, Exchange Server).

And given the IBM license pincer move, the fungible enterprise-AWS license scheme will help shrink Microsoft's margins all the faster and deeper as a result. IBM will be selling cloud economics against Microsoft Software Assurance economics with a world-class and hungry sales force. Ouch.

You see, IBM can monetize across more business types -- hardware, storage, professional services, systems integration, infrastructure software, groupware software, specialized outsourcing and applications, and a lot more. Microsoft not so. IBM can adopt the cloud aggressively and find new innovative models from its diversified portfolio. Microsoft is hoping for the best with developers more than the operators, because it has no choice (and Wall Street knows it).

I don't expect Microsoft to do any similar deal with any cloud provider other than itself. IBM, on the other hand, can do similar deals with any cloud provider with the chops to produce the reliable and cost-effective compute fabric that's open to its SuSE Linux stack. The more clouds the better for IBM, while Microsoft will compete against those clouds. It's the MS-DOS license deal in reverse, at a higher abstraction.

Amazon, too, can tee up any number of enterprise software providers to channel the humongous global enterprise software market to AWS ... from a trickle to a stream to (who knows?). But that's still good money. And Amazon has a huge and growing lead in the ecumenical cloud department.

So we come to Oracle. Larry Ellison's entertaining position on cloud is a hedge. He knows the substantial cloud economy is inevitable, and he knows it's at least 10 years in the making. And he knows the transition will be ugly and bloody.

Best to let those two old antagonists IBM and Microsoft beat the crap out of each other, with Amazon as Burgess Meredith's Mickey Goldmill to IBM's Rocky and Microsoft's Apollo Creed. Then the build, buy or partner decision can be made by Oracle after all the money has been taken from the traditional enterprise data, applications, development, infrastructure and integration model (which has plenty of legs).

It's too soon to tell whether the rainmaker-enabled marketplace approach of IBM (remember Java, Linux, n-tier) will beat out the shoot-for-the-moon strategy of Microsoft when it comes to the cloud. But I like Oracle's margins better through 2016 as the battle ensues.