Tuesday, February 24, 2009

Enterprise IT architecture advocacy groups merge to promote wider standards adoption and global member services reach

Enterprise architecture and the goal of aligning business goals with standardized IT best practices took a major step forward with the announcement this week that the Association of Open Group Enterprise Architects (AOGEA) will merge with the Global Enterprise Architects Organization (GEAO).

The two groups will operate under their own names for the time being, but their combined efforts will be administered by The Open Group, a vendor- and technology-neutral consortium that recently published the latest version of its architectural framework, TOGAF 9. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

The goal of the merger is to offer the 9,000 combined members opportunities for certification and to establish standards for excellence. The Open Group currently offers its IT Architect Certification (ITAC), along with ongoing advocacy and education services and peer networking opportunities.

I've long been a believer that architecture is destiny, and that aligning business goals with IT initiatives is made more critical by the current economic situation. Adherence to good architectural principles pays the greatest dividends when IT organizations need to support the business through turbulent times. The ability to react swiftly and securely, and to use IT as a business differentiator, can make or break many companies.

According to The Open Group, the combined organization will deliver expanded value to current AOGEA and GEAO members by providing them with access to an increased range of programs and services. For example, AOGEA members will benefit from the GEAO’s programs and content focused on business skills, whereas GEAO members will benefit from the AOGEA’s distinct focus on professional standards and technical excellence.

Allen Brown, The Open Group's president and CEO explained:
“The GEAO’s proven track record in furthering business skills for its members and AOGEA’s emphasis on professional standards and technical excellence will provide expanded value for our joint members, as well as their employers and clients.”
I recently had a series of wide-ranging interviews with officials and members of The Open Group at their 21st Enterprise Architecture Practitioners Conference in San Diego, in which we discussed cloud computing, security, and the effects of the economic decline on the need for proper enterprise architecture.

Thursday, February 19, 2009

Cloud computing aligns with enterprise architecture to make each more useful, say experts

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: The Open Group.

Read a full transcript of the discussion.

A panel of experts was assembled earlier this month at The Open Group's Enterprise Cloud Computing Conference in San Diego to examine how cloud computing aligns with enterprise architecture.

The discussion raised the question: What will real enterprises need to do in the coming years to gain savings and productivity by exploiting cloud computing resources and methods? In essence, this becomes a discussion about real-world cloud computing.

To gain deeper insights into how IT architects can bring cloud computing benefits to their businesses, I queried panelists Lauren States, vice president in IBM's Software Group; Russ Daniels, vice president and CTO of Cloud Services Strategy at Hewlett-Packard; and David Linthicum, founder of Blue Mountain Labs.

Here are some excerpts:
Linthicum: You need to assess your existing architecture. Cloud computing is not going to be a mechanism to fix architecture. It's a mechanism, a solution pattern, for architecture. So, you need to do a self-assessment as to what's working, and what's not working, within your own enterprise, before you start tossing things outside of the firewall onto the platform in the cloud.

Once you do that, you need to have a good data-level understanding, process-level understanding, and service-level understanding of the domain. Then, try to figure out exactly which processes, services, and information are good candidates for cloud computing.

... Not everything is applicable for cloud computing. In fact, 50 percent of the applications that I look at are not good candidates for cloud. You need to consider that in the context of the hype.
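
Linthicum's self-assessment advice amounts to scoring workloads against cloud-suitability criteria. As a purely illustrative sketch (the criteria names, weights, and threshold below are hypothetical, not drawn from the panel), such a candidacy score might look like:

```python
# Toy cloud-candidacy rubric. Criteria and weights are invented
# for illustration, not an endorsed methodology.
CRITERIA = {
    "loosely_coupled": 3,       # few hard dependencies on internal systems
    "elastic_demand": 2,        # load varies enough to benefit from elasticity
    "low_data_sensitivity": 3,  # no regulated data leaving the firewall
    "commodity_workload": 1,    # no exotic hardware or OS requirements
}

def cloud_candidacy_score(app):
    """Sum the weights of the criteria an application satisfies."""
    return sum(w for c, w in CRITERIA.items() if app.get(c))

def is_candidate(app, threshold=6):
    return cloud_candidacy_score(app) >= threshold

crm = {"loosely_coupled": True, "elastic_demand": True,
       "low_data_sensitivity": True, "commodity_workload": True}
payroll = {"loosely_coupled": False, "elastic_demand": False,
           "low_data_sensitivity": False, "commodity_workload": True}

print(cloud_candidacy_score(crm))  # 9
print(is_candidate(crm))           # True
print(is_candidate(payroll))       # False
```

A rubric like this makes the "50 percent are not good candidates" point concrete: many workloads simply fail too many criteria to clear any sensible threshold.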

States: ... The other aspect that's really important is the organizational governance and culture part of it, which is true for anything. It's particularly true for us in IT, because sometimes we see the promise of the technology, but we forget about people.

Among clients I've been working with, there have been discussions around: "How does this affect operations? Can we change processes? What about the workflows? Will people accept the changes in their jobs? Will the organization be able to absorb the technology?"

Enterprise architecture is robust enough to combine not only the technology but the business processes, the best practices, and methodologies required to make this further journey to take advantage of what technology has to offer.

Daniels: It's very easy to start with technology and then try to view the technology itself as a solution. It's probably not the best place to start. It's a whole lot more useful if you start with the business concerns. What are you trying to accomplish for the business? Then, select from the various models the best way to meet those kinds of needs.

When you think about the concept of, "I want to be able to get the economies of the cloud -- there is this new model that allows me to deliver compute capacity at much lower cost," we think that it's important to understand where those economics really come from and what underlies them. It's not simply that you can pay for infrastructure on demand, but it has a lot to do with the way the software workload itself is designed.

There's a huge economic value ... if the software can take advantage of horizontal scaling -- if you can add compute capacity easily in a commodity environment to be able to meet demand, and then remove the capacity and use it for another purpose when the demand subsides.
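
Daniels' point about horizontal scaling can be made concrete with toy numbers (the hourly demand figures and unit cost below are invented for illustration): fixed provisioning must pay for peak capacity around the clock, while elastic provisioning pays only for what each hour actually needs.

```python
# Fixed vs. elastic provisioning over an 8-hour window.
# Demand numbers and unit cost are made up for illustration.
hourly_demand = [2, 2, 3, 8, 12, 12, 9, 4]  # servers needed per hour
unit_cost = 1.0                              # cost per server-hour

# Fixed: hold peak capacity for every hour.
fixed_cost = max(hourly_demand) * len(hourly_demand) * unit_cost
# Elastic: add and remove capacity as demand moves.
elastic_cost = sum(hourly_demand) * unit_cost

print(fixed_cost)    # 96.0
print(elastic_cost)  # 52.0
print(round(1 - elastic_cost / fixed_cost, 2))  # 0.46 -> ~46% saved
```

The savings come entirely from the workload's ability to scale horizontally; software that cannot shed capacity when demand subsides gets none of this benefit.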

... There's a particular class of services, needs for the business, that when you try to address them in the traditional application-centric models, many of those projects are too expensive to start or they tend to be so complex that they fail. Those are the ones where [cloud computing] is particularly worthwhile to consider, "Could I do these more effectively, with a higher value to the business and with better results, if I were to shift to a cloud-based approach, rather than a traditional IT delivery model?"

It's really a question of whether there are things that the business needs that, every time we try to do them in the traditional way, fail, underdeliver, are too slow, or don't satisfy the real business needs. Those are the ones where it's worthwhile taking a look and saying, "What if we were to use cloud to do them?"

Linthicum: Lots of my clients are building what I call rogue clouds. In other words, without any kind of sponsorship from the IT department, they're going out there to Google App Engine. They're building these huge Python applications and deploying them as a mechanism to solve some kind of a tactical business need that they have.

Well, they didn't factor in maintenance, and right now, they're going back to the IT group asking for forgiveness and trying to incorporate that application into the infrastructure. Of course, they don't do Python in IT. They have security issues around all kinds of things, and the application ends up going away. All that effort was for naught.

You need to work with your corporate infrastructure and you need to work under the domain of corporate governance. You need to understand the common policy and the common strategy that the corporation has and adhere to it. That's how you move to cloud computing.

States: One of our internal clouds -- our technology adoption program, which provides compute resources and services to our technical community so that they can innovate -- has actually had unbelievable ROI: an 83 percent reduction in cost and less than a 90-day payback.

We're now calibrating this with other clients who are typically starting with their application test and development workloads, which are good environments because there is a lot of efficiency to be had there. They can experiment with elasticity of capacity, and it's not production, so it doesn't carry the same risk.

Daniels: Our view is that there are real benefits, real and significant cost savings, to be gained. If you simply apply virtualization and automation technologies, you can get a significant reduction of cost. Again, self-service delivery can have a huge internal impact. But a much larger savings can be had if you can restructure the software itself so that it can be delivered and amortized across a much larger user base.

There is a class of workloads where you can see orders-of-magnitude decreases in cost, but it requires competencies, and first requires ownership of the intellectual property. If you depend upon a third party for the capability, then you can't get those benefits until that third party goes through the work to realize it for you.

Very simply, the cloud represents new design opportunities, and the reason that enterprise architecture is so fundamental to the success of enterprises is the role that design plays in the success of the enterprise.

The cloud adds a new expressiveness, but imagining that the technology just makes it all better is silly. You really have to think about what problems you're trying to solve, and where a design approach that exploits the cloud generates real benefits.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: The Open Group.

View more podcasts and resources from The Open Group's recent conferences and TOGAF 9 launch:

The Open Group's CEO Allen Brown interview

Live panel discussion on enterprise architecture trends

Deep dive into TOGAF 9 use benefits

Reporting on the TOGAF 9 launch

Panel discussion on security trends and needs

Access the conference proceedings

General TOGAF 9 information

Introduction to TOGAF 9 whitepaper

Whitepaper on migrating from TOGAF 8.1.1 to version 9

TOGAF 9 certification information

TOGAF 9 Commercial Licensing program information

Tuesday, February 17, 2009

LogLogic delivers integrated suite for securely managing enterprise-wide log data

Companies faced with a tsunami of regulations and compliance requirements could soon find themselves drowning in a sea of log data from their IT systems. LogLogic, the log management provider, today threw these companies a lifeline with a suite of products that form an integrated solution for dealing with audits, compliance, and threats.

The San Jose, Calif. company announced the current and upcoming availability of LogLogic Compliance Manager, LogLogic Security Event Manager, and LogLogic Database Security Manager. [Disclosure: LogLogic is a sponsor of BriefingsDirect podcasts.]

A typical data center nowadays generates more than a terabyte of log data per day, according to LogLogic. With requirements to archive this data for seven years, a printed version could stretch to the moon and back 10 times. LogLogic's new offerings are designed to aid companies in collecting, storing, and analyzing this growing trove of systems operational data.
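
The retention math behind those figures is easy to check (using LogLogic's stated one terabyte per day and a seven-year archive, and ignoring leap days):

```python
# Back-of-the-envelope check on the stated log retention volume.
tb_per_day = 1
retention_years = 7
days = 365 * retention_years  # ignoring leap days

total_tb = tb_per_day * days
total_pb = total_tb / 1000    # decimal petabytes

print(total_tb)  # 2555 TB
print(total_pb)  # 2.555 PB
```

Roughly two and a half petabytes per data center is the scale these products are built to collect, store, and analyze.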

Compliance Manager helps automate compliance-approval workflows and review tracking, translating "compliance speak" into more plain language. It also maps compliance reports to specific regulatory control objectives, helps automate the business process associated with compliance review and provides a dashboard overview with an at-a-glance scorecard of an organization's current position.

Security Event Manager, powered by LogLogic partner Exaprotect, performs complex event correlation, threat detection, and security incident management workflow, either across a department or the entire enterprise.

LogLogic's partner Exaprotect, Mountain View, Calif., is a provider of enterprise security management for organizations with large-scale, heterogeneous infrastructures.

The LogLogic combined solution analyzes thousands of events in near real time from security devices, operating systems, databases, and applications and can uncover and prioritize mission-critical security events.

Database Security Manager monitors privileged-user activities and protected data stored within database systems. With granular, policy-based detection, integrated prevention, and real-time virtual patch capabilities, security analysts can independently monitor privileged users and enforce segregation of duties without impacting database performance.
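
As an illustrative sketch only (the policy fields and event shape below are hypothetical, not LogLogic's actual interface), granular, policy-based detection of privileged-user activity might be expressed as predicates evaluated against database audit events:

```python
# Hypothetical policy-based detection over database audit events.
POLICIES = [
    # (policy name, predicate over an audit event)
    ("dba_reads_protected_table",
     lambda e: e["role"] == "dba" and e["table"] in {"salaries", "cards"}
               and e["action"] == "SELECT"),
    ("schema_change_outside_window",
     lambda e: e["action"] in {"DROP", "ALTER"} and not (2 <= e["hour"] <= 4)),
]

def evaluate(event):
    """Return the names of all policies the event violates."""
    return [name for name, pred in POLICIES if pred(event)]

evt = {"role": "dba", "table": "salaries", "action": "SELECT", "hour": 14}
print(evaluate(evt))  # ['dba_reads_protected_table']
```

Evaluating events outside the database itself is what lets analysts monitor privileged users independently, without loading the database server with the detection work.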

Because of the integrated nature of the products, information can be shared across the log management system. For example, database security events can be sent to Compliance Manager for review or to the Security Event Manager for prioritization and escalation.

What intrigues me about log data management is the increased role it will play in governance of services, workflow and business processes -- both inside and outside of an organization's boundaries. Precious few resources exist to correlate the behavior of business services with underlying systems.

The more log data is made available to the players in a distributed business process, the easier it becomes to detect faults and provide root-cause analysis. The governance benefit works as a two-way street, too. As SLAs and other higher-order governance capabilities point to a need for infrastructure adjustments, the log data trail offers insight and verification.

In short, managed log data is an essential ingredient of any services lifecycle management and governance capability. The lifecycle approach becomes more critical as cloud computing, virtualization, SOA, and CEP grow more common and important.

Lastly, thanks to such technologies as MapReduce, the ability to scour huge quantities of systems log data quickly and with "BI for IT" depth -- at a managed cost -- becomes attainable. I expect to see these "BI for IT" benefits applied to more problems of complexity and governance over the coming years. The cost-benefit analysis is a no-brainer.
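
The "BI for IT" idea can be sketched with a toy in-process map/reduce over log lines; a production system would run the same shape of job on a MapReduce cluster over terabytes rather than four strings:

```python
# Minimal in-process map/reduce: count log lines by severity.
from collections import defaultdict

LOG = [
    "2009-02-17 10:01 ERROR db timeout",
    "2009-02-17 10:02 INFO  web request ok",
    "2009-02-17 10:02 ERROR db timeout",
    "2009-02-17 10:03 WARN  auth retry",
]

def map_phase(lines):
    # emit (severity, 1) for every log line
    for line in lines:
        yield line.split()[2], 1

def reduce_phase(pairs):
    # sum the counts for each severity key
    counts = defaultdict(int)
    for key, n in pairs:
        counts[key] += n
    return dict(counts)

print(reduce_phase(map_phase(LOG)))
# {'ERROR': 2, 'INFO': 1, 'WARN': 1}
```

The map and reduce functions are trivially parallelizable, which is exactly why the pattern scales to log volumes no single analysis box could handle.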

Security Event Manager is available immediately. Compliance manager is available to early adopters immediately and will be generally available in March. Database Security Manager will be available in the second quarter of this year.

More information on the new products is available in LogLogic's screencasts at http://www.loglogic.com/logpower.

Saturday, February 14, 2009

Effective enterprise security begins and ends with architectural best practices approach

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: The Open Group.

Read a full transcript of the discussion.

The Open Group, a vendor- and technology-neutral consortium, in February held its first Security Practitioners Conference in San Diego. A panel of experts was assembled at the conference to examine how enterprise security intersects with enterprise architecture.

Aligning the two deepens the security protection across more planning- and architectural-level activities, to make security pervasive -- and certainly not an afterthought.

To gain deeper insights into how IT architects can bring security and reduced risk to businesses, I queried panelists Chenxi Wang, principal analyst for security and risk management at Forrester Research; Kristin Lovejoy, director of corporate security strategy at IBM; Nils Puhlmann, chief security officer and vice president of risk management of Qualys; and Jim Hietala, vice president of security for The Open Group.

Here are some excerpts:
In a down economy, like we have today, a lot of organizations are adopting new technologies, such as Web 2.0, service-oriented architecture (SOA)-style applications, and virtualization.

... They are doing it because of the economy of scale that you can get from those technologies. The problem is that these new technologies don't necessarily have the same security constructs built in.

Take Web 2.0 and SOA-style composite applications, for example. The problem with composite applications is that, as we're building them, we don't know the source of the widgets. We don't know whether these applications have been built with good, secure design. In the long term, that becomes problematic for the organizations that use them.

It's the same with virtualization. There hasn't been a lot of thought put to what it means to secure a virtual system. There are not a lot of best practices out there. There are not a lot of industry standards we can adhere to. The IT general control frameworks don't even point to what you need to do from a virtualization perspective.

In a down economy, it's not simply the fact that we have to worry about privileged users and our employees ... We also have to worry about these new technologies that we're adopting to become more agile as a business.

There's a whole set of security issues related to cloud computing -- things like compliance and regulation, for example. If you're an organization that is subject to things like the payment card industry data security standard (PCI DSS) or some of the banking regulations in the United States, are there certain applications and certain kinds of data that you will be able to put in a cloud? Maybe. Are there ones that you probably can't put in the cloud today, because you can't get visibility into the control environment that the cloud service provider has? Probably.

There's a whole set of issues related to security compliance and risk management that have to do with cloud services.

We need to shift the way we think about cloud computing. There is a lot of fear out there. It reminds me of 10 years back, when we talked about remote access into companies, VPN, and things like that. People were very fearful and said, "No way. We won't allow this." Now is the time for us to think about cloud computing. If it's done right and by a provider doing all the right things around security, would it be better or worse than it is today?

I'd argue it would be better, because you deal with somebody whose business relies on doing the right thing, versus a lot of processes and a lot of system issues.

Organizations want, at all costs, to avoid plowing ahead with architectures without considering security upfront, and then dealing with the consequences. You could probably point to some of the recent breaches and draw the conclusion that maybe that's what happened.

Security to me is always a part of quality. When the quality falls down in IT operations, you normally see security issues popping up. We have to realize that the malicious potential and the effort put in by some of the groups behind these recent breaches are going up. It has to do with resources becoming cheaper, with the knowledge being freely available in the market. This is now on a large scale.

In order to keep up with this we need at least minimum best practices. Somebody mentioned earlier, the worm outbreak, which really was enabled by a vulnerability that was quite old. That just points out that a lot of companies are not doing what they could do easily.

Enterprise architecture is the cornerstone of making security simpler and therefore more effective. The more you can plan, simplify structures, and build in security from the get-go, the more bang you get for the buck.

It's just like building a house. If you don't think about security, you have to add it later, and that will be very expensive. If it's part of the original design, then the things you need to do to secure it at the end will be very minimal. Plus, any changes down the road will also be easier from a security point of view, because you built for it, designed for it, and most important, you're aware of what you have.

Most large enterprises today struggle even to know what architecture they have. In many cases, they don't even know what they have. The trend we see here with architecture and security moving closer together is a trend we have seen in software development as well. It was always an afterthought, and eventually somebody made a calculation and said, "This is really expensive, and we need to build it in."

What we're seeing from a macro perspective is that the IT function within large enterprises is changing. It's undergoing this radical transformation, where the CSO/CISO is becoming a consultant to the business. The CSO/CISO is recognizing, from an operational risk perspective, what could potentially happen to the business, then designing the policies, the processes, and the architectural principles that need to be baked in, pushing them into the operational organization.

From an IT perspective, it's the individuals who are managing the software development release process, the people that are managing the changing configuration management process. Those are the guys that really now hold the keys to the kingdom, so to speak.

... My hope is that security and operations become much more aligned. It's hard to distinguish today between operations and security. So many of the functions overlap. I'll ask you again, changing configuration management, software development and release, why is that not security? From my perspective, I'd like to see those two functions melding.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: The Open Group.

View more podcasts and resources from The Open Group's recent conferences and TOGAF 9 launch:

The Open Group's CEO Allen Brown interview

Live panel discussion on enterprise architecture trends

Deep dive into TOGAF 9 use benefits

Reporting on the TOGAF 9 launch

Panel discussion on cloud computing and enterprise architecture

Access the conference proceedings

General TOGAF 9 information

Introduction to TOGAF 9 whitepaper

Whitepaper on migrating from TOGAF 8.1.1 to version 9

TOGAF 9 certification information

TOGAF 9 Commercial Licensing program information

Friday, February 13, 2009

Interview: Guillaume Nodet and Adrian Trenaman on Apache ServiceMix and role of OSGi in OSS clouds

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: Progress Software.

Read a full transcript of the discussion.

Apache Software Foundation open source projects, OSGi, service-oriented architecture (SOA) developments, and cloud computing trends are converging. The do more for less mandate of the day is accelerating interest in how these open source technologies and deployment models can work well together.

As SOA and open-source projects have already collided, and as OSGi is gaining favor as the container model du jour, it makes a great deal of sense to apply these software advances to the need for higher productivity and lower total costs on the business side. Open source on-premises enterprise clouds that can interact well with open standards-oriented third-party clouds may well become the de facto boundaryless services fabric approach. [Access other FUSE community podcasts.]

To discern how open source infrastructure trends saddle up to the private cloud hubbub, I recently talked with some thought leaders and community development leaders to assess the possible patterns of adoption. I interviewed Guillaume Nodet, software architect at Progress Software and vice president of Apache ServiceMix at Apache, and Adrian Trenaman, distinguished consultant at Progress Software.

Here are some excerpts:
Trenaman: I think open source becomes a very natural and desirable approach in terms of the technologies that you are going to use in terms of accessing the cloud and actually implementing services on the cloud. Then, in order to get those services there in the first place, SOA is pivotal. The best practices and designs that we got from the years we have been doing SOA certainly come into play there.

Certainly, you could always see the ESBs being sort of on the periphery of the cloud, getting data in and out. That's a clear use case. There is something a little sweeter, though, about Apache ServiceMix, particularly ServiceMix 4.0, because it's absolutely geared for dynamic provisioning.

You can imagine having an instance of ServiceMix 4.0 that you know is maybe just an image that you are running on several virtual machines. The first thing it does is contact a grid controller and says, “Well, okay, what bundles do you want me to deploy?” That means we can actually have the grid controller farming out particular applications to the containers that are available.

If a container goes down, then the grid controller will restart applications or bundles on different computing resources. With OSGi at the core of ServiceMix, at the core of the ESB, that's a step forward now in terms of dynamic provisioning, really toward an autonomic computing infrastructure.
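
The provisioning pattern Trenaman describes can be simulated in a few lines. This is a conceptual sketch only; the class and method names are invented for illustration and are not ServiceMix or OSGi APIs:

```python
# Toy grid-controller simulation: containers register for work,
# and a failed container's bundles are re-homed on a survivor.
class GridController:
    def __init__(self, bundles):
        self.pending = list(bundles)  # bundles awaiting a home
        self.assignments = {}         # container -> [bundles]

    def register(self, container):
        # a new container contacts the controller and receives work
        self.assignments[container] = self.pending
        self.pending = []

    def container_failed(self, container):
        # reclaim the dead container's bundles and re-home them
        # (toy code: assumes at least one surviving container)
        orphans = self.assignments.pop(container)
        survivor = next(iter(self.assignments))
        self.assignments[survivor].extend(orphans)

grid = GridController(["billing-service", "audit-service"])
grid.register("node-a")
grid.register("node-b")            # nothing left to assign
grid.container_failed("node-a")
print(grid.assignments)  # {'node-b': ['billing-service', 'audit-service']}
```

The interesting property is that the containers themselves stay generic images; all placement decisions live in the controller, which is what makes the provisioning dynamic.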

... For me, what OSGi gives us is clearly a much better plug-in framework, into which we can drop value-added services and into which we can extend. I think the OSGi framework is great for that, as well as for management, maybe moving toward grid computing. The stuff that we get from OSGi allows us to be far more dynamic in the way we provision services.

Nodet: Another thing I just want to add about ServiceMix 4.0, complementing what Adrian just said, is that ServiceMix has split into several sub-projects. One of them is ServiceMix Kernel, which is an OSGi-enhanced runtime that can be used for provisioning applications, and this container is able to deploy virtually any kind of artifact. So, it can support Web applications, and it can support JBI artifacts, because the JBI container reuses it, but you can really deploy anything that you want.

So, this piece of software can really be leveraged in a cloud infrastructure to deploy virtually any application that you want. It could be plain Web services without using an ESB, if you don't have such a need. So it's really pervasive.

... ServiceMix has long been a way that you can distribute your SOA artifacts. ServiceMix is an ESB and by nature, it can be distributed, so it's really easy to start several instances of ServiceMix and make them seamlessly talk together in a high availability way.

The thing that you do not really see yet is all the management and all the monitoring stuff that is needed when you deploy in such an architecture. So ServiceMix can really be used readily to fulfill the core infrastructure.

ServiceMix itself does not aim at providing all the management tools that you could find from either commercial vendors or even open-source. So, on this particular topic, ServiceMix, backed by Progress, is bringing a lot of value to our customers. Progress now has the ability to provide such software.

Trenaman: We recently finished a project in mobile health, where we used ServiceMix to take information from a government health backbone, using HL7-formatted messages, and get that information onto the PDAs of health-care officials like doctors and nurses. So this is a really, really interesting use case in the healthcare arena, where we've got ServiceMix in deployment.

It’s used in a number of cases as well for financial messaging. Recently, I was working with a customer, who hoped to use ServiceMix to route messages between central securities depositories, so they were using SWIFT messages over ServiceMix. We’re getting to see a really nice uptake of new users in new areas, but we also have lots of battle-hardened deployments now in production.

... OSGi is the state of the art in terms of deployment. It really is what we've all wanted for years. I've lost enough follicles on my head fixing class-path issues and that kind of class-path hell.

OSGi gives us a badly needed packaging system and a component-based modular deployment system for Java. It piles in some really neat features in terms of life cycle -- being able to start and shut down services, define dependencies between services and between deployment bundles, and also then to do versioning as well.

The ability to have multiple versions of the same service in the same JVM with no class-path conflicts is a massive success. What OSGi really does is clean up the air in terms of Java deployment and Java modularity. So, for me, it's an absolute no-brainer, and I have seen customers who have led the charge on this. This modular framework is not necessarily something that the industry is pushing on the consumers. The consumers are actually pulling us along.
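
The versioning behavior described here rests on OSGi's Import-Package version ranges: each consumer declares the range it can accept, and the framework wires it to a matching bundle version. A toy resolver, heavily simplified from real OSGi semantics (no qualifiers, no uses-constraints), shows how two versions can coexist and each be matched:

```python
# Toy resolver for OSGi-style version ranges like "[1.0,2.0)".
def parse(v):
    return tuple(int(x) for x in v.split("."))

def in_range(version, rng):
    """'[' / ']' are inclusive bounds; '(' / ')' are exclusive."""
    lo, hi = rng[1:-1].split(",")
    v = parse(version)
    lo_ok = v >= parse(lo) if rng[0] == "[" else v > parse(lo)
    hi_ok = v <= parse(hi) if rng[-1] == "]" else v < parse(hi)
    return lo_ok and hi_ok

available = ["1.2.0", "2.1.0"]  # two versions live side by side

def resolve(rng):
    # pick the highest available version inside the requested range
    matches = [v for v in available if in_range(v, rng)]
    return max(matches, key=parse) if matches else None

print(resolve("[1.0,2.0)"))  # 1.2.0 -> old consumer keeps the 1.x API
print(resolve("[2.0,3.0)"))  # 2.1.0 -> new consumer gets 2.x
```

Because each bundle gets its own class loader wired to its resolved imports, the two consumers above never see each other's version, which is the "no class-path conflicts" win Trenaman describes.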

I have worked with customers who have been using OSGi for the last year-and-a-half or two years, and they are making great strides in terms of making their application architecture clean and modular and very easy and flexible to deploy. So, I’ve seen a lot of goodness come out of OSGi and the enterprise.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: Progress Software.

Thursday, February 12, 2009

WSO2 announces componentized framework for expansive SOA deployment and integration

A full and componentized service-oriented architecture (SOA) framework is the latest offering from WSO2, the open-source SOA platform provider.

The Mountain View, Calif. company has announced the general availability of WSO2 Carbon, which will allow users to deploy only the components they need and simplify middleware integration. [Disclosure: WSO2 is a sponsor of BriefingsDirect podcasts.]

It's amazing to me that this much SOA and web development and deployment technology is available in open source. It's really an impressive feat, with many parties around the world responsible, to produce so much code in a fairly brief time. Congrats to the effort, and the whole Apache model.

Built on the increasingly popular OSGi specification, the framework is accompanied by four related products:
  • WSO2 Web Services Application Server (WSAS) 3.0
  • WSO2 Enterprise Service Bus (ESB) 2.0
  • WSO2 Registry
  • WSO2 Business Process Server (BPS)
The Carbon framework provides such enterprise capabilities as management, security, clustering, logging, statistics, and tracing. Also included is a "try it" testing function. Developers can deploy, manage, and view services from a graphical unified management console.

The componentized OSS platform changes the way developers implement SOA middleware. They no longer need to download both the WSAS and ESB as separate products. They can, for example, start with the ESB, which includes the framework, and then add the other functionality as components.

The components of the Carbon platform are based on Apache Software Foundation projects, including Apache ODE, Axis2, Synapse, Tomcat, and Axiom, among many other core libraries. Other key features include:
  • Full registry/repository integration that allows a complete distributed Carbon fabric to be driven from a central WSO2 Registry instance.
  • Eventing support, including a WS-Eventing Broker, to support event-driven architectures (EDA).
  • WS-Policy Editor for defining Web service dependencies and other attributes.
  • Transactional support for JMS and JDBC, facilitating error handling for services and ESB flows.
  • Transport management control for all services.
  • Active Directory and LDAP support across all products, providing integration into existing user stores including Microsoft environments.
WSAS 3.0 offers enhanced flexibility for configuring SOAs. Developers can separate the administration console logic from the service-hosting engine of WSAS 3.0, making it possible to use a single front-end server to administer several back-end servers simultaneously.

Other enhancements in WSAS 3.0 include:
  • XSLT-to-XQuery transformation for Java and Data Services.
  • An enhanced administration user interface.
  • A WS-Policy Editor to configure services using the W3C standard.
  • Improved support for Microsoft Active Directory, allowing administrators to integrate WSAS into existing user management infrastructure.
ESB 2.0 allows developers to plug in extra components to handle tasks like service hosting, business process management and SOA governance without disrupting existing flows and configuration. Developers can also separate the management console logic from the ESB routing and transformation engine of the ESB 2.0, making it possible to use a single front-end management console to administer several back-end ESB instances simultaneously.

Other key features of the WSO2 ESB 2.0 include:
  • Enhanced sequence designer, which lets users develop ESB flow logic using a wide variety of built-in mediators, as well as customer-provided code.
  • An enhanced proxy service wizard, which provides the ability to create a robust proxy service using simple editors to configure the behavior.
  • Support for events
  • A new security management wizard.
Registry 2.0 includes significant improvements to the publication and management of WSDL-based services. It lets users define custom lifecycles with conditional state transitions. Additionally, it offers well-defined extension points for a flexible, plug-in approach to linking resources and allowing users to encode their own governance rules and policies.
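A custom lifecycle with conditional state transitions is essentially a guarded state machine. The sketch below is illustrative only, not the WSO2 Registry API; the lifecycle stages and the guard conditions are hypothetical, chosen to show how a promotion can be blocked until a resource satisfies a governance rule.

```python
# Illustrative sketch (not the WSO2 Registry API): a resource lifecycle
# modeled as a state machine whose transitions are guarded by conditions.

class Lifecycle:
    def __init__(self, initial, transitions):
        # transitions: {(from_state, event): (to_state, guard)}
        self.state = initial
        self.transitions = transitions

    def fire(self, event, resource):
        to_state, guard = self.transitions.get((self.state, event), (None, None))
        if to_state is None or not guard(resource):
            return False  # no such transition, or guard condition not met
        self.state = to_state
        return True

# Hypothetical governance rule: a service may only reach "production"
# once it has both a WSDL and a security policy attached.
lc = Lifecycle("development", {
    ("development", "promote"): ("testing", lambda r: "wsdl" in r),
    ("testing", "promote"): ("production",
                             lambda r: "wsdl" in r and "policy" in r),
})

svc = {"wsdl": "order.wsdl"}
lc.fire("promote", svc)         # development -> testing
print(lc.state)                 # testing
print(lc.fire("promote", svc))  # False: no security policy attached yet
svc["policy"] = "sec.xml"
lc.fire("promote", svc)
print(lc.state)                 # production
```

The extension points the announcement mentions would correspond here to supplying your own guard functions, which is how users could encode their own governance rules.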

WSO2 BPS, powered by the Apache ODE BPEL engine, provides a full BPEL runtime, deploys business processes written following the WS-BPEL 2.0 and BPEL4WS 1.1 standards, and manages BPEL packages, processes and instances. Other key features include:
  • Eclipse BPEL support, including the ability to work with Eclipse BPEL tooling and the availability of a plug-in to deploy Eclipse-developed processes in WSO2 BPS.
  • Caching and throttling support for business processes to ensure optimal performance and availability.
  • Shutdown/restart support, which allows the administrator to suspend, resume and terminate processes.
  • Transport management allowing simple configuration of JMS, Mail, File and HTTP transports.
  • Full security via the core Carbon framework, including authentication and authorization, with full support for WS-Trust, WS-Security and WS-SecureConversation.
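The caching and throttling bullet above deserves a concrete picture. One common way to implement request throttling is a token bucket; the sketch below is an illustrative technique, not how WSO2 BPS actually implements it, and the parameters are made up.

```python
# Token-bucket throttling sketch (illustrative; not the WSO2 BPS
# implementation): admit a burst up to `capacity` requests, then
# sustain `refill_per_sec` requests per second.

class TokenBucket:
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # reject (or queue) the process-instance request

bucket = TokenBucket(capacity=2, refill_per_sec=1)
print([bucket.allow(t) for t in (0.0, 0.1, 0.2, 1.3)])
# [True, True, False, True]
```

The third request arrives before the bucket has refilled and is rejected; by t=1.3 a full second of refill has accrued and requests are admitted again, which is exactly the "optimal performance and availability" trade-off throttling is meant to enforce.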
Four products based on Carbon are available for download today from http://wso2.com: WSAS 3.0, ESB 2.0, Registry 2.0, and the new WSO2 Business Process Server 1.0. Developers need to download one of the four products in order to get the core Carbon framework and unified management console that drive all of the components.

Individual components will be available within one month, allowing developers to simply add new capabilities to any of the core products as needed. Componentized versions of the WSO2 Mashup Server and WSO2 Data Services are expected to roll out in mid-2009.

Incidentally, in October, a new data services offering arrived from WSO2 that allows a database administrator (DBA) or anyone with knowledge of SQL to access enterprise data and expose it to services and operations through a Web services application-programming interface (API).
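The essence of that data services idea, mapping a named SQL query to a service operation that returns an XML payload, can be sketched in a few lines. This is illustrative only: the WSO2 product is configuration-driven rather than coded this way, and the table, query name, and response element names below are all invented.

```python
# Illustrative sketch of the data-services idea (not the WSO2 product):
# a named SQL query is exposed as a service operation returning XML.
import sqlite3
from xml.sax.saxutils import escape

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
db.executemany("INSERT INTO customers VALUES (?, ?)",
               [(1, "Acme"), (2, "Globex")])

# Hypothetical "data service" definition: operation name -> SQL query.
QUERIES = {"getCustomer": "SELECT id, name FROM customers WHERE id = ?"}

def invoke(operation, *params):
    """Run the named query and wrap the rows in an XML response."""
    rows = db.execute(QUERIES[operation], params).fetchall()
    body = "".join(
        f"<row><id>{r[0]}</id><name>{escape(r[1])}</name></row>" for r in rows)
    return f"<{operation}Response>{body}</{operation}Response>"

print(invoke("getCustomer", 2))
# <getCustomerResponse><row><id>2</id><name>Globex</name></row></getCustomerResponse>
```

The appeal for a DBA is visible here: the only artifact they author is the SQL in the query map; the service plumbing around it is generic.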

TOGAF 9 advances IT maturity while offering more paths to architecture-level IT improvement

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: The Open Group.

Read a full transcript of the discussion.

The Open Group, a vendor- and technology-neutral consortium, in early February delivered TOGAF 9, an enterprise architecture framework. TOGAF 9 represents a departure for enterprise architecture frameworks in general.

It's larger, more mature, and modular to allow folks to enter it from a variety of perspectives. It takes on a much more significant business services and accomplishments perspective. [Read more on a panel discussion about the importance of enterprise architecture.]

While IT practitioners and architects will be looking over TOGAF 9 deeply, it’s also going to be of interest to the business side of the enterprise and offers a way for them to understand more about how IT can service their business needs. [I also last week spoke to Allen Brown, the CEO of The Open Group on trends in IT.]

To gain deeper insights into how IT architects can bring value to businesses, I recently interviewed two TOGAF experts, Robert Weisman, CEO and principal consultant for Build The Vision, and Mike Turner, an enterprise architect at Capgemini.

Here are some excerpts:
I can see architecture being an integral part of the business planning process. It structures the business plans and makes sure that the objectives are realizable. In other words, we can use the acronym SMART, specific, measurable, actionable, realizable, and time-bound. What TOGAF 9 does is provide an overarching vision and capability with which to cooperate.

I see architecture as a set of tools and techniques that can help you achieve what you want to do as a business. Taking architecture in isolation is not necessarily going to achieve the right things for your organization, because you actually need to have the direction as an input for architecture to support achievement of a particular outcome.

Architecture is really a vital tool for ensuring that the correct business outcome is achieved. You need to have a structured approach to how you define the problem space that businesses are facing, then define the solution space, and define how you move from where you are right now to where you want to be.

TOGAF 9, first of all, is more business focused. Before that it was definitely in the IT realm, and IT was essentially defined as hardware and software. The definition of IT in TOGAF 9 is the lifecycle management of information and related technology within an organization. It puts much more emphasis on the actual information, its access, presentation, and quality, so that it can provide not only transaction processing support, but analytical processing support for critical business decisions.

The gestation took five years. I've been part of the forum for five years working on TOGAF 9. Part of the challenge was that we had such an incredible uptake of TOGAF 8. Once a standard has been taken up, you can’t change it on a dime. You don’t want to change it on a dime, but you want to keep it dynamic, update it, and incorporate best practices. That explains some of the gestation period. TOGAF 8 was very successful, and getting TOGAF 9 right took a little longer, but I think it’s been well worth the wait.

If you look at the industry in general, we're going through a process where the IT industry is maturing and becoming more stable, and change is becoming more incremental in the industry. What you see in architecture frameworks is a cycle of discovery, invention, and then consolidation that follows, as consensus is reached.

One thing that’s really key about TOGAF 9 is that it takes a lot of ideas and practices that exist within individual organizations or proprietary frameworks, builds a consensus around them, and releases them into a public-domain context.

Once that happens, the value you can get from that approach increases exponentially. Now, you're not talking about going to one vendor and having to deal with one particular set of concepts, and then going to a different vendor and having to deal with another set of concepts, and dealing with the interoperability between those.

You're in a situation where the industry agrees this is the way to do things. Suddenly, the economies of scale that you can get from that, as all the participants in the industry start to converge on that consensus, mean that you get a whole set of new opportunities for how you can use architecture.

TOGAF 9 is, in certain ways, an evolutionary change and in certain other ways a revolutionary change. The architecture development methodology has remained similar. However, transforming the architecture from concept into reality has been expanded dramatically, with a great many lessons learned. So, architecture transformation is a large one. Various architectural frameworks have been incorporated into it.

A great many concepts that allow enterprise architecture to be melded with operations management, system design, portfolio management, business planning, and the IT Governance Institute's COBIT guidelines and other industry standards have also been incorporated into TOGAF.

Also, there's been a major contribution by such companies as Capgemini, with respect to artifacts and structure. The content meta model is a huge contribution.

The term SOA is old wine in new bottles. It's been around for a long time. If you just have a service catalog, if you have duplicate services, it becomes very evident. That’s one of the advantages of the repositories -- you can have an insight into what you actually have.

TOGAF, from its outset in the early 1990s, has been service oriented in that regard. Just by applying TOGAF, you have a chance of doing your gap analysis and of having visibility into what you have, which makes it not only efficient, but effective from a business perspective.

TOGAF allows you to understand what makes your business good and then identify what your services are in a way that considers all the different angles. Once that’s defined, you can then put the right technology underneath that to realize what the business is actually looking for. That’s something that can have an absolutely transformational effect on your business.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: The Open Group.

View more podcasts and resources from The Open Group's recent conferences and TOGAF 9 launch:

The Open Group's CEO Allen Brown interview

Live panel discussion on enterprise architecture trends

Reporting on the TOGAF 9 launch

Panel discussion on security trends and needs

Panel discussion on cloud computing and enterprise architecture

Access the conference proceedings

General TOGAF 9 information

Introduction to TOGAF 9 whitepaper

Whitepaper on migrating from TOGAF 8.1.1 to version 9

TOGAF 9 certification information

TOGAF 9 Commercial Licensing program information

Who makes most rain from IBM-Amazon cloud deal? Oracle.

It just goes to show how perplexing the gathering cloud marketplace is: even the punditry are hesitant to meaningfully analyze this important development.

But Amazon -- with a history of bold and long-term bets -- and IBM -- with a history of making markets whether it's right about the future or not -- have made the best of bedfellows in the deal announced late Wednesday.

Why? Because IBM has taken the cloud bait ... big time. And Amazon has just partnered with the best enterprise IT channel on Earth.

Together they form the irresistible gravitational black hole from which Microsoft cannot escape. And Google is building another black hole right next door. And so is Salesforce.com.

Can Microsoft change universal physics? Not likely.

What this deal means is that Microsoft will need to adopt the cloud model all the more quickly and comprehensively -- across its software lines, not just a few. It's going to be Live Stack, not just Live Mesh. It's going to be buy once, run any which way.

It's build your on-premises cloud on IBM and insure against peaks and troughs with the elastic AWS hand-off. No more 20 percent utilization on umpteen licensed servers to guarantee reliability (I'm talking to you, Exchange Server).

And given the IBM license pincer move, the fungible enterprise-AWS licensing scheme will help shrink Microsoft's margins all the faster and deeper. IBM will be selling cloud economics against Microsoft Software Assurance economics with a world-class and hungry sales force. Ouch.

You see, IBM can monetize across more business types -- hardware, storage, professional services, systems integration, infrastructure software, groupware software, specialized outsourcing and applications, and a lot more. Microsoft not so. IBM can adopt the cloud aggressively and find new innovative models from its diversified portfolio. Microsoft is hoping for the best with developers more than the operators, because it has no choice (and Wall Street knows it).

I don't expect Microsoft to do any similar deal with any cloud provider other than itself. IBM, on the other hand, can do similar deals with any cloud provider with the chops to produce the reliable and cost-effective compute fabric that's open to its SuSE Linux stack. The more clouds the better for IBM, while Microsoft will compete against those clouds. It's the MS-DOS license deal in reverse at a higher abstraction.

Amazon, too, can tee up any number of enterprise software providers to channel the humongous global enterprise software market to AWS ... from a trickle to a stream to (who knows?). But that's still good money. And Amazon has a huge and growing lead in the ecumenical cloud department.

So we come to Oracle. Larry Ellison's entertaining position on cloud is a hedge. He knows the substantial cloud economy is inevitable, and he knows it's at least 10 years in the making. And he knows the transition will be ugly and bloody.

Best to let those two old antagonists IBM and Microsoft beat the crap out of each other, with Amazon as Burgess Meredith's Mickey Goldmill to IBM's Rocky and Microsoft's Apollo Creed. Then the build, buy or partner decision can be made by Oracle after all the money has been taken from the traditional enterprise data, applications, development, infrastructure and integration model (which has plenty of legs).

It's too soon to tell whether the rainmaker-enabled marketplace approach of IBM (remember Java, Linux, n-tier) will beat out the shoot-for-the-moon strategy of Microsoft when it comes to the cloud. But I like Oracle's margins better through 2016 as the battle ensues.

Monday, February 9, 2009

Strong IT architecture doubly important in tough economic times, says Open Group expert panel

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: The Open Group.

Read a full transcript of the discussion.

The Open Group, a vendor- and technology-neutral consortium, last week delivered TOGAF 9, an enterprise architecture framework. As part of the festive opening ceremony for TOGAF 9's arrival, a panel of experts examined the value and role of enterprise IT architecture in light of a dynamic business environment.

The topics also addressed how IT can better communicate and collaborate with the business interests around them. To gain deeper insights into how IT architects can bring value to businesses, I had the pleasure of moderating the panel discussion at The Open Group's 21st Enterprise Architecture Practitioners Conference in San Diego on Feb. 2. [See a related interview with The Open Group CEO Allen Brown.]

Panelists included Tony Baer, senior analyst at Ovum; Janine Kemmeren, enterprise architect at Getronics Consulting and chair of the Architecture Forum Strategy Working Group in The Open Group; Chris Forde, vice president and technology integrator at American Express and chair of the Architecture Forum in The Open Group; Jane Varnus, architecture consultant for the enterprise architecture department at the Bank of Montreal; and Henry Peyret, principal analyst at Forrester Research.

Here are some excerpts:
The degree of change that we're seeing in the economy, and its implications for businesses, is enormous -- Nick used the phrase "tsunami" during his presentation earlier today, and that’s really not an overstatement. What you have to do is keep your eye on the ball, and the ball is not enterprise architecture. The ball is where the business needs to manage and operate itself effectively.

When the rules change, you can’t just reach back into the same old bag of tricks around architecture. You have to sit down with your partner and say, "Okay, what has changed? Why has it changed, and how do we respond to this?" You need good people with good heads on their shoulders to be able to do that.

... There are a lot of issues with the way IT operates. But in having a conversation about enterprise architecture and moving the business ... We don’t want to have the conversation about architecture. We want to have the conversation about what it is that’s going to make their business more effective. Some of those issues may be inter-business unit related, not specific to IT, and that’s a good conversation to have.

... The problem that IT has had perennially is that we have over promised, we have under delivered, and we have overcharged. The whole idea of adopting more consistent practices is that hopefully you can avoid having to reinvent the wheel every time and stop making all those damn mistakes.

The thing we can’t do is go back to the business and start talking technology to them. They're not interested in how we support them. What they're interested in is that we should, at a reasonable cost, be reasonably flexible, be absolutely reliable, and be creative. Lag is a big problem. We have to address their concern that we are a partner who is responsive.

So, my short advice is that we have to learn to talk to the business better in their terms, become more tuned in, translate whatever solution we have, and express it back in the terms of that problem. I don’t know what that problem would be in anyone else’s business, but don't mention SOA and don't mention the cloud.

One thing that's probably going to be useful is a degree of transparency into the IT function. When the business clearly understands what’s driving the quotes coming back to them, they're in a better position to determine what kind of investments they really need to make. In the course of developing that transparency, it causes IT to be more introspective about the way it operates.

There’s a certain set of conversations that needs to occur about how effective the IT operation actually is. This is also in context with other business units. We talk about IT as if it's separate from the business, when, in fact, it's a component of our business operation just like others. It has a certain level of importance and a relationship to certain types of technology, but it isn’t the be all and end all.

We just have to get into a better conversation with the business partners about what’s driving the behaviors in IT, and transparency is one way to do that.

The new way to demonstrate value is to explain that we will be able now to make something faster in terms of time to market, time to design, and time to deliver. All of those things are what we call key agility indicators.

It's the flexibility aspect, again, but not the flexibility that every IT provider is talking about. Why? Because they are not defining what type of flexibility they are talking about. We need to specify a key agility indicator at a business level.

We need also to assess our process to say that perhaps we need to deliver that in three months. Unfortunately, our current process and systems are able to deliver that only in five months. How could we shorten that? How could we bring in new practices and new ways to do that, or perhaps a new technology?

At the end of the day, that’s what enterprise architecture is all about. It's not about devising frameworks. It's about making your performance consistent, rational, and understandable.

What I think we have to fear is lag and inertia. That’s what we really have to fear. One of the things I have actually been very cheered about with TOGAF 9 is that it's taken some important steps in the right direction, in terms of making the practice and the learning of enterprise architecture more accessible, and it's modularized things.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: The Open Group.

View more podcasts and resources from The Open Group's recent conferences and TOGAF 9 launch:

The Open Group's CEO Allen Brown interview

Deep dive into TOGAF 9 use benefits

Reporting on the TOGAF 9 launch

Panel discussion on security trends and needs

Panel discussion on cloud computing and enterprise architecture

Access the conference proceedings

General TOGAF 9 information

Introduction to TOGAF 9 whitepaper

Whitepaper on migrating from TOGAF 8.1.1 to version 9

TOGAF 9 certification information

TOGAF 9 Commercial Licensing program information

Interview: The Open Group's Allen Brown on advancing value of enterprise IT via architecture

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: The Open Group.

Read a full transcript of the discussion.

The Open Group, a vendor- and technology-neutral consortium, last week delivered TOGAF 9 at the organization's 21st Enterprise Architecture Practitioners Conference in San Diego.

At the juncture of this new major release of the venerable enterprise IT architecture framework, it makes sense to examine the present and future of The Open Group itself, some of its goals, and what else it does for its members.

The global organization is actively certifying thousands of IT architecture practitioners, while using the commercial license to increase the contributor flow of best architecture practices back into TOGAF. Think of it as open source for best architecture principles and methods.

To better understand how The Open Group operates and drives value to its members, I recently interviewed Allen Brown, president and CEO of The Open Group.

Here are some excerpts:
The role of architecture is more important right now because of the complexity, because of the need to integrate across organizations and with business partners. You've got a situation where some of the member companies are integrated with more than a thousand other business partners. So, it's difficult to know where the parameters and boundaries of the organization are.

If you've trained everyone within your organization to use TOGAF, they're all speaking a common language and they're using a common approach. It's a common way of doing things. If you're bringing in systems integrators and contractors, and they are TOGAF certified also, they've got that same approach. If you're training people, you can train them more easily, because everyone speaks the same language.

One member I was talking to said that they've got something like 500,000 individuals inside their infrastructure that are not their own staff. So this is a concern that's becoming top of mind for CIOs: Who's in my infrastructure, and what are they doing?

We've got, on one hand, the need for enterprise architecture to actually understand what's going on, to be able to map it, to be able to improve the processes, to retire the applications, and to drive forward on different processes. We've also got the rising need for security and security standards. Because you're integrated with other organizations, these need to be common standards.

... Security is now becoming top of mind for many CIOs. Many of them have the integration stuff sorted out. They've got processes in place for that, and they know how they're going to move forward with enterprise architecture. They're looking for more guidance and better standards -- and that's why TOGAF 9 is there.

We're now looking at other areas. We always look at new areas and see whether there is something unique that The Open Group could contribute where we can collaborate with other organizations and where we can actually help move things forward.

We're looking at cloud. We don't know if it's something that we can contribute to at this point, but we're examining it and we will continue to examine it.

The Open Group is broader than just enterprise architecture. The architecture forum is one of a number of forums including Security/Identity Management, the Platform, the UNIX standards, Real-Time and Embedded Systems, Enterprise Management Standards, and so forth. A lot of attention has been focused on enterprise architecture, because of the way that TOGAF has contributed and the professional standards it has raised.

TOGAF 9 really needed to add some more to TOGAF 8. In March 2007, I did a survey by talking to our members -- really just asking them open-ended questions. What are the key priorities you want from the next version of TOGAF? They said, "We need better guidance on how to align with our business and be able to cascade from that business down to what the IT guys need to deliver. We need more guidance, we need it simpler to use."

TOGAF 8 was very much focused on giving guidance on how to do enterprise architecture, and the key thing was the architecture development method. What they've done now is provide more guidance on how to do it, make it more modular, and make it easier to consume in bite-sized chunks.

Those were the two key driving forces behind where we were going, a more modular structure, and things like that. Trying to do those things, the members focused on how to bring that forward, and it's taken a lot of work.

Then they've added other things like a content framework. The content framework provides a meta model for how you can map from the architecture development method, but it also provides the foundation for tools vendors to construct tools that would be helpful for architects to work with TOGAF.

There are a couple of other things that we've done. First, we've introduced the IT Architect Certification (ITAC) program. That provides a certification to say not only that this person knows how to do architecture, but can demonstrate it to their peers.

... We've had to deal with much larger numbers of members and contributors, but it's not just TOGAF. It's not just a case of having a framework, a method, or a way of helping organizations do enterprise architecture. We're also concerned with raising the level of professionalism.

The ITAC certification is agnostic on method and framework. You don't have to know TOGAF to do that, but you have to be able to convince a board-level review that you do have experience and that you're worthy of being called an IT architect.

It requires a very substantial resume, and a very substantial review by peers to say that this person actually does know, and can demonstrate they've got the skills to do IT architecture.

If you can imagine a large consortium where you've got 300 member organizations -- which is a lot of people at the end of the day -- where everyone is contributing something and a smaller number is doing the real heavy lifting, you've got to get consensus around it. They have done a huge amount of work.

There is a capability framework, not a maturity model, but a way of helping folks to set up their capability. There are a lot of things now in TOGAF 9 that build on the success of TOGAF 8; it has taken a huge amount of work by our members.

The great thing about TOGAF 9 is that we've had such a great reception from the analysts, bloggers, and so on. Many of them are giving us recommendations, and they say, "This is great, and here are my recommendations for where you go."

We've got to gather a lot of that together, and the architecture forum, the members, will take a look at that and then figure out where the plan goes. I know that they're going to be working on things more general, as well as TOGAF in the architecture space.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: The Open Group.

View more podcasts and resources from The Open Group's recent conferences and TOGAF 9 launch:

Live panel discussion on enterprise architecture trends

Deep dive into TOGAF 9 use benefits

Reporting on the TOGAF 9 launch

Panel discussion on security trends and needs

Panel discussion on cloud computing and enterprise architecture

Access the conference proceedings

General TOGAF 9 information

Introduction to TOGAF 9 whitepaper

Whitepaper on migrating from TOGAF 8.1.1 to version 9

TOGAF 9 certification information

TOGAF 9 Commercial Licensing program information

Citrix brings high definition to virtual desktops with XenDesktop 3 and HDX

Citrix Systems last week delivered a one-two punch in the battle for virtual desktop infrastructure (VDI) differentiation with the introduction of XenDesktop 3 and Citrix HDX high-definition technology, promising to lower costs associated with servers and storage in the data center.

XenDesktop 3, a key component of the Santa Clara, Calif., company's Citrix Delivery Center, now incorporates several of the HDX technologies, providing a richer multimedia experience for users and increasing the number of desktops per server. According to the announcement, the latest version of XenDesktop can also host twice as many virtual desktops from a single server.

Also, the new version can deliver Microsoft Windows desktops from a common set of centrally managed images that can be run either as a hosted application in the data center or locally on a PC or thin-client device.

Another feature is the HDX media streaming capability, by which XenDesktop 3 sends compressed media streams to endpoint devices and plays them locally. This allows IT administrators to have applications run wherever it's most efficient and cost-effective.

I'm still curious about Adobe Flash presentations via XenDesktop VDI. Desktone and Wyse have been working on that for some time. Wyse is also partnering with Citrix, but I didn't see any mention of Flash (perhaps in a genuflection to Microsoft?).

Management of VDI is also simplified in XenDesktop 3 with fully integrated profile management, which provides a consistent, personalized experience for each user every time they log in.

The new features include broad support for smart-card security, which extends virtual desktop capability into those markets -- government, financial services, and healthcare -- that rely on smart-card authentication.

Rounding out the new XenDesktop capabilities is USB plug-and-play capability for transparent support of all types of local devices, including digital cameras, smart phones, MP3 players, and scanners.

I'm a big fan of VDI and think it offers even more in a strapped economy. If netbooks are all the rage, then why not VDI too (or VDI on older PCs that can't run Vista or Windows 7 well)?

Also announced Wednesday was the HDX high-definition technology, which adds enhancements for multimedia, voice, video, and 3D graphics. It also includes “adaptive orchestration” technology that senses underlying capabilities in the data center, network and device, and dynamically optimizes performance across the end-to-end delivery system to fit each unique user scenario. This allows HDX-enabled products to leverage the latest user experience innovations developed by third-party software, server, device and processor partners.

Six categories of HDX technologies work together to provide multimedia capability. These include a broad range of new and existing technologies that extend throughout the Citrix Delivery Center product family.
  • HDX MediaStream – Accelerates multimedia performance by sending compressed streams to endpoints and playing them locally.
  • HDX RealTime – Enhances real-time communications using advanced bi-directional encoding and streaming technologies to ensure a no-compromise end-user experience.
  • HDX 3D – Optimizes the performance of everything from graphics-intensive 2D environments to advanced 3D geospatial applications using software and hardware based rendering in the datacenter and on the device.
  • HDX Plug-n-Play – Enables simple connectivity for all local devices in a virtualized environment, including USB, multi-monitor, printers and user-installed peripherals.
  • HDX Broadcast – Ensures reliable, high-performance acceleration of virtual desktops and applications over any network, including high-latency and low-bandwidth environments.
  • HDX IntelliCache – Optimizes performance and network utilization for multiple users by caching bandwidth intensive data and graphics throughout the infrastructure and transparently delivering them as needed from the most efficient location.
Citrix XenDesktop 3 will be generally available from authorized Citrix partners this month, and from the Citrix website at http://www.citrix.com/xendesktop. Suggested retail pricing begins at $75 per concurrent user.

Monday, February 2, 2009

Open Group debuts TOGAF 9, a free IT architecture framework milestone that allows easier ramp-up, clearer business benefits

As part of the 21st Enterprise Architecture Practitioners Conference here in San Diego this week, The Open Group has delivered TOGAF version 9, a significant upgrade to the enterprise IT architecture framework that adds modularity, business benefits, deeper support via the Architecture Development Method (ADM) for SOA and cloud, and a meta-model that makes managing IT and business resources easier and more coordinated.

One of my favorite sayings is: "Architecture is destiny." This is more true than ever, but the recession and complexity in enterprise IT departments make the discipline needed to approach IT from the architecture level even more daunting to muster and achieve. Oh, and slashed budgets have a challenging aspect of their own.

Yet, at the same time, more enterprise architects are being certified than ever. More qualified IT managers and planners are available for hire. And more dictates such as ITIL are making architecture central, where it belongs, not peripheral. The increased use of SOA, the beginnings of cloud use, and the need for pervasive security also augur well for enterprise architecture (EA) to blossom even in tough times.

TOGAF 9 aims to remove the daunting aspects of EA adoption while heightening both the IT and business value from achieving good methods for applying a defined IT architecture. With a free download and a new modular format to foster EA framework use from a variety of entry points, TOGAF 9 is designed to move. It also begins to form an uber EA framework by working well with other established EA frameworks, for a federated architectural framework benefit. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

I'll be blogging and creating some sponsored podcasts here in San Diego this week at the Enterprise Architecture Practitioners Conference, so look for updates on keynotes, panel discussions and interviews.

I'm especially interested in how architecture and the use of repositories help manage change. This may end up being the biggest financial and productivity payback for those approaching IT systemically, managed via policies and governance.

Well-structured EA repositories of both IT and business meta-model descriptions reduce complexity, add agility, and put organizations in a future-proof position. They can more readily accept and adapt to change -- both planned and unplanned. Highly unpredictable and dynamic business environments clearly benefit from an EA and repository approach.

TOGAF 9 is showing the maturity needed for much wider adoption. The Architecture Development Method (ADM) can be applied to SOA, security, cloud, hybrids, and federated services ecologies. Migration from earlier versions of TOGAF is easy, as is starting fresh along multiple paths into the elements of EA. Indeed, TOGAF 9's modular structure now allows all kinds of organizations and cultures to adapt TOGAF in ways that suit specific situations and IT landscapes.

The Open Group is a vendor-neutral and technology-neutral consortium, and some 7,500 individuals are TOGAF certified. So far, 90,000 copies of the TOGAF framework have been downloaded from The Open Group’s website and more than 20,000 hard copies of the TOGAF series have been sold.

If architecture is destiny, then TOGAF is a philosophy for taking control of your IT destiny. Better for you to take control of your destiny than to have it take control of you, I always say.

Sunday, February 1, 2009

Progress Software's Actional Diagnostics gives developers better view into services integrity

Progress Software has leveraged technology from recently acquired Mindreef's SOAPscope to help detect and mend service integrity issues early in the software development cycle.

The Bedford, Mass. company last week announced the development of Progress Actional Diagnostics. This standalone quality and validation desktop product allows developers to build and test XML-based services, including SOAP, REST, and POX. Once services are identified for use, developers can inspect, invoke, test, and create simulations for prototyping and problem solving. [Disclosure: Progress is a sponsor of BriefingsDirect podcasts.]
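To make the "inspect and test" idea concrete, here is a minimal sketch of what inspecting a POX (plain-old-XML) service response looks like in code. This is not Actional Diagnostics' API -- the class and method names are hypothetical -- just a plain-Java illustration of pulling a value out of an XML payload the way such a tool might during a test:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

// Hypothetical sketch: parse a POX response and inspect one element,
// as a diagnostics test might when validating a service's output.
public class PoxInspector {
    // Parse an XML payload and return the text of the first <status> element.
    public static String extractStatus(String xml) throws Exception {
        DocumentBuilder builder =
            DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new InputSource(new StringReader(xml)));
        return doc.getElementsByTagName("status").item(0).getTextContent();
    }

    public static void main(String[] args) throws Exception {
        // A canned response stands in for a live service invocation.
        String response = "<order><id>42</id><status>shipped</status></order>";
        System.out.println("status = " + extractStatus(response));
    }
}
```

In a real tool the payload would come from invoking the live SOAP or REST endpoint, with the same inspection step applied to the response.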

Progress also last week held its annual analyst conference in Boston, and it was clear from the presentations that the plucky company is expanding its role and its value to the business solutions level.

As a major OEM provider to other software makers, the Progress brand has often played second fiddle to other brands in the Progress stable (some from acquisitions), such as Sonic, Actional, Apama, DataDirect, IONA, and FUSE. But the company is working to make Progress more identifiable, especially at a business level.

Progress, TIBCO and Software AG are the remaining second-tier software infrastructure providers, following a decade-long acquisition spree and consolidation period.

As such, Progress, with annual revenues in the $500 million range, is also setting itself up to move from SOA and SaaS support to taking those capabilities and solutions (and its OEM model) to the cloud model. Among a slew of glowing customer testimonials at the conference last week, EMC showed how a significant portion of its burgeoning cloud offerings is powered by Progress infrastructure products.

I think we can expect more love between EMC and Progress, as well as more Progress solutions (in modular, best-of-breed or larger holistic deployments) finding a place under the hood of more cloud offerings. That will be doubly apparent as larger players like IBM, Oracle, and Microsoft create their own clouds. We're heading into some serious channel conflicts as these clouds compete in a rapidly fracturing market.

I was also impressed with the OSGi support that Progress is bringing to market, something that should appeal to many developers and architects alike.

Back on the product news, Actional Diagnostics includes a new feature called Application X-Ray, which allows developers to see what happens inside their service. For example, they can see how downstream services are being used, what messages are sent on queues, details of Enterprise JavaBean (EJB) invocations, database queries, and other relevant interdependencies along their transaction path.

This helps them identify why tests have failed or why services are not performing as designed, so that a service can be reengineered as needed before it moves to production.
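The underlying idea -- recording each downstream call along a transaction path so failures can be localized -- can be sketched in a few lines of plain Java. The names here (TraceRecorder, traced) are illustrative only, not Actional's implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Hypothetical sketch of call-path tracing: wrap each downstream
// invocation so the full transaction path can be reconstructed later.
public class TraceRecorder {
    private final List<String> path = new ArrayList<>();

    // Run a downstream call, recording its name and elapsed time.
    public <T> T traced(String name, Supplier<T> call) {
        long start = System.nanoTime();
        try {
            return call.get();
        } finally {
            long micros = (System.nanoTime() - start) / 1_000;
            path.add(name + " (" + micros + "us)");
        }
    }

    public List<String> path() { return path; }

    public static void main(String[] args) {
        TraceRecorder trace = new TraceRecorder();
        String order = trace.traced("OrderService.lookup", () -> "order-42");
        trace.traced("InventoryDB.query", () -> 7);
        System.out.println("returned " + order);
        trace.path().forEach(step -> System.out.println("  -> " + step));
    }
}
```

A production tracer would capture this transparently via instrumentation rather than explicit wrapping, but the recorded path is the same kind of artifact a developer would use to see why a test failed.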

In addition, load checking lets users test the performance and scalability of services before they are delivered to a performance testing team. Developers can check dozens of simultaneous threads or users per service, and monitor CPU utilization and Java VM memory usage. These are the kinds of integrity backstops that will be in high demand in the cloud and for PaaS buildouts.
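A developer-level load check of this kind can be approximated with stock Java concurrency utilities. This sketch (my own, not the product's) fires a number of concurrent "users" at a call and collects per-call latencies; a Thread.sleep stands in for the real service invocation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Hypothetical sketch of a simple load check: N concurrent callers,
// each timing one invocation of the service under test.
public class LoadCheck {
    // Invoke `call` from `users` concurrent threads; return latencies in ms.
    public static List<Long> run(int users, Runnable call) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(users);
        try {
            List<Future<Long>> futures = IntStream.range(0, users)
                .mapToObj(i -> pool.submit(() -> {
                    long start = System.nanoTime();
                    call.run();
                    return (System.nanoTime() - start) / 1_000_000;
                }))
                .collect(Collectors.toList());
            List<Long> latencies = new ArrayList<>();
            for (Future<Long> f : futures) latencies.add(f.get());
            return latencies;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // Simulated service call: sleep ~20 ms in place of a real request.
        List<Long> latencies = run(12, () -> {
            try { Thread.sleep(20); } catch (InterruptedException e) { }
        });
        System.out.println("completed calls: " + latencies.size());
    }
}
```

A real diagnostics product adds the pieces this sketch omits -- CPU and JVM memory monitoring, ramp-up, and reporting -- but the thread-per-user timing loop is the core of the technique.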

Actional Diagnostics is currently in beta testing with customers and will soon be available as a free download. Developers interested in being sent an alert when the software download is available can register at: http://www.progress.com/web/global/alert-actional-diagnostics/index.ssp.