Wednesday, December 3, 2008

Active Endpoints beefs up visual SOA orchestration with added features and expanded OS support

Active Endpoints, the inventor of the visual orchestration system (VOS), has spruced up its flagship offering with enhancements that include operational improvements and expanded operating system and database support.

The Waltham, Mass.-based company has released ActiveVOS 6.0.2, which includes new reporting capabilities, an enhanced ability to reuse plain old Java objects (POJOs), and new platform and operating system support.

The new version of ActiveVOS also addresses the ongoing debate over whether service oriented architecture (SOA) modeling languages such as business process modeling notation (BPMN) can be directly executed by the business process management system (BPMS) or must first be serialized into an execution-focused language like business process execution language (BPEL). [Disclosure: Active Endpoints is a sponsor of BriefingsDirect podcasts.]

According to Active Endpoints' Alex Neihaus, vice president of marketing:
We support the latter approach on the simple theory that models aren’t fully specified and therefore cannot be executed on real machines – on which everything must be specified. That’s why we believe most modeling-oriented BPMSs end up frustrating business analysts and forcing developers to write lots of Java code to implement simple processes.
Neihaus said that the new version is the first iteration of something the company intends to pursue: unification of the visual styles of creating models and process design. An example of this can be found at the Active Endpoints Web site, where the “bordered style” link in the second row of the table shows a screen shot of the new design. Neihaus adds:
It’s just a beginning, but our BPEL processes are beginning to look and feel like BPMN-style models, and vice versa. As we do more of this, we’ll obviate the debate, bring analysts and developers closer together and make it possible to deliver BPM applications more easily.
In other enhancements to the product, service level reporting provides a granular perspective into average process response time, giving users the opportunity to identify and respond to bottlenecks that affect overall process performance.

New reporting capabilities include a summary of the response time of the top ten activities of the process, allowing operations staff to optimize running processes. Rich scheduling helps define the frequency of process execution using any combination of months, weeks, days, hours, minutes and seconds.
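The idea of combining calendar units (months, weeks, days) with clock units (hours, minutes, seconds) in one recurrence can be sketched in plain Java. This is only an illustration of the concept using the standard `java.time` library, not the ActiveVOS scheduling API:

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.Period;

public class ScheduleSketch {
    public static void main(String[] args) {
        LocalDateTime lastRun = LocalDateTime.of(2008, 12, 3, 9, 0, 0);

        // "Every 1 month, 2 days, 3 hours": a calendar-based part
        // (Period) combined with a time-based part (Duration).
        Period calendarPart = Period.ofMonths(1).plusDays(2);
        Duration clockPart = Duration.ofHours(3);

        // Apply the calendar part first, then the clock part.
        LocalDateTime nextRun = lastRun.plus(calendarPart).plus(clockPart);
        System.out.println(nextRun);  // 2009-01-05T12:00
    }
}
```

Note the design point: months and days vary in length, so the calendar-based part must be resolved against a concrete date before the fixed-length clock part is added.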

ActiveVOS now allows Java objects to maintain state, significantly increasing the number of use cases that can be fulfilled using the POJO capabilities. Previously, developers could reuse POJOs as native web services, providing the ability to invoke stateless Java objects directly from a process.
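The stateless-versus-stateful distinction can be sketched in a few lines of Java. The class and method names below are hypothetical illustrations, not the ActiveVOS API:

```java
// Stateless POJO (the previous model): each call is independent,
// and no fields survive between invocations.
class QuoteService {
    int discountedPrice(int price, int percentOff) {
        return price - price * percentOff / 100;
    }
}

// Stateful POJO (the new model): a field retains its value across
// invocations within a long-running process instance.
class ApprovalTracker {
    private int approvals = 0;  // survives between calls

    void recordApproval() {
        approvals++;
    }

    boolean isFullyApproved(int required) {
        return approvals >= required;
    }
}

public class PojoSketch {
    public static void main(String[] args) {
        QuoteService quotes = new QuoteService();
        System.out.println(quotes.discountedPrice(200, 10));  // 180

        ApprovalTracker tracker = new ApprovalTracker();
        tracker.recordApproval();
        tracker.recordApproval();
        System.out.println(tracker.isFullyApproved(2));  // true
    }
}
```

A stateless object like `QuoteService` can be invoked from any step of any process interchangeably; something like `ApprovalTracker` only becomes useful once the engine can keep the same object instance alive across the steps of one process, which is the capability the release describes.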

In addition to the server operating systems, application servers and database systems previously supported, ActiveVOS is now additionally certified with:
The latest version of ActiveVOS can be downloaded from www.activevos.com. A free, 30-day trial is available and includes support to assist in the evaluation.

ActiveVOS 6.0.2 is priced at $12,000 per CPU socket for deployment licenses. Development licenses are priced at $5,000 per CPU socket. Pricing is also available for virtualized environments.

Monday, December 1, 2008

Interview: HP’s Tim Hall on heightening roles of governance in SOA, cloud and managing dynamic business boundaries

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Read complete transcript of the discussion.

As enterprises scale up their use of service oriented architecture (SOA), proper governance is providing an insurance effect. By deploying governance alongside and in sync with SOA development and deployment capabilities, enterprises are growing the use of SOA without stumbling -- allowing companies to “crawl, walk, and run” to SOA without losing control.

SOA governance heightens the business benefits of services, increases IT efficiency returns, and reduces the risk that complexity could undermine the services lifecycle in large organizations. Services governance also sets the stage for leveraging cloud and third-party services, while managing the boundary between internal and external services.

To unpack the relationship between SOA, governance, cloud and IT management, I recently interviewed Tim Hall, director of SOA Products for HP Software and Solutions.

Here are some excerpts:
The purpose of SOA governance has been to set the architectural vision and direction, lay the ground rules under which those activities are going to take place, and then foster collaboration between architects and the other people who engage in the processes of building solutions for companies, be they consumer focused or within enterprise IT.

The whole thing is tracking your progress: where you are in this journey. It's not about installing a new pack of middleware and then declaring victory. You really have to measure along the way what you are doing and how far you have gotten. Some measures that people start off looking at are things like reuse.

The whole notion of providing a service is to hide the complexity behind layers of abstraction, so that we can make changes behind the scenes that don't necessarily disrupt or alter the offering of the service. There are a lot of examples of this in the real world. Why hasn't IT been able to do a better job of capitalizing on those things?

This is one of those transformation opportunities. We're not just talking about Web services. We're talking about different ways in which we need to be able to flexibly compose and offer capabilities back to the business through a channel called a service. ... The adoption of services as a fundamental unit of commerce, if you will, within IT does something very fundamental to the way that people work together.

From HP's perspective, we are definitely trying to make sure that the collaboration between architects, quality assurance professionals, and operations personnel is there. That's kind of the point of the various solution offerings we're bringing to market: to make sure that none of these is an island. Those control points can reasonably be connected and allow for collaboration across all the different participants.

We're learning lots of interesting things about IT, and in particular, the ways that we can do things better. The whole notion of instilling an architectural vision to support change and flexibility; to give tools to the folks who are building composite systems, so they can better manage the roles and responsibilities for the various people that are participating in that; and better communicate with operations is something that we haven’t done very well.

It's really a matter of mapping your organizational maturity and what you're trying to achieve with the appropriate tools. People shouldn't be running out and buying tools, unless they really understand what problems those tools are going to solve, and unless, as an organization, they can introspect what they have done in the past and say which problems they want to solve or avoid.

The lessons that we're learning ... are specifically being applied to SOA right now, [but] have more far-reaching implications. As we look at things, like the different compositional patterns for systems that are coming -- Web 2.0 technologies, Ajax, rich Internet applications (RIAs), putting front ends on some of these things, or cloud computing -- all of these things are interrelated. My question is, should we not be applying these fantastic concepts and activities that we have been establishing through SOA governance more broadly to support all of these different types of next-generation composition?

From HP's perspective, the answer is absolutely. The question is at what point we are going to be talking about next-generation application lifecycle management, or next-generation application composition, and stop talking about SOA by itself as an island.

... There are more people coming to the table, more constituents coming to say, “How can I connect to these governance activities that are going on for services, but really for the purpose of generating some new business outcomes?” That, to me, is tremendously exciting.

I think one of the things you're going to see -- I'm not sure how far in the future, it's coming up more and more these days -- is an emphasis on understanding the business-to-business connections, or what some folks will call "federation."

I want to be very specific when I say "federation," because it is one of those overloaded terms that creates a lot of mystery. If we can take the wraps off of federation, what we're talking about is a pattern for how to expose the capabilities that I own within my domain to other domains. Those other domains could be within my organization, they could be elsewhere, or they could be third parties.

The good news is that SOA fundamentally supports that type of activity. The question is how well the tools support that activity today.

As we move into a more comprehensive cloud set of offerings, we're going to need to federate the different instances of services, metadata, their ownership, and the consumption of those pieces, and really formalize the relationships, using tools, between the consumers and providers of those things.

When I say establishing relationships, I think about trading-partner agreements or supply chain agreements. They get put in place between supply chain partners, specifying what information they're going to share and in what context they can use it.

We're really talking about doing the same kind of formalization with the consumption and providing of these various capabilities, in order for models like SaaS and cloud to scale up to the level that they need to in order to make a significant impact.
Read complete transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Monday, November 24, 2008

Enterprises can leverage cloud models and manage transition risks using service governance, says HP

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Read complete transcript of the discussion.

Much has been said about cloud computing in 2008, and still many knowledgeable IT people scratch their heads over what it all really means. They want to know: How can enterprises best prepare to take advantage of this shift in IT resources -- but avoid transitional risks and uncertainty?

Cloud and on-premises compute grids exploit breakthroughs in technology architecture and the confluence of new business models; this much is clear. In times when every dollar counts more than ever, the enticement to experiment with cloud models is powerful. At the same time, there is very little margin for error. Adopting new IT approaches cannot injure the business interests, image, or fiscal performance of any company.

To understand more about the balance and best practices route to emerging cloud and utility IT values, I recently spoke to executives at Hewlett-Packard (HP) and its EDS brethren. They point to the need to understand the role of service oriented architecture (SOA) and governance in exploring cloud opportunities. The future looks promising for cloud adoption, as long as there's a coordinated and managed approach.

To learn more about enterprise cloud adoption, please join the discussion with Rebecca Lawson, Director of Service Management and Cloud Solutions at HP; Scott McClellan, Vice President and Chief Technologist of Scalable Computing and Infrastructure in HP’s Technology Solutions Group, and Norman Lindsey, Chief Architect for Flexible Computing Services at EDS, an HP company.

Here are some excerpts:
Really, from an enterprise point of view, when running mission-critical applications that need security and reliability and are operating with service-level agreements (SLAs), etc., the cloud isn’t quite ready for prime time yet. There are both technical and business reasons why that’s the case.

As far as the idea of the cost savings, it’s good to look at why that is the case in a few certain areas, and then to think about how you can reduce the cost in your own infrastructure by using automation and virtualization technologies that are available today, and that are also used in the “cloud.” But, that doesn’t mean you have to go out to the cloud to automate and virtualize to reduce some cost in your infrastructure.

The cloud is an evolution of other ideas that have come before it: grid, and before that, Web services. All these things combine to enable people to start thinking of this as delivering service with a different business model, where we are paying for it by the unit, or in advance, or after the fact.

Virtualization and these other approaches enable the cloud, but they aren’t necessarily the cloud. What IT departments have to do is start to think about what is it they’re trying to accomplish, what business problem they’re trying to address, as they look at cloud providers or cloud technologies to try and help solve those problems.

We’ve seen people do their own private utilities versus public utilities such as those flexible computing services provide. The idea of a private utility is that, within an organization, they agree to share resources and allow the boundaries to slide back and forth to get the best utilization out of a fixed set of assets, or maybe a growing set of assets.

The same idea is in a public utility or a public cloud, except that now a third party is providing those assets and providing that as a service. It increases the concerns and considerations that you have to bring to the party. You have to think about problems that you didn’t have to think about when you had a private utility.

When you go to a public space, security is paramount. What do I do with my proprietary information and service levels? How certain can I get what I need when I need it? The promise with the cloud is great, but the uncertainty has caused people to come up short and decide maybe it’s better if I do it myself, versus utilizing an outside service.

We need to think in terms of which services provide what level of value, based on the complexion of that particular company -- and it’s never going to be the same for all companies. Some companies can use Google Gmail as an email service. Other companies wouldn’t touch it with a 10-foot pole, maybe for reasons of security, data integrity, access rights, regulations, or what have you. So weighing the value is going to become the critical thing for IT.

In the longer term, the more overarching impact of cloud comes when your IT department can deliver value back to the business, rather than just taking cost out. Some examples of that are using aspects of social networking and other aspects of cloud computing, and the fact that cloud is delivered over ubiquitous media, the Internet, to increase share of wallet, increase market share, maybe bring higher margin to a business, and build ecosystems, and drive user communities for a business. That’s where cloud brings value to a business and that’s obviously important.

You can start to look around at your internal capabilities, versus external, and make some decisions as to how you want to solve that problem, whether buying an external service or creating a service internally and delivering it to your customers with your own internal utility. ... This will force IT to come closer to the people in the business and really understand what is the business objective, and then find the right service that maps to the value of that objective. Again, we can’t emphasize it enough. This should really change behavioral dynamics in IT and how they think about what their job is.

Basically, within the spectrum of things that are cloud computing, you have everything from infrastructure as a service … all the way up through virtualized infrastructure, a platform on top of that, an application on top of that, or perhaps a completely re-architected true cloud-computing offering.

As you move up that spectrum, I think the benefits increase, but in not all cases are the application domains available in all of those environments. ... What services are available through some cloud model, what model of availability, what are the characteristics of that model, what are the requirements for that particular service – and what are the security performance, continuity integration, and compliance requirements? Those all have to be taken in holistically and through a governance model to make the decision whether we are going to move from the traditional deployment model to a cloud-delivery model, and if so, which one.

In the process of getting to a service-centric IT governance model, they’re going to have to deal with the governance model for deploying new services. Again, I think risk is partly a function of benefit. So when there is a marginal benefit or when the stakes are very high, you would want to be very conservative in terms of your risk profile.

The tougher economic conditions would heighten the acceleration of cloud computing, and not just because of the opportunity to save cost. Reinforcing what we brought up earlier, there are some clear opportunities to bring value to your business.

Examples of that are things like being able to drive user communities, users and consumers of whatever it is your business produces, using techniques of social networking, and things like that.

There is the question of how to use the advantages you get from cloud computing to drive differentiation for your business versus your competitors, because they’re hesitating, or not using it, because they’re being risk-averse. In addition, that complements the benefits you get from cost savings.

What I really meant is that, if you are an IT shop and you are trying to decide what to move to a cloud paradigm or a cloud model, you’re likely to really focus on the places where you can get a big win -- because moving a particular service to a cloud paradigm is going to bring you some positive differentiation, some value to your company -- or where you are going to get big cost savings.

For the places where it's the most mission-critical -- where you have the least tolerance for downtime, the greatest continuity requirements, or the most stringent performance SLAs -- the thinking may be, “Well, we’ll tackle that later. We’re not going to take a risk on something like that right now.”

In the places where the risk is not as great -- and the reward either in terms of cost or value looks good -- the current economic conditions are just going to accelerate the adoption of cloud computing in enterprises for those areas. And they definitely do exist.
Read complete transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

For more information on HP Adaptive Infrastructure, go to: www.hp.com/go/ai/.

Clickability offers enhanced Web content management through SaaS media repository

Clickability, a Web content management (WCM) provider, has announced its Clickability Media Solution, designed to provide a centrally managed software-as-a-service (SaaS) content repository aimed at large media companies. The new offering will allow companies to manage all of their content for multiple Web channels from the single repository.

San Francisco-based Clickability currently partners with many of the world's largest media companies to develop revenue-producing online solutions, allowing them to manage large traffic spikes with no investment in hardware or software. The agility that comes from the WCM system allows companies to experiment and innovate without incurring additional cost.

To me, Clickability offers what really should be thought of as cloud publishing and advanced media services. Consider that the more media firms use common services providers like Clickability, the more they can gain from common services -- including advertising or even lead generation.

Media companies can offload a lot of their publishing distribution and support services (what used to be called circulation), and focus on the content, the audience, and the media monetization model. Why should each publisher or title have its own Web infrastructure?

In fact, Clickability provides service oriented architecture (SOA) for media companies, and via a SaaS and cloud model, no less. I'm beginning to see that getting to SOA via cloud and SaaS may become more common as economic conditions deteriorate. The cost-benefit analysis simply becomes too compelling. We've seen this approach work with blog publishing, but I think the model runs much deeper and wider.

What's more, under the media cloud model, the infrastructure provider can keep offering new services, such as social networking, advanced semantic search, mobile access, location-based services -- all at a fraction of what each publisher would need to spend to acquire such services on their own.

I also look forward to the day when cloud models start to properly analyze the audience, gain metadata inference on their needs and wants, and provide the information relevance that joins need to solution. At that inception point lies a whole new business model -- better than advertising, less costly than traditional lead generation.

Those days are coming, and Clickability strikes me as a strong contender for redefining media based on common infrastructure, lower total costs, and more granular services. Let the media firms produce the content and know their audiences best, while the infrastructure provider handles the common services and explores how to move to transaction-based monetization.

For now, with Clickability Media Solution, companies can begin this cloud ecology journey by tagging and annotating content for efficient search and reuse. They can also link and share content assets across channels and publications. The single repository lets companies create highly targeted micro sites or regional portals that rely on metadata to automatically populate them with appropriate content and contextual links.

Included in the solution are interactive features, such as social networking, blogging, video serving, ticketing, personalized calendars, site customization, and an on-demand ad server that ties to specific pages and sections in a site. It also integrates with a company's existing video platform.

Because it's a SaaS environment, customers can enhance their site and make improvements without spending time on writing new software or installing a dedicated infrastructure. This agility allows companies to take advantage of new opportunities in the market.

Clickability Media Solution is available immediately.

Tuesday, November 18, 2008

Changing business landscape makes identity and access management key to IT security

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Read complete transcript of the discussion.

In an age of significant layoffs and corporate restructuring, the burgeoning problem of identity and access management for IT operations and data centers has escalated into a critical security issue. Managing who gets access to which resources for how long -- and under what circumstances -- has become a huge and thorny problem.

Improper and overextended access to sensitive data and powerful applications can cause massive risk as many employees find themselves in flux.

To learn more about how enterprises can begin coordinated identity and access management strategies, BriefingsDirect's Dana Gardner spoke with Dan Rueckert, worldwide practice director for security and risk management in HP’s Consulting and Integration group; Archie Reed, distinguished technologist in HP’s security office in the Enterprise Storage and Server Group, and Mark Tice, vice president of identity management at Oracle.

Here are some excerpts:
When we look at identity and access management (IAM), we are really saying that the speed of business is increasing, and with it the rate at which organizations change to support their business. You see it every day in the mergers and acquisitions that are going on right now. As a result, you see consolidation.

All these different factors are going on. We are also driving regulations, and compliance to those regulations, on an ongoing basis. As you take on these regulations, the ability for people to have access to the tools, applications, and data that they need at the right time is key.

The reality in the market is that many things impact that security posture, internally, every time a new system is installed, any product or service defined, or even when a new employee joins. Externally, we're impacted by new regulations, new partnerships, new business ventures, whatever form they may take. All those things can impact our ability, or our security posture.

Security is much like business. That is, it’s impacted by many, many factors, and the problem today is trying to manage that situation. When we get down to tools and requirements around such things as identity management, we are dealing with people who have access to systems. The criticality there is that, with so many public breaches coming to light recently, security is again a high concern.

When we start thinking about security, one of the first things that people look at generally is some sort of risk analysis. As an example, HP has an analysis toolkit that we offer as a service to help folks decide what is critical to them. It takes all sorts of inputs: the regulations that are impacting your business, and the internal drivers to ensure that your business is not only secured, but also moving in the direction you want it to move.

Within this toolkit, called the Information Security Service Management (ISSM) reference model, is a set of tools where we can interview all of the participants, all of the stakeholders in that policy or process, and then look at the other inputs that are predefined, such as the regulations.

[The solution] is definitely people, process, and technology coming together. In some cases, it’s situational, as far as working with customers that have legacy systems, or more modern systems. That starts to dictate how much of that process and consulting they need, and how much technology.

When we talk about the HP-Oracle relationship, it’s about having that strong foundation as far as IAM, but also the ability to open up to the other areas that it's tied into, in this case enterprise architecture, the middleware pieces that we want for databases, and other applications that they have.

You start to put that thread with IAM, combined with an infrastructure, and that opens this up as a whole, which is key. And enablement -- depending on the size, complexity, localization, or globalization -- tends to play into those attributes of people, process, and technology.

Even in the virtualization space, where everybody is trying to get more from the same hardware, you cannot ignore things such as access control. When you bring up who has access to that core system, when you bring up who has access to the operating system within the virtual environment, all of those things need to be considered and maintained with the right business and access controls in place.

The only way to do that is by having the right IAM processes and tools that allow an organization to define who gets access to these things, because important processing is happening on the one box. You are no longer just securing the box physically. You're securing the various applications that are stacked on top of all of that.

One of the things that we really work hard to do is make sure that first off, before breaking ground on one of these projects, customers put in place a complete framework, or architecture, for their security and identity management, so that they really have a complete design that addresses all of their needs. We then encourage them to take things on one piece at a time. We design for the big bang, but actually recommend implementing on a piece-by-piece basis.

By having these things that are predefined, not only in terms of being more prescriptive for companies, which helps them a lot, but also being more accessible in terms of how quickly they can decide what's important, allows them to move on and decide in which order they’re going to implement their security strategy.

Those sorts of things allow a company to get up to speed quickly and analyze where they’re at. You may have a security review every year, but a lot of companies need to do it more often in more isolated ways. Having the right tools come out of these sorts of things allows them to do ongoing assessments of where they’re at, as well.
Read complete transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

For more information on HP and Oracle Identity and Access Management.

For more information on HP Secure Advantage.

For more information on HP Adaptive Infrastructure.

SOA, BPM cozy up to desktop with TIBCO, OpenSpan partnership

A technology and business partnership between desktop solutions provider OpenSpan and TIBCO Software helps integrate TIBCO SOA solutions with desktop applications without requiring changes to the programs.

OpenSpan of Alpharetta, Ga. and TIBCO of Palo Alto, Calif. will partner on service-oriented architecture (SOA), business process management (BPM), and business optimization solutions. A number of products from both companies will be used to create broader solutions that provide fuller business productivity-level outcomes.

For example, TIBCO's Enterprise Message Service, a standards-based integration platform, brings together IT assets and communications technologies on a common enterprise backbone to manage the real-time flow of information.

The OpenSpan Platform extends the service by enabling a wide range of applications deployed within enterprise desktop environments to consume services and emit events.

TIBCO's ActiveMatrix, a service platform for heterogeneous SOA, delivers service-oriented applications by separating the applications from the technology details. This separation enables companies to incrementally add orchestration, integration, mediation, and Java and .NET services to a unified runtime platform. The OpenSpan Platform enables any application, including legacy Windows, client-server and host applications, running on users’ desktops to become service-enabled and participate in TIBCO SOA solutions. [Disclosure: TIBCO is a sponsor of BriefingsDirect podcasts.]

Together the products cover SOA infrastructure requirements while ushering the services to the prevalent clients. The proper paths for SOA workflows and processes out to the user have been a subject of much and varied discourse over the past few years. There is no right answer; the more the better. Even rich documents can be part of a SOA landscape.

The TIBCO iProcess Suite delivers BPM Plus, a unified approach to BPM that enables organizations to automate, optimize and improve any type of process – from routine tasks to mission-critical, long-lived processes that involve people, information and applications across organizational and geographical boundaries. OpenSpan extends TIBCO’s BPM capabilities to the desktop.

TIBCO BusinessEvents allows companies to identify and quantify the impact of events and notify people and systems about meaningful events so processes can be adapted on the fly to capitalize on opportunities and remediate threats. OpenSpan enables applications deployed on corporate desktops to be rapidly instrumented to trigger events.

Solutions-based approaches that leverage multiple vendors' capabilities are a hallmark of SOA. It's good to see the vendors recognizing it.

Sunday, November 16, 2008

BriefingsDirect analysts review new SOA governance book, propose scope for U.S. tech czar

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Charter Sponsor: Active Endpoints.

Read a full transcript of the discussion.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Welcome to the latest BriefingsDirect Insights Edition, Vol. 33, a periodic discussion and dissection of software, services, service-oriented architecture (SOA) and compute cloud-related news and events, with a panel of IT analysts and guests.

In this episode, recorded Nov. 7, our experts examine SOA governance, how to do it right, its scope, its future, and impact. We interview Todd Biske, author of the new Packt Publishing book, SOA Governance. The panel also focuses on the IT policies that an Obama administration should pursue, and ruminates about what a cabinet-level IT director appointee might accomplish.

Please join noted IT industry analysts and experts Jim Kobielus, senior analyst at Forrester Research; Tony Baer, senior analyst at Ovum; and Biske, an enterprise architect at Monsanto. Our discussion is hosted and moderated by yours truly, Dana Gardner.

Here are some excerpts:
On SOA governance ...

Biske: The reason that I decided to write a book on this is actually two-fold. First, in my work, both as a consultant, and now as a corporate practitioner, I'm trying to see SOA adoption be successful. The one key thing I always kept coming back to, which would influence the success of the effort the most, was governance. So, I definitely felt that this was a key part of adopting SOA, and if you don't do it right, your chances of success are greatly diminished.

The second part of it was when the publisher actually contacted me about it. I went out and looked and I was shocked to find that there weren't any books on SOA governance. For as long as the SOA trend has been going on now, you would have thought someone would have already written a book on it. I said, "Well, here's an opportunity, and given that it's not really a technology book, it's more of a technology process book, it actually might have some shelf life behind it." So I decided, why not give it a try.

The reason companies should be adopting SOA is that something has to change. There is something about the way IT is working with the rest of the business that isn't operating as efficiently and as productively as it could. And, if there is a change that has to go on, how do you manage that change and how do you make sure it happens? It's not just buying a tool, or applying some new technology. There has to be a more systematic process for how we manage that change, and to me that's all about governance.

If I just blindly say, "We're going to adopt SOA," and I tell all the masses, "Go adopt SOA," and everybody starts building services, I still haven't answered the questions, "Why am I doing this, and what do I hope to achieve out of it?"

If I don't make that clear, I could easily wind up with a whole bunch of services and building a whole bunch of solutions. I'll have far more moving parts, which are far more difficult to maintain. As a result, I actually go in the opposite direction from where I needed to go. If you don't clearly articulate, "This is the desired behavior. This is why we're adopting SOA," and then let all of the policy decisions start to push that forward, you really are taking a big risk. It's an unknown risk. You're not managing it appropriately if you don't have an end state in mind.

If you look at traditional IT governance, it is more about what projects we execute, how do we fund them, and structuring them appropriately, and that has a relationship to SOA governance. It doesn't go into the deep levels of decisions that are made within those projects.

If you were to try to set up a relationship, I would put IT governance, and even corporate governance, over the SOA governance aspects, at least, the technical side of it. The other piece of that is, when we talk about runtime governance, IT governance probably is focused on the runtime aspects of it. That's really a key part of this, making sure that our systems stay operational and that the operational behavior of the organization is the way we want it to be. So there is a relationship between them.

Baer: My sense is that, given the current economic environment, you're going to see a lot more in the way of tactical projects. ... We need to look at some jump-starts in a sensible, sort of "lite," like, L-I-T-E governance. That's governance that basically federates, or is compatible with, the software-delivery lifecycle. And, when we get to runtime, it's compatible with whatever governance we have at runtime.

The objective of SOA is to achieve reuse, but it's really to achieve business agility. Therefore, whether we shoot for reuse, initially or not, it will not necessarily be the ultimate measure of success for a SOA initiative. SOA Governance Lite would not emphasize very heavily the reuse angle to start off with. You may get to that at Stage 2 in your maturity cycle.

Kobielus: The flip side right now is that you can look at it as a survivor-oriented architecture. You have a survival imperative in tough times. Do you know if your company is going to be around in a year's time? The issue right now in terms of SOA is, "You want to hold on and you want to batten down the hatches. You want to be as efficient as possible. You want to consolidate what you can consolidate in terms of hardware, software, licenses, competency centers, and so forth. And, you're probably going to hold the line on investment, further applications, and so forth."

For SOA, in this survival oriented climate that we're in right now, the issue is not so much reusing what you already have, but holding on to it, so that you are well positioned for the next growth spurt for your business and for the economy, assuming that you will survive long enough. Essentially, SOA Governance Lite uses governance as a throttle, throttling down investments right now to only those that are critical to survive, so that you can throttle up those investments in the future.

Biske: I'm not a believer in the term "lite" governance. I'm of the opinion that you have governance, whether you admit it or not. An alternative view of governance is that it is a decision-rights structure. Someone is always making decisions on projects.

The notion of Governance Lite is that we're saying, "Okay, keep those decisions local to the project as much as possible. Don't bubble them up to the big government up there and have all the decisions made in a more centralized fashion." But, no matter what, you always have governance on projects. Whether it's done more at the grassroots level on projects, or by some centralized organization through a more rigid process, it still comes back to having an understanding of what's the desired behavior that we are trying to achieve.

Where you run into problems is when you don't have agreement on what that desired behavior is. If you have that clearly stated, you can have an approach where the project teams are fully enabled to make those decisions on their own, because they put the emphasis on educating them on, "This is what we are trying to achieve, both from a project perspective, as well as from an enterprise perspective, and we expect you to meet both of those goals. And if you run into a problem where you are unsure on priorities, bubble that decision up, but we have given you all the power, all the information you need. So, you're empowered to make those decisions locally, and keep things executing quickly."

Another parallel we can draw to this is the current economic crisis. The risk you have in becoming too federated, and getting too many decisions made locally, is that you lose sight of the bigger picture. You can look at all of these financial institutions that got into the mortgage-backed securities and argue that their main focus was not the stability of the banking system, it was their bottom line and their stock price.

They lost sight of, "We have to keep the financial system stable." There was a risk in pushing too much down to the individual groups without keeping that higher vision and that balance between them. You can get yourself in a lot of trouble. The same thing holds true in [SOA] development.

On PE Obama's technology leader ...

Baer: Obviously, you need somebody who is going to ... think outside the box. Basically, the government has long been a series of lots of boxes or silos, where you have these various fiefdoms. Previous attempts to unify architectures at the agency levels have not always been terribly successful.

The chief priority for anybody who is ... in a CIO-type of role at the cabinet level is ... to look for getting more out of less. That's essential, because there are going to be so many competing needs for so many limited resources. We have to look for someone who can formulate strategic goals -- and I'm going to have to use the term reuse -- to reuse what is there now, and federate what is there now, and federate with as light a touch as possible.

Kobielus: It comes down to the fact that they're driving at many of the same overall objectives that also drive SOA initiatives. One initiative is to break down silos in terms of information sharing between the government and the citizenry, but also silos internally within the government, between the various agencies, to help them better exchange information, share expertise, and so forth. In fact, if we look at their position statement called "Bring government into the 21st century," it really seems that it's part of the overall modernization push for IT and the government. They're talking really about a federated SOA governance infrastructure or a set of best practices.

Tech modernization in the government is absolutely essential. Reuse and breaking down silos between agencies is critically important. Brokering best practices across the agencies, specific silo IT and CTO organizations, is critically important. It sounds to me as if Obama will be an SOA President, although he doesn't realize it yet, if he puts in place the approach that he laid out about a year ago, considering that the IT infrastructure in the government is probably right now the least of his concerns.

Biske: [Obama] definitely has a challenge, and I am thinking from a governance perspective. He has taken step one, in the paragraph that Jim just mentioned, of bringing government into the 21st century. He has articulated that this is the way that he wants our systems to interact and share information with the constituents.

The next step is the policies that are going to get us there, and obviously he's time-boxed by the terms of his presidency. He's got a big challenge ahead of him, or at least the CTO that gets appointed has a huge challenge. Somehow, you have to break it down into what goals are going to be achievable in that timeframe.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Charter Sponsor: Active Endpoints.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Friday, November 14, 2008

Interview: rPath’s Billy Marshall on how enterprises can virtualize applications as a precursor to cloud computing

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: rPath.

Read complete transcript of the discussion.

Many enterprises are weighing how to bring more applications into a virtual development and deployment environment to save on operating costs and to take advantage of service-oriented architecture (SOA) and cloud computing models.

Finding proven deployment methods and governance for managing virtualized applications across a lifecycle is an essential ingredient in making SOA and cloud-computing approaches as productive as possible while avoiding risk and complexity. The goal is to avoid having to rewrite code in order for applications to work across multiple clouds -- public, private or hybrids.

The cloud forces the older notion of "write-once, run anywhere" into a new level of "deploy correctly so you can exploit the benefits of cloud choices and save a lot of money."

To learn more about how enterprises should begin moving to application-level virtualization that serves as an onramp to cloud benefits, I recently spoke with Billy Marshall, founder and chief strategy officer of rPath.

Here are some excerpts:
We're once again facing a similar situation now where enterprises are taking a very tough look at their data center expenditures and expansions that they're planning for the data center. ... The [economic downturn] is going to have folks looking very hard at large-scale outlays of capital for data centers.

I believe that will be a catalyst for folks to consider a variable-cost approach to using infrastructures or service, perhaps platform as a service (PaaS). All these things roll up under the notion of cloud.

Virtualization provides isolation for applications running in their own logical server, their own virtual server. ... Virtualization gives you -- from a business perspective -- an opportunity to decouple the definition of the application from the system that it runs on. ... Then, at run-time, you can decide where you have capacity that best meets the needs of the profile of an application.

I can begin sourcing infrastructure a little more dynamically, based upon the load that I see. Maybe I can spend less on the capital associated with my own data center, because with my application defined as this independent unit, separate from the physical infrastructure I'll be able to buy infrastructure on demand from Amazon, Rackspace, GoGrid, these folks who are now offering up these virtualized clouds of servers.

That's the architecture we're evolving toward. ... For legacy applications, there's not going to be much opportunity. [But] they may actually consider this for new applications that would get some level of benefit by being close to other services.

[If] I can define my application as a working unit, I may be able to choose between Amazon or my internal architecture that perhaps has a VMware basis, or a Rackspace, GoGrid, or BlueLock offering.

Another big consideration for these enterprises now is, do I have workloads that I'm comfortable running on Linux right now, and so can I take a step forward and bind Linux to the workload in order to take it wherever I want it to go?

rPath brings a capability around defining applications as virtual machines (VMs), going through a process whereby you release those VMs to run on whichever cloud of your choosing, whether a hypervisor virtualized cloud of machines, such as what's provided by Amazon, or what you can build internally using Citrix XenSource or something like VMware's virtual infrastructure.

It then provides an infrastructure for managing those VMs through their lifecycle, for things such as updates, backup, and configuration of certain services on the machines, in a way that's optimized to run a virtualized cloud of systems. We specialize in optimizing applications to run as VMs on a cloud or virtualized infrastructure.

With our technology, we enforce a set of policies that we learned were best practices during our days at Red Hat when constructing an operating system. We've got some 50 to 60 policies that get enforced at build time, when you are building the VM. They're things like don't allow any dangling symlinks, and closing the dependency loop around all of the binary packages to get included. There could be other more corporate-specific policies that need to be included, and you would write those policies into the build system in order to build these VMs.
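As an illustration only (this is not rPath's actual implementation, and the paths are invented), a build-time policy such as the dangling-symlink check could be sketched as a scan over the staged VM image tree, failing the build before the image ever reaches a hypervisor:

```python
import os

def find_dangling_symlinks(image_root):
    """Scan a staged VM image tree and report symlinks whose targets
    do not exist inside the image -- one example of a policy enforced
    at build time rather than discovered at run time."""
    dangling = []
    for dirpath, dirnames, filenames in os.walk(image_root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path):
                target = os.readlink(path)
                # Resolve the link target relative to the image root,
                # not the build host's own filesystem.
                if os.path.isabs(target):
                    resolved = os.path.normpath(image_root + target)
                else:
                    resolved = os.path.normpath(os.path.join(dirpath, target))
                if not os.path.exists(resolved):
                    dangling.append(path)
    return dangling

# A build step would fail on any violation, e.g.:
# violations = find_dangling_symlinks("/tmp/staged-image")
# assert not violations, violations
```

The same pattern generalizes to the other checks Marshall describes: each policy is a function over the staged image, and the build succeeds only if every policy returns no violations.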

It's very similar to the way you put policies into your application lifecycle management (ALM) build system when you were building the application binary. You would enforce policy at build time to build the binary. We're simply suggesting that you extend that discipline of ALM to include policies associated with building VMs. There's a real opportunity here to close the gap between applications and operations by taking much of what has typically been done in installing an application and taking it through Dev, QA and Test, and having that be part of an automated build system for creating VMs.

People are still thinking about the operating system as something that they bind to the infrastructure. In the new case, they're binding the operating system to the hypervisor and then installing the application on top of it. If the hypervisor is now this bottom layer, and if it provides all the management utilities associated with managing the physical infrastructure, you now get an opportunity to rethink the operating system as something that you bind to the application.

When you bind an operating system to an application, you're able to eliminate anything that is not relevant to that application. Typically, we see a surface area shrinking to about 10 percent of what is typically deployed as a standard operating system. So, the first thing is to package the application in a way that is optimized to run in a VM. We offer a product called rBuilder that enables just that functionality.
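The idea of binding an operating system to an application can be sketched as a dependency-closure computation: start from the packages the application directly requires and pull in only their transitive dependencies, leaving the rest of the distribution out of the image. The package names and graph below are invented for illustration; rBuilder's actual mechanics are not shown here.

```python
from collections import deque

def dependency_closure(app_packages, depends_on):
    """Compute the transitive closure of package dependencies via
    breadth-first search. Only packages reachable from the app's
    direct requirements end up in the image, which is how a bound
    OS can shrink to a fraction of a general-purpose install."""
    needed = set(app_packages)
    queue = deque(app_packages)
    while queue:
        pkg = queue.popleft()
        for dep in depends_on.get(pkg, []):
            if dep not in needed:
                needed.add(dep)
                queue.append(dep)
    return needed

# Toy package graph (hypothetical names):
graph = {
    "mywebapp": ["python", "openssl"],
    "python":   ["libc", "zlib"],
    "openssl":  ["libc"],
    "gnome":    ["x11", "libc"],   # desktop stack the app never touches
}
image = dependency_closure(["mywebapp"], graph)
# "gnome" and "x11" never enter the image; only the five reachable
# packages do.
```

In this toy graph the image holds five packages out of seven, mirroring the roughly 90 percent reduction in surface area Marshall describes for real workloads.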

If you prove to yourself that you can do this, that you can run [applications] in both places (cloud and on-premises), you've architected correctly. ... That puts you in a position where eventually you could run that application on your local cloud or virtualized environment and then, for those lumpy demand periods -- when you need that exterior scale and capacity -- you might just look to that cloud provider to support that application [at scale].

There's a trap here. If you become dependent on something associated with a particular infrastructure set or a particular hypervisor, you preclude any use in the future of things that don't have that hypervisor involved. ... The real opportunity here is to separate the application-virtualization approach from the actual virtualization technology to avoid the lock-in, the lack of choice.

If you do it right, and if you think about application virtualization as an approach that frees your application from the infrastructure, there is a ton of benefit in terms of dynamic business capability that is going to be available to your organization.
Read complete transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: rPath.