Monday, February 9, 2009

Interview: The Open Group's Allen Brown on advancing value of enterprise IT via architecture

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: The Open Group.

Read a full transcript of the discussion.

The Open Group, a vendor- and technology-neutral consortium, last week delivered TOGAF 9 at the organization's 21st Enterprise Architecture Practitioners Conference in San Diego.

At the juncture of this new major release of the venerable enterprise IT architecture framework, it makes sense to examine the present and future of The Open Group itself, some of its goals, and what else it does for its members.

The global organization is actively certifying thousands of IT architecture practitioners, while using its commercial license to increase the flow of contributed best architecture practices back into TOGAF. Think of it as open source for best architecture principles and methods.

To better understand how The Open Group operates and drives value to its members, I recently interviewed Allen Brown, president and CEO of The Open Group.

Here are some excerpts:
The role of architecture is more important right now because of the complexity, because of the need to integrate across organizations and with business partners. You've got a situation where some of the member companies are integrated with more than a thousand other business partners. So, it's difficult to know where the perimeters and boundaries of the organization are.

If you've trained everyone within your organization to use TOGAF, they're all speaking a common language and they're using a common approach. It's a common way of doing things. If you're bringing in systems integrators and contractors, and they are TOGAF certified also, they've got that same approach. If you're training people, you can train them more easily, because everyone speaks the same language.

One member I was talking to said that they've got something like 500,000 individuals inside their infrastructure who are not their own staff. So this is a concern that's becoming top of mind for CIOs: Who's in my infrastructure, and what are they doing?

We've got, on one hand, the need for enterprise architecture to actually understand what's going on, to be able to map it, to be able to improve the processes, to retire the applications, and to drive forward on different processes. We've also got the rising need for security and security standards. Because you're integrated with other organizations, these need to be common standards.

... Security is now becoming top of mind for many CIOs. Many of them have the integration stuff sorted out. They've got processes in place for that, and they know how they're going to move forward with enterprise architecture. They're looking for more guidance and better standards -- and that's why TOGAF 9 is there.

We're now looking at other areas. We always look at new areas and see whether there is something unique that The Open Group could contribute where we can collaborate with other organizations and where we can actually help move things forward.

We're looking at cloud. We don't know if it's something that we can contribute to at this point, but we're examining it and we will continue to examine it.

The Open Group is broader than just enterprise architecture. The architecture forum is one of a number of forums including Security/Identity Management, the Platform, the UNIX standards, Real-Time and Embedded Systems, Enterprise Management Standards, and so forth. A lot of attention has been focused on enterprise architecture, because of the way that TOGAF has contributed and the way professional standards have been raised.

TOGAF 9 really needed to add some more to TOGAF 8. In March 2007, I did a survey by talking to our members -- really just asking them open-ended questions. What are the key priorities you want from the next version of TOGAF? They said, "We need better guidance on how to align with our business and be able to cascade from that business down to what the IT guys need to deliver. We need more guidance, we need it simpler to use."

TOGAF 8 was very much focused on giving guidance on how to do enterprise architecture, and the key thing was the architecture development method. What they've done now is provided more guidance on how to do it, made it more modular, and made it easier to consume in bite-sized chunks.

Those were the two key driving forces behind where we were going: more guidance and a more modular structure. In trying to do those things, the members focused on how to bring that forward, and it has taken a lot of work.

Then they've added other things like a content framework. The content framework provides a meta model for how you can map from the architecture development method, but it also provides the foundation for tools vendors to construct tools that would be helpful for architects to work with TOGAF.
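
To make the notion of a content metamodel a bit more concrete, here is a minimal sketch in Python -- my own hypothetical illustration of typed architecture entities and relationships, not the actual TOGAF 9 metamodel -- showing the kind of common structure tools vendors can build against.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A generic architecture building block in a hypothetical metamodel."""
    name: str
    kind: str  # e.g., "BusinessService" or "ApplicationComponent"
    relations: list = field(default_factory=list)

    def realized_by(self, other):
        # Record that this entity is realized by another, e.g., a business
        # service realized by an application component.
        self.relations.append(("realized_by", other))

# Map a business service to the application component that supports it
billing = Entity("Customer Billing", "BusinessService")
erp = Entity("ERP Billing Module", "ApplicationComponent")
billing.realized_by(erp)

# A tool built on the metamodel can traverse relationships uniformly
for rel, target in billing.relations:
    print(f"{billing.name} --{rel}--> {target.name} ({target.kind})")
```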

There are a couple of other things that we've done. First, we've introduced the IT Architect Certification (ITAC) program. That provides a certification to say not only that this person knows how to do architecture, but that they can demonstrate it to their peers.

... We've had to deal with much larger numbers of members and contributors, but it's not just TOGAF. It's not just a case of having a framework, a method, or a way of helping organizations do enterprise architecture. We're also concerned with raising the level of professionalism.

The ITAC certification is agnostic on method and framework. You don't have to know TOGAF to do it, but you have to be able to convince a peer review board that you do have experience and that you're worthy of being called an IT architect.

It requires a very substantial resume, and a very substantial review by peers to say that this person actually does know, and can demonstrate they've got the skills to do IT architecture.

If you can imagine a large consortium where you've got 300 member organizations -- which is a lot of people at the end of the day -- and everyone is contributing something while a smaller number is doing the real heavy lifting, you've got to get consensus around it. They have done a huge amount of work.

There is a capability framework -- not a maturity model, but a way of helping folks set up their capability. There are a lot of things now in TOGAF 9 that build on the success of TOGAF 8, and it has taken a huge amount of work by our members.

The great thing about TOGAF 9 is that we've had such a great reception from the analysts, bloggers, and so on. Many of them are giving us recommendations, and they say, "This is great, and here are my recommendations for where you go."

We've got to gather a lot of that together, and the architecture forum, the members, will take a look at that and then figure out where the plan goes. I know that they're going to be working on more general things in the architecture space, as well as on TOGAF.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: The Open Group.

View more podcasts and resources from The Open Group's recent conferences and TOGAF 9 launch:

Live panel discussion on enterprise architecture trends

Deep dive into TOGAF 9 use benefits

Reporting on the TOGAF 9 launch

Panel discussion on security trends and needs

Panel discussion on cloud computing and enterprise architecture


Access the conference proceedings

General TOGAF 9 information

Introduction to TOGAF 9 whitepaper

Whitepaper on migrating from TOGAF 8.1.1 to version 9

TOGAF 9 certification information


TOGAF 9 Commercial Licensing program information

Citrix brings high definition to virtual desktops with XenDesktop 3 and HDX

Citrix Systems last week delivered a one-two punch in the battle for virtual desktop infrastructure (VDI) differentiation with the introduction of XenDesktop 3 and Citrix HDX high-definition technology, promising to lower costs associated with servers and storage in the data center.

XenDesktop 3, a key component of the Santa Clara, Calif., company's Citrix Delivery Center, now incorporates several of the HDX technologies, providing a richer multimedia experience for users and increasing the number of desktops per server. According to the announcement, the latest version of XenDesktop can host twice as many virtual desktops from a single server.

Also, the new version can deliver Microsoft Windows desktops from a common set of centrally managed images that can be run either as a hosted application in the data center or locally on a PC or thin-client device.

Another feature is the HDX media streaming capability, by which XenDesktop 3 sends compressed media streams to endpoint devices and plays them locally. This allows IT administrators to have applications run wherever it's most efficient and cost-effective.
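
For the curious, the underlying pattern is easy to sketch. This is my own hypothetical illustration of capability-based media redirection, not Citrix code: the broker checks whether the endpoint can decode a stream locally and, if so, sends the compressed stream instead of server-rendered screen updates.

```python
# Hypothetical sketch of client-side media redirection -- not Citrix's code.
def choose_playback(endpoint_codecs, stream_codec):
    """Decide where a media stream should be decoded.

    Redirecting the compressed stream to a capable endpoint offloads the
    server and usually improves playback; otherwise the server decodes
    frames and sends rendered screen updates instead.
    """
    if stream_codec in endpoint_codecs:
        return "endpoint"  # send the compressed stream; device plays it locally
    return "server"        # decode on the host; send display updates

# A thin client that can decode WMV but not H.264
print(choose_playback({"wmv", "mp3"}, "wmv"))   # -> endpoint
print(choose_playback({"wmv", "mp3"}, "h264"))  # -> server
```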

I'm still curious about Adobe Flash presentations via XenDesktop VDI. Desktone and Wyse have been working on that for some time. Wyse is also partnering with Citrix, but I didn't see any mention of Flash (perhaps in a genuflection to Microsoft?).

Management of VDI is also simplified in XenDesktop 3 with fully integrated profile management, which provides a consistent, personalized experience for each user every time they log in.

The new features include broad support for smart-card security, which moves virtual desktop capability into those markets -- government, financial services, and healthcare -- that rely on smart-card authentication.

Rounding out the new XenDesktop capabilities is USB plug-and-play capability for transparent support of all types of local devices, including digital cameras, smart phones, MP3 players, and scanners.

I'm a big fan of VDI and think it offers even more in a strapped economy. If netbooks are all the rage, then why not VDI too (or VDI on older PCs that can't run Vista or Windows 7 well)?

Also announced Wednesday was the HDX high-definition technology, which adds enhancements for multimedia, voice, video, and 3D graphics. It also includes “adaptive orchestration” technology that senses underlying capabilities in the data center, network and device, and dynamically optimizes performance across the end-to-end delivery system to fit each unique user scenario. This allows HDX-enabled products to leverage the latest user experience innovations developed by third-party software, server, device and processor partners.

Six categories of HDX technologies work together to provide multimedia capability. These include a broad range of new and existing technologies that extend throughout the Citrix Delivery Center product family.
  • HDX MediaStream – Accelerates multimedia performance by sending compressed streams to endpoints and playing them locally.
  • HDX RealTime – Enhances real-time communications using advanced bi-directional encoding and streaming technologies to ensure a no-compromise end-user experience.
  • HDX 3D – Optimizes the performance of everything from graphics-intensive 2D environments to advanced 3D geospatial applications using software and hardware based rendering in the datacenter and on the device.
  • HDX Plug-n-Play – Enables simple connectivity for all local devices in a virtualized environment, including USB, multi-monitor, printers and user-installed peripherals.
  • HDX Broadcast – Ensures reliable, high-performance acceleration of virtual desktops and applications over any network, including high-latency and low-bandwidth environments.
  • HDX IntelliCache – Optimizes performance and network utilization for multiple users by caching bandwidth intensive data and graphics throughout the infrastructure and transparently delivering them as needed from the most efficient location.
Citrix XenDesktop 3 will be generally available from authorized Citrix partners this month, and from the Citrix website at http://www.citrix.com/xendesktop. Suggested retail pricing begins at $75 per concurrent user.

Monday, February 2, 2009

Open Group debuts TOGAF 9, a free IT architecture framework milestone that allows easier ramp-up, clearer business benefits

As part of the 21st Enterprise Architecture Practitioners Conference here in San Diego this week, The Open Group has delivered TOGAF version 9, a significant upgrade to the enterprise IT architecture framework that adds modularity, business benefits, deeper support via the Architecture Development Method (ADM) for SOA and cloud, and a meta-model that makes managing IT and business resources easier and more coordinated.

One of my favorite sayings is: "Architecture is destiny." This is more true than ever, but the recession and complexity in enterprise IT departments make the discipline needed to approach IT from the architecture level even more daunting to muster and achieve. Oh, and slashed budgets have a challenging aspect of their own.

Yet, at the same time, more enterprise architects are being certified than ever. More qualified IT managers and planners are available for hire. And more dictates such as ITIL are making architecture central, where it belongs, not peripheral. The increased use of SOA, the beginnings of cloud use, and the need for pervasive security also augur well for enterprise architecture (EA) to blossom even in tough times.

TOGAF 9 aims to remove the daunting aspects of EA adoption while heightening both the IT and business value from achieving good methods for applying a defined IT architecture. With a free download and a new modular format to foster EA framework use from a variety of entry points, TOGAF 9 is designed to move. It also begins to form an uber EA framework by working well with other established EA frameworks, for a federated architectural framework benefit. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

I'll be blogging and creating some sponsored podcasts here in San Diego this week at the Enterprise Architecture Practitioners Conference, so look for updates on keynotes, panel discussions and interviews.

I'm especially interested in how architecture and the use of repositories help manage change. This may end up being the biggest financial and productivity payback for those approaching IT systemically, managed via policies and governance.

Well-structured EA repositories of both IT and business meta-model descriptions reduce complexity, add agility, and put organizations in a future-proof position. They can more readily accept and adapt to change -- both planned and unplanned. Highly unpredictable and dynamic business environments clearly benefit from an EA and repository approach.

TOGAF 9 is showing the maturity for much wider adoption. The Architecture Development Method (ADM) can be applied to SOA, security, cloud, hybrids, and federated services ecologies. Migration from earlier TOGAF versions is easy, as is a fresh start across multiple paths of EA elements. Indeed, TOGAF 9's modular structure now allows all kinds of organizations and cultures to adapt TOGAF in ways that suit specific situations and IT landscapes.

The Open Group is a vendor-neutral and technology-neutral consortium, and some 7,500 individuals are TOGAF certified. So far, 90,000 copies of the TOGAF framework have been downloaded from The Open Group’s website and more than 20,000 hard copies of the TOGAF series have been sold.

If architecture is destiny, then TOGAF is a philosophy for taking control of your IT destiny. Better for you to take control of your destiny than for it to take control of you, I always say.

Sunday, February 1, 2009

Progress Software's Actional Diagnostics gives developers better view into services integrity

Progress Software has leveraged technology from recently acquired Mindreef's SOAPscope to help detect and mend service integrity issues early in the software development cycle.

The Bedford, Mass., company last week announced the development of Progress Actional Diagnostics. This standalone quality and validation desktop product allows developers to build and test XML-based services, including SOAP, REST, and POX. Once services are identified for use, developers can inspect, invoke, test, and create simulations for prototyping and problem solving. [Disclosure: Progress is a sponsor of BriefingsDirect podcasts.]
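
For a sense of what "invoke, test" means in practice for a REST or POX service, here's a generic Python sketch -- my own illustration, not Actional Diagnostics itself; the URL and the expected fields are hypothetical.

```python
import urllib.request
import xml.etree.ElementTree as ET

def invoke_and_check(url, expected_status="OK"):
    # Invoke the service over HTTP and parse its XML payload
    with urllib.request.urlopen(url, timeout=10) as resp:
        assert resp.status == 200, f"HTTP {resp.status}"
        doc = ET.fromstring(resp.read())
    # Assert on a field the service is expected to return
    status = doc.findtext("status")
    assert status == expected_status, f"unexpected status: {status}"
    return doc

# doc = invoke_and_check("http://example.com/orders/123")
```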

Progress also last week held its annual analyst conference in Boston, and it was clear from the presentations that the plucky company is expanding its role and its value to the business solutions level.

As a major OEM provider to other software makers, the Progress brand has often played second fiddle to other brands in the Progress stable (some from acquisitions), such as Sonic, Actional, Apama, DataDirect, IONA, and FUSE. But the company is working to make Progress more identifiable, especially at a business level.

Progress, TIBCO, and Software AG are the remaining second-tier software infrastructure providers, following a decade-long acquisition spree and consolidation period.

As such, Progress, with annual revenues in the $500 million range, is also setting itself up to move from SOA and SaaS support to take those capabilities and solutions (and its OEM model) to the cloud model. Among a slew of glowing customer testimonials at the conference last week, EMC showed how a significant portion of its burgeoning cloud offerings is powered by Progress infrastructure products.

I think we can expect more love between EMC and Progress, as well as more Progress solutions (in modular, best-of-breed, or larger holistic deployments) finding a place under the hood of more cloud offerings. That will be doubly apparent as larger players like IBM, Oracle, and Microsoft create their own clouds. We're heading into some serious channel conflicts as these clouds compete in a rapidly fracturing market.

I was also impressed with the OSGi support that Progress is bringing to market, something that should appeal to many developers and architects alike.

Back on the product news, Actional Diagnostics includes a new feature called Application X-Ray, which allows developers to see what happens inside their service. For example, they can see how downstream services are being used, what messages are sent on queues, details of Enterprise JavaBean (EJB) invocations, database queries, and other relevant interdependencies along their transaction path.

This helps them identify why tests have failed or why services are not performing as designed, so that a service can be reengineered as needed before it moves to production.
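
The general technique -- recording each downstream call along a transaction path -- can be sketched generically. This is my illustration of simple call tracing, not the Application X-Ray implementation:

```python
import functools
import time

TRACE = []  # spans collected for one transaction

def traced(name):
    """Decorator that records each call's name and duration."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                TRACE.append((name, time.perf_counter() - start))
        return inner
    return wrap

@traced("db.query")
def fetch_order(order_id):
    time.sleep(0.01)  # stand-in for a real database query
    return {"id": order_id}

@traced("service.get_order")
def get_order(order_id):
    return fetch_order(order_id)  # downstream call shows up in the trace

get_order(123)
for name, secs in TRACE:
    print(f"{name}: {secs * 1000:.1f} ms")
```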

In addition, load checking lets users test the performance and scalability of services before they are delivered to a performance testing team. Developers can check dozens of simultaneous threads or users per service, monitor CPU utilization, and see how much of the Java VM is being used. These are the kinds of integrity backstops that will be in high demand in the cloud and for PaaS buildouts.
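
A minimal sketch of that kind of developer-side load check, assuming a hypothetical local endpoint (again, my own illustration, not the Actional Diagnostics feature):

```python
from concurrent.futures import ThreadPoolExecutor
import time
import urllib.request

URL = "http://localhost:8080/service"  # hypothetical endpoint under test

def one_call(_):
    # Time a single round trip to the service
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

def load_check(threads=25, calls=200):
    # Drive the service with N concurrent callers and report latency
    with ThreadPoolExecutor(max_workers=threads) as pool:
        latencies = sorted(pool.map(one_call, range(calls)))
    print(f"median {latencies[len(latencies) // 2] * 1000:.0f} ms, "
          f"p95 {latencies[int(len(latencies) * 0.95)] * 1000:.0f} ms")

# load_check()  # run against a live test endpoint
```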

Actional Diagnostics is currently in beta testing with customers and will soon be available as a free download. Developers interested in being sent an alert when the software download is available can register at: http://www.progress.com/web/global/alert-actional-diagnostics/index.ssp.

Wednesday, January 28, 2009

Visibility and control over API use is crucial as enterprises ramp to SaaS and cloud models

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: Sonoa Systems.

Read a full transcript of the discussion.

As established enterprise IT expectations meet up with cutting-edge cloud delivery models, there's a clear need for additional trust and maturity in order for enterprises to further adopt cloud-based services. Enterprise IT expectations for visibility, control, and security in software-as-a-service (SaaS) and cloud-based application delivery call for tools that manage application use and usage patterns for providers.

This podcast examines how one SaaS provider, Innotas, has developed a more mature view into services operations and application programming interfaces (APIs), and how it can extend the benefits from that visibility to its customers. We'll hear how Innotas, an on-demand project portfolio management (PPM) service, derives more analytics from network activity and thereby provides mounting confidence in how services are performing.

To better understand how Innotas has better managed services based on service level agreements (SLAs) monitoring, I recently interviewed Tim Madewell, vice president of operations at Innotas, as well as Chet Kapoor, CEO of Sonoa Systems.

Here are some excerpts:
Innotas is an on-demand PPM solution. We focus on IT organizations and provide software access via a standard Web browser for managing projects, as well as non-project work within an IT department. ... One of our differentiators was that being on-demand and multi-tenant from day one enabled us to be one of the early adopters in the SaaS world and in subscription-based software.

We have seen how the attitude around SaaS has matured and evolved. SaaS has become more standard and available, and as the technology has matured, especially around security, the acceptance level for SaaS has improved. One of the things that benefits us is our focus on IT. Typically, this type of change in acceptance for software starts within the IT organization itself.

To be a business application in a SaaS model today means that you have to step up and be enterprise class. We look at ourselves as an extension of all of our customers' internal IT and operations groups and we need to live up to those same standards. ... Once we get past the initial security challenges, folks are very interested and concerned about reliability and performance.

When [applications were] traditionally inside your four walls, there was a greater sense of control. As soon as you step into the cloud or go with any SaaS provider, part of the benefit and the value proposition is that they control it and manage it for you -- but you're giving up some control. Building that confidence and acceptance into the solution is important, and ties back to being enterprise class.

Sonoa helped me identify problems, or potential problems, earlier. When I turned up the ServiceNet product, it decoupled the traffic of my Web users -- my end users, the traditional users -- from my back end and from my API.

That visibility gave me some input into when my servers were getting hot or heating up. I was seeing a lot of activity and started to differentiate if this activity was generated through the front end or through the back end.

So, my immediate return was to give my operations team a solution and a tool that gives them better visibility and then to control some of that traffic on the back-end. ... With this visibility I'm able to put in some controls that will give me the ability to look at how I make more and better use of the capacity that I have today.

You always start by wanting to see the needle, because you can't move the needle if you don't see it. ... I want to know who is using my service, what are they using it for, how long are they using it, things like that. You have to have visibility into the services you provide.

The next thing you say is, "Okay, now that I have visibility, I want to start putting in some security access control." ... And you want to start by saying, "I want to give priority access to priority customers." ... And, they want it to be available at a scale where all their customers are getting it.
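
What "see the needle, then add controls" can look like is easy to sketch. This hypothetical Python fragment -- my illustration, not Sonoa's ServiceNet -- logs each consumer's API calls for visibility, then throttles by tier so priority customers get priority capacity:

```python
import time
from collections import defaultdict

TIER_LIMITS = {"priority": 600, "standard": 60}  # requests/minute, hypothetical
usage = defaultdict(list)  # api_key -> timestamps of recent requests

def allow_request(api_key, tier):
    """Log the call for visibility, then enforce the tier's rate limit."""
    now = time.time()
    recent = [t for t in usage[api_key] if now - t < 60.0]
    usage[api_key] = recent
    if len(recent) >= TIER_LIMITS[tier]:
        return False  # over budget: shed or queue this call
    usage[api_key].append(now)
    return True

def report():
    # Visibility: who is using the service, and how much
    for key, stamps in usage.items():
        print(f"{key}: {len(stamps)} calls in the last minute")
```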

We've been working with companies like Innotas to get them through this evolution. Some customers choose to get our technology in the form of appliances. Some of them do it in the form of software, as Tim has. And some of our customers are choosing to get our technology right in the cloud itself, where they do not have any data center whatsoever.

The easier we can make it for enterprises to access the information for their composite applications through APIs, the more successful companies like Innotas are, and there is more adoption. IT and enterprises end up saving money.

We're very familiar with the different user types in an application. You may have view-only users, standard users, or power users. We can take the same view on the back end with Web services. There are certainly different levels of users, or different levels of service you could provide for users, depending on their needs. ... Now, I've got the ability to take a look at offering some tiered services, or tailoring my back-end user type and then tying that to my revenue model.

[Enterprise] customers will write applications or custom applications, where they probably want to use Oracle or SAP inside the firewall and maybe have another custom application of some sort, Innotas or Salesforce.com or whatever -- outside. They want to write a composite application, a mashup, or whatever you decide to call it, and they want all these different services.

A critical need that we find is that customers start to get nervous. It's not so much with the Innotases of the world, because they are fairly secure. They run like an enterprise application, but it’s available in the cloud. It happens when you start using things like Amazon Elastic Compute Cloud (EC2), and people are starting to put custom applications there. ... They probably do it in a very hybrid model because I don’t think on-premise computing is going away.

What we're finding is that there is a need for a way to govern what goes on outside the enterprise. Govern could be a fairly heavy word, so let me be more specific. You want to have visibility into, for example, how many accounts you have at EC2. ... They want to have some visibility into what is happening with the cloud. Then, as they get more visibility, they want to see if they are paying extra for SLAs and how the SLAs are being mapped.

The second aspect of this is that it's probably a new revenue stream for Web 2.0 and SaaS companies, as well as enterprises. They've maximized or have worked very hard on their channels, whether user access or a browser-based channel. Now, they have an opportunity to go after a different set of folks who are trying to not just go off and use Innotas through a browser or Salesforce.com through a browser.

If you really think about the person who is doing a mashup, every consumer is probably going to be a provider at some point, and every provider is going to be a consumer at some point. ... [We] have been working on taking what Sonoa provides with the ServiceNet product and making it available as a service. We have some customers that are already going into production. It's something that we will start talking about in the very near future.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: Sonoa Systems.

Monday, January 26, 2009

BriefingsDirect analysts discuss Service Oriented Communications, debate how dead SOA really is

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Charter Sponsor: Active Endpoints. Additional underwriting by TIBCO Software.

Read a full transcript of the discussion.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Welcome to the latest BriefingsDirect Analyst Insights Edition, Vol. 36, a periodic discussion and dissection of software, services, Service Oriented Architecture (SOA) and compute cloud-related news and events with a panel of IT analysts.

In this episode, recorded Jan. 12, 2009, our guests examine what might keep SOA alive and vibrant -- the ability for the architectural approach to grow inclusive of service types like service-oriented communications (SOC).

We also visit the purported demise of large-scale SOA to calibrate the life span of SOA -- is it dead or alive?

Please join noted IT industry analysts and experts Todd Landry, vice president of NEC Sphere; Jim Kobielus, senior analyst at Forrester Research; Tony Baer, senior analyst at Ovum; Joe McKendrick, independent analyst and prolific blogger; Dave Linthicum, founder of Blue Mountain Labs; JP Morgenthal, senior analyst at Burton Group; and Anne Thomas Manes, vice president and research director at Burton Group.

Our discussion is hosted and moderated by BriefingsDirect's Dana Gardner.

Here are some excerpts:
Taking pulse of SOA ...

Manes: Certainly, lots of people have refuted my claim [that large-scale SOA is dead]. At the same time, I've had at least as many people, and probably more, tell me I am dead-on right. My goal with the blog post was to at least get the conversation going, and I think I managed to do that effectively.

I still believe that if you go before a funding board this year -- if you are an IT group and you are trying to get funding for some projects -- and you go forward with a proposal that says we need to do SOA, because SOA is good, it’s going to get shot down. Instead, what you have to go forward with is very specific value-add projects that say we need to do this, we need to do that, and we need to do that.

You need to talk about what services you're going to provide. In the example of communications services, there's a really strong value proposition associated with creating communications services. Likewise, you can go forward with a request that says, "We need to build a billing service which replaces the 27 different billing capabilities that we have in each of our product applications out there."

That’s a very strong, financially rich, good ROI type of proposal that’s going to win. But, it's not going to work, if you go forward and just say, "Oh, we need to go get an ESB. We need to go get some registry and repository technologies. We need to invest in all the SOA infrastructure. We need to do SOA just because SOA is what everybody is telling me we need to do."

Just talk about the services and talk about the practices that are going to help improve the architecture of your systems. Talk about doing application rationalization and talk about reducing the redundancy within your environment.

Talk about dismantling the 47 data warehouses you have that contain customer information, and create a set of data services instead that actually gives you richer, cleaner, and more complete information about your customers. Those are the things that are going to win.

... One of my favorite things that came back from the blog post was the number of people who said, "Basically, we just really suck at doing architecture."

One of the primary reasons that a lot of SOA initiatives are failing is because people don’t actually do the architecture. Instead, what they do is service-oriented integration, as opposed to SOA. If you're truly doing architecture, then you're doing an analysis of your applications architecture, figuring out why you have so much extra garbage in your environment, and figuring out what you should actually start to get rid of.

... The folks who have a little more architectural maturity recognize the value of taking this opportunity, when lots and lots of projects are no longer going forward. They can say, "Well, now is a great time for us to start focusing on architecture and figure out how we can position ourselves to take advantage of the economy, when it does finally turn around."

Baer: I think what Anne is saying right now is that organizations that did get ahead of the curve with SOA, that thoughtfully began the architecture process and rationalized it, will go ahead, because there will be real economies at some point compared to traditional application development.

McKendrick: I've always said that the companies that have gravitated toward SOA are the companies that will probably do well anyway. Those are the companies with more visionary management and more tightly integrated approaches to business. Those are the companies we've seen in all the case studies over recent years that have gravitated toward SOA. Let's face it, if they didn't have SOA, they probably would have been doing okay anyway, because they're well-managed companies.

The companies that really could have used SOA -- the companies not likely to be adopting SOA, or not likely to be looking at SOA, as Anne and Tony discussed -- are the hunker-down companies, the companies that have fairly unsatisfactory architectures or no architectural approach at all.

Linthicum: There are companies out there that have some very good IT talent, and they can take SOA, WOA, or cloud computing, look at the business problems, make some very nice systems, and automate the business nicely.

However, the majority of people out there who are wrestling with architecture are ill-equipped to solve some of the issues. They have a tendency to focus on the wrong areas. Anne hit this in her blog as well. It was brilliant.

When it's a matter of, "Let's do quick tactical things," while looking at this as a big systemic issue to solve, it just becomes too big, too complex. They try to solve it with things that are too tactical and just don't have enough value. There are no free lunches with SOA, or any kind of architectural approach, or anything we have to improve the business.

You're going to have to break things down to their functional primitives and build them up again. You're going to have to think long and hard about how your architecture relates and links back to the business, and how that's going to work.

I wish there were something you could buy in a box or something you could download or some cloud you can connect to, but at the end of the day it’s the talent of the people who are doing the job. That’s where people have been falling down. Over and over again, in the last three years, we have identified this. I don’t think anybody has taken steps to improve it. In fact, I think it’s gotten worse.

Kobielus: We all know the real-world implementation problems with SOA, the way it’s been developed and presented and discussed in the industry. The core of it is services. As Anne indicated, services are the unit of governance that SOA begot.

We all now focus on services. Now, we’re moving into the world of cloud computing and you know what? A nebulous environment has gotten even more nebulous. The issues with governance, services, and the cloud -- everything is a service in the cloud. So, how do you govern everything? How do you federate public and private clouds? How do you control mashups and so forth? How do you deal with the issues like virtual machine sprawl?

The range of issues now being thrown into the big SOA hopper under the cloud paradigm is just growing, and the fatigue is going to grow, and the disillusionment is going to grow with the very concept of SOA.

Manes: My core recommendation is to think big and take small steps.

You need to do the planning, and your architecture team should be able to do that, without having to go get permission from your funding organization to do planning, because that’s what they’re supposed to be doing. But then, they have to identify quick, short, tactical projects that will actually deliver value.

That's what they should do, and are designed to do, to improve the architecture as a whole. It can't be just, "Oh, I have to integrate this system with that system." They really should be focusing on identifying projects that will, in fact, improve the architecture. In that way, you'll be in a better position when things are over.

How service oriented communications has evolved ...


Landry: ... On any given day in a business, do people care about doing the mashup or do they care about having their business be more effective, especially in these times? We believe that people will continue to look for more efficiency in their IT infrastructure. They'll continue to look for how people can be more connected, not only internally but with their customers. At the end of the day, you're right. It’s really about how people get more interconnected with the business process.

... If you look at any implementation and then what happens in the business, the real connective tissue between all of these includes people. The decisions and actions that take place in a business on a day-to-day basis are highly dependent on these people being effective.

Therefore, the manner in which we can help them with their communications and help them collaborate becomes a critical factor in how the workflows can be more effective and more efficient. We've looked at that and said the more you can make communications into business applications, the more you can make communications a more natural part of an SOA.

... [We] had to communicate to the industry the concept of how communications integrates into frameworks in the IT infrastructure. SOA is one term still used out there to define an approach. When we built our communications platform, we opened up all its services in a manner that we believe fits very naturally into the concept of an SOA. Therefore, our communications platform is really more service oriented than it is a closed, proprietary, traditional PBX-oriented system.

... The idea of being able to click-to-call has been around for quite some time. With more recent technologies, mashing up directory listings and mashing up a call function inside a business application is much more achievable, and can be done in a much easier manner than in the past.

Baer: The idea of being able to manage and integrate spoken communications may actually be a critical gap in compliance strategy. I could see that as being an incredible justification for trying to integrate voice communications. Another instance would be with any type of real-time supply chain or with trading.

Kobielus: I see SOC as very much an important extension of SOA or an application of SOA, where the service that you're trying to maximize, share, and use is the intelligence that’s in people’s heads -- the people in the organization, in your team. You have various ways in which you can get access to that knowledge and intelligence, one of which is by tapping into a common social networking environment.

In the consumer sphere, the intelligence you want to gain access to resides in mobile assets -- human beings on the run. Human beings have various devices and applications through which they can get access to all manner of content, and through which they can get access to each other.

So, in the consumer world, a lot of the SOC value proposition is in how it supports social networking. Facebook-style environments provide an ever more service-oriented setting within which people can mash up not only their presence and profiles, but all of the content that human beings generate on the fly. Possibly, they can tag on the fly as well, and that might be relevant to other people.

There is strong potential for SOC in that consumer, Facebook-style paradigm of sharing everybody's user-generated content developed on the fly.

Linthicum: ... The fact of the matter is that people are just getting their arms around exactly what a service is and how you take multiple services and turn them in solutions. ... If you're going to take services like this, expose them as services, and make easier use of them ... then you have to create the integration yourself through very disparate mechanisms and things like that. People are always struggling, trying to figure how to aggregate this [SOC] stuff and its solutions.

Morgenthal: I'd been working with a number of companies who had warehouse issues, and we were basically normalizing those issues by instituting a new services architecture and layering that on top of that legacy system, so they could build their business processes.

One of the biggest issues was how they were communicating exceptions that were happening in the warehouse, because the devices were limited to scanners and text in a very noisy environment. Everyone agreed that the best communications tool in that environment was their cell phone, because it vibrated. Well, the BlackBerry now has vibration too. So, that's also a valid form of communication.

If you tie this, as a unified communications strategy, to the business process, it's very effective -- and not only is it very effective ... We expect things in microseconds, so it's raising the expectations of people in general. But still, I think overall productivity goes up tremendously, and we move much more effectively toward a real-time event architecture across communications, systems, and people. It's really fascinating to watch, and it's very effective.

Manes: When we're talking about communications services, you want to make sure that those services are very easy to access. With communications services, when you start looking inside PBXs, voice over IP, and those kinds of things, that's arcane and completely outside the realm of the normal development skills you would find in a Web developer.

Now, we do have some nice capabilities, like click-to-call, and those are set up as drop-in components that people can now use inside their Web applications. Wouldn't it be nice if we actually had a much more powerful communications service that a developer could use to communicate with a customer, a shop manager, or whomever at this point in the application?

They can call out to a communications service and specify, "Here is who I want to talk to. Here is the information I want to send. And here is the method through which I want to send it." Then they can have the communications service completely take care of the whole process associated with making that work.

I can guarantee that a developer is going to choose that over, "Oh, I have to write all kinds of arcane code in order to figure out how to send an email or how to launch a phone call." So, building these services that simplify a very complex process is extremely valuable from a productivity perspective.
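
The facade Manes describes -- who, what, and how -- is simple to sketch. All names here are hypothetical, and the channel functions are stand-ins for real delivery code:

```python
def send_email(to, body):  # stand-in for real SMTP or gateway code
    print(f"email -> {to}: {body}")

def send_sms(to, body):    # stand-in for a real SMS gateway call
    print(f"sms -> {to}: {body}")

CHANNELS = {"email": send_email, "sms": send_sms}

def communicate(recipient, message, method):
    """Single entry point: the caller names who, what, and how; the
    service hides the arcane delivery details."""
    if method not in CHANNELS:
        raise ValueError(f"unsupported channel: {method}")
    CHANNELS[method](recipient, message)

# The application developer's entire integration burden:
communicate("shop-manager@example.com", "Order 42 is delayed", "email")
```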

Landry: ... There's another piece of this that says these platforms are bringing together multiple forms of media, so that you can utilize text messaging, audio, or video communications. You can do screen-sharing data collaboration in a simpler and more consistent fashion, and you can utilize one set of services to do that.

Whether they're deployed as a cloud and the enterprise is using those services from within a cloud or whether they've made the decision to do them on premises, both are very viable and, in many cases, both are being done today.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Charter Sponsor: Active Endpoints. Additional underwriting by TIBCO Software.

Read a full transcript of the discussion.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Thursday, January 22, 2009

Case study: IT repositories help Wachovia manage change amid complex bank consolidation

Disclaimer: The views expressed in the following are not necessarily those of Wells Fargo & Co. or any of its subsidiaries or affiliates.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Read a full transcript of the case study discussion.

When large businesses need to change fast, their IT systems need to do more than keep up with change -- they need to manage, define, and secure it. IT repositories are now effectively orchestrating multiple enterprise systems of record that must quickly operate together as a result of massive mergers and acquisitions.

Using such repositories, or groups of repositories, IT and business assets can be quickly federated and integrated for business process alignment and consolidation. Furthermore, these processes can be managed centrally via policies and governance definitions, even across far-flung global operations.

To better understand the value and opportunity in using IT repositories to manage change in complex business environments, I recently discussed a case study at Wachovia, which is now merging with Wells Fargo. To help understand the role of repositories amid this merger, I spoke with Harry Karr and Hemesh Yadav, both IT architects at Wachovia.

Here are some excerpts:
A repository solution has more than one physical repository, and each one has certain specific information or a slice of the data. All together, it gives us a good enterprise solution for a repository and gives us a picture of what we have.

We have more distributed systems now. We have services being offered by a half dozen or a dozen different service containers. We have many different clients hitting those services. We have many more pieces to the puzzle than we had before, and they're all owned by different people, different groups, and different teams.

Keeping up with that is much harder than it used to be with a single monolithic type of application, as in knowing where the touch points are, what the integration needs are, and where the security mechanisms are applied. There are a lot of things you have to know between the applications.

If something isn't written down, you've lost it. It's not going to be there. What we need to do is make sure that we have a record of what's there, so that anybody in the bank can go back and look and say, "We have this at this point, and these are the touch points involved, this is the security, and these are the access requirements." Anything they need to know about those touch points can be known from that repository solution.

The hardest part is keeping track of what we have, especially in times of mergers and acquisitions, but also at any other time. When we are trying to add new functionality, the first thing you have to know is what you have in place. So, keeping that up to date, knowing what we have is probably the biggest challenge.

There's no value at all in putting information in a repository. The value is when we get the information out, and in order to get it out, you have to be able to query it. Having it in with a consistent taxonomy and consistent metadata is the only way that you can get the information back out again.
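
A toy sketch shows why consistent metadata makes a repository queryable. This is my own illustration, with hypothetical field names, not Wachovia's tooling:

```python
# Every asset carries the same metadata fields (a consistent taxonomy),
# so any team can query the repository the same way. Field names are invented.
repository = [
    {"name": "getCustomer", "type": "service", "owner": "crm-team",
     "touch_points": ["billing", "portal"], "security": "mutual-TLS"},
    {"name": "billing-db", "type": "system-of-record", "owner": "finance",
     "touch_points": ["getCustomer"], "security": "encrypted-at-rest"},
]

def query(**criteria):
    """Return every asset whose metadata matches all of the criteria."""
    return [a for a in repository
            if all(a.get(k) == v for k, v in criteria.items())]

# Which services do we have, what do they touch, and how are they secured?
for asset in query(type="service"):
    print(asset["name"], asset["touch_points"], asset["security"])
```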

In production and troubleshooting ... you need to know what changes have happened. What's going on with that application? What's changed since the last time it was running properly? Without all that tie-in from all those different repositories, you lose track of what you have, and that tie-in helps at every single lifecycle stage. ... Testing needs to match the business requirements. If those requirements are not in a repository, are they being handed over in a notebook somewhere? Where do they exist? A repository helps a great deal there.

It's important to look at the whole picture. They need to look at what's important between all the different repositories. You need to have some way of storing your business-process model. That includes business rules, services, information about your systems of record, information about the data, contracts, who's using what, requirements for change management, SLA management, problem management, organizational structure, and process flows.

All those different repositories need to have touch points. Mapping that out ahead of time will give you an idea of what to do with any one of those, as you put each one in place.

[The repository solution] is going to have a lot of benefits. If you can make the business case for governance of any sort, then the repository goes hand in hand with that governance -- being able to track what you are doing, your processes, everything involved. The repository is a key piece of the governance. I don't think that anybody would disagree that governance has a great business case behind it, and the repository is part of that governance model.

Everybody talks about alignment between IT and business. The repository is the key piece of that. In order to have some kind of alignment, you have to have visibility, and the repository gives you that visibility.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Disclaimer: The views expressed in the podcast are not necessarily those of Wells Fargo & Co. or any of its subsidiaries or affiliates.

Wednesday, January 21, 2009

Services consumers and developers must now mount pressure for cloud computing neutrality

Sure, most people instantly get the need for network neutrality, but what about cloud neutrality?

Just as we'd be loath to tolerate any one (often the only available) Internet provider qualitatively managing our traffic and packet use based on its singular business objectives, we should also be concerned about any cloud provider exerting too much influence or setting de facto standards early on that diminish the cloud services market as a whole.

Now, the Obama Administration has enough on its plate, so I'm not advocating any regulatory or commerce enforcement policies to define or advocate cloud neutrality. But I do think it's important to foster an open market and encourage early adopters -- especially developers and independent cloud services providers -- to vote mindfully with their participation (and dollars) to establish and nurture broad openness and interoperability practices among the burgeoning cloud entities on the Internet.

If an open Internet has been good for sustained productivity and innovation, which few refute, why wouldn't cloud services also benefit from an open market environment -- at least through a formative stage (or two)? Wouldn't what's good for the popularity of the pipes also be good for promoting the widest consumption of the water?

Let's still favor the advance of general productivity on the Internet over more narrow commercial interests, even as we enter the cloud services phase of the Internet, eh?

Shouldn't a network infrastructure often described as "public" -- hence the common icon of the Internet as a puffy cloud -- become the substrate for an intensely fertile marketplace, and not just a handy subsidy for any number of, albeit competing, roach motels? The best form of competition comes from the hotels competing, but also from low barriers to entry and exit over a long period of time. Choice is essential, not just among vendors but in how those vendors behave as a group.

The cloud services marketplace is not just a new Monopoly board in the sky, it's still the product of the World Wide Web. If you have to go through the cloud to reach the services, then the services themselves are a product of the cloud, and not the other way around.

Things in the nascent cloud services ecology are moving rapidly, so now's the time to set the proper perspective on what works best for the buyers and users of cloud services, as well as the commercial interests setting up shop along the Information Superhighway. Remember that metaphor for the Internet? I think we should think of a cloud superhighway in the same way. Now it carries more than information, but it's just as important, or more so, to the public good. There's a public interest in seeing this succeed for the highway travelers, which include big businesses, as well as for those few building the toll booths.

There are some dramatic recent developments that point to how rapidly cloud things are shaping up:
  • IBM's Lotus brand is bringing a lot of what we know as Notes/Domino services, a longtime enterprise groupware leader, to cloud-based delivery. Think of it as a big nest of .nsfs in the ether (and that's data up there, folks!).
  • Engine Yard's Vertebra has extended cloud neutrality into its Ruby and Rails development and deployment solutions. Write anywhere, run anywhere, change anywhere, integrate anywhere ... repeat.
  • Sun Microsystems buys Q-Layer. Let's hope that Sun gets cloud "open" from the start this time, unlike the 12-year "Java will be open someday" saga (and keep those license fees coming).
There have been warnings about a potential and troubling lack of choice in cloud options, notably from Richard Stallman. And there have been major movements by vendors not known for their allegiance to openness first and profits later, including Apple and Microsoft, into the cloud model.

So even though things are moving fast and at the most impactful levels of the global IT business, there's very little being said and done about preserving the neutrality of the Internet economy for the cloud economy. And I know it's hard to actually define neutrality. But like pornography, I know it when I see it.

Better yet, I know non-neutrality when I see it. We should all be on the lookout for non-neutrality in the cloud ecologies, and seek and reward alternatives. Blog about these distinctions. Look to the decades-old Internet example for guidance. It really worked and keeps working.

That does not mean in any way outlawing good old-fashioned capitalism in the cloud ecosystem. It means making savvy choices that favor data portability, and recognizing that APIs that carry over from one hosting provider to another make for good market drivers, enticing more consumers who can exercise more choice. The pie needs to grow first; the market leaders can seek domination later, once the playing field is established and perhaps somewhat level.

Enterprises, and small to medium-sized businesses especially, should advance their long-term interests as they examine and adopt cloud-based services, to make sure they are not trading short-term savings in a recession for long-term IT lock-in. Once you're in the roach motel, you can't get out. And they can raise the rent (maintenance fees) to just below your cost of exercising painful choice for a long time. You may be familiar with this IT supplier dynamic.

There is a better path, and we've seen it with the Web: a modest, market-driven level of mutually beneficial interoperability of services and applications, data portability in its deepest forms, and SLAs that clearly spell out the terms of engagement and what is acceptable in terms of services and data ownership.

These cloud terms of engagement will be tough and complex. We're in some uncharted territory here. Can you own a business process even if the cloud provider owns the constituent services? Yes, I believe you can, and should. Get it in writing, though.

So more than any regulations or broad policy dictates on the best practices for cloud computing, we need good licenses and a clear and understood framework for cloud ecology best practices that protects the users and developers, as well as the providers. The goal is to make strong enticements for all the participants in the ecology, not just a few or in a grossly inequitable way. We'll need escape clauses, too, just in case.

Indeed, the value and probity of cloud use licenses must be weighed against the total IT cost equation, including the cost of switching and the costs of integration. That is, if I get cloud services cheap, how much will that cost me in the long run? And does this become a better deal than the traditional on-premises, per-processor, or per-application licensing models?

In short, we need the ability to calculate the cost-benefit analysis of modern IT that includes the new cloud computing options. And therefore we need to know the true costs of cloud computing -- including how open it really is -- to proceed. The more open, the less risk, and so the more overall adoption based on an understood cost-benefit projection.
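
Even a back-of-the-envelope version of that calculation makes the point. All figures below are invented for illustration:

```python
# Hypothetical numbers: compare three-year costs of cloud vs. on-premises,
# including integration and the cost of switching away later (lock-in risk).
years = 3
cloud_annual = 40_000       # subscription fees
cloud_integration = 15_000  # one-time
cloud_exit = 50_000         # estimated cost to migrate off; higher if closed

onprem_annual = 25_000      # maintenance, power, administration
onprem_capex = 90_000       # servers and licenses up front

cloud_total = cloud_annual * years + cloud_integration + cloud_exit
onprem_total = onprem_annual * years + onprem_capex

print(f"cloud: ${cloud_total:,}  on-prem: ${onprem_total:,}")
# cloud: $185,000  on-prem: $165,000 -- the "cheap" option can lose once
# switching costs are priced in; openness lowers cloud_exit and flips it back.
```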

Let's look at cloud services as hugely promising, perhaps the best alternative for IT resources and support for a number of application types and certain business use cases. But let's not get lulled into treating a cloud provider relationship any differently from any other business deal. Let's get the terms down, and vote well as consumers. It's in the best interest of the vendors, too; they just can't do this without us. Literally.

Let's leverage the fact that the Internet has set a powerful and essential precedent, one that upholds and protects an online market's open development as fundamentally more important than any one company's ability to stake out a claim and hoard all the gold dust. Open markets are the best way to allow the miners, prospectors, shovel sellers, and real estate interests all to grow and prosper. And openness will allow the cloud market to reach its full potential fast, through unfettered innovation from all quarters.

Like with the Web and Internet over the past 15 years, the power of choice and unfettered innovation and dynamism of sufficiently neutral cloud markets should be the best guide of how the cloud future shakes out productively. In this economy we really need a new and huge productivity boost from IT lest we all get pulled into the downward spiral.