Wednesday, August 12, 2009

Cloud computing proves a natural for offloading time-consuming test and development processes

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download or view the transcript. Learn more. Sponsor: Electric Cloud.

Our latest podcast discussion centers on using cloud computing technologies and models to improve the test and development stages of application creation and refinement. One area of cloud computing that has really taken off and generated a lot of interest is the development, test, and performance proofing of applications -- all from an elastic cloud services fabric.

The build and test phases of development have traditionally proven complex, expensive, and inefficient. Periodic bursts of demand on runtime and build resources are the norm. A cloud approach accommodates those demand bursts better through dynamic resource pooling and provisioning.

We've seen this done internally for development projects and now we're starting to see it applied increasingly to external cloud resource providers like Amazon Web Services. And Microsoft is getting into the act too.

To help explain the benefits of cloud models for development services and how to begin experimenting and leveraging external and internal clouds -- perhaps in combination -- for test resource demand and efficiency, I recently interviewed Martin Van Ryswyk, vice president of engineering at Electric Cloud, and Mike Maciag, CEO at Electric Cloud.

Here are some excerpts:
Van Ryswyk: Folks have always wanted their builds to be fast and organized and to be done with as little hardware as possible. We've always struggled to get enough resources applied to the build process.

One of the big changes is that folks like Amazon have come along and really made this accessible to a much wider set of build teams. The dev and test problem really lends itself to what's been provided by these new cloud players.

Maciag: The traditional approaches of the overnight build, or even what people refer to as continuous integration, have fallen short, because they find problems too late. The best world is where engineers or developers find problems before they even check in their code and go to a preflight model, where they can run builds and tests on production-class systems before checking code in to the source code control system.
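The preflight model Maciag describes can be sketched as a simple gate: run the build and test steps against a developer's working copy, and allow the check-in only if every step passes. The sketch below is an illustrative stand-in, not Electric Cloud's actual tooling; the step names and commands are hypothetical.

```python
import subprocess

def preflight(steps):
    """Run named build/test commands in order, stopping at the first failure.

    steps: list of (name, command-list) pairs, e.g. a compile step followed
    by unit and integration tests. Returns (ok, results), where results maps
    each step that actually ran to its exit code. Only when ok is True
    should the change proceed into the source code control system.
    """
    results = {}
    for name, cmd in steps:
        proc = subprocess.run(cmd, capture_output=True)
        results[name] = proc.returncode
        if proc.returncode != 0:  # fail fast: later steps are skipped
            return False, results
    return True, results
```

In a real preflight setup the commands would run on production-class machines provisioned on demand, with results gathered back before the commit is allowed to proceed.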

Van Ryswyk: At a certain point, you just want it to happen like a factory. You want to be able to have builds run automatically. That's what ElectricCommander does. It orchestrates that whole process, tying in all the different tools, the software configuration management (SCM) tools, defect tracking tools, reporting tools, and artifact management -- all of that -- to make it happen automatically.

And that's really where the cloud part comes in. ... Then, you're bringing it all back together for a cohesive end report, which says, "Yes, the build worked." ElectricCommander was already allowing customers to manage the heterogeneity on physical machines and virtual machines (VMs). With some integrations we've added you can now extend that into the cloud.

There will be times when you need a physical machine, there will be times when your virtual environment is right, and there will be times when the cloud environment is right. ... We may not want to put our source code out in the cloud, but we can use 500 machines for a few hours to do some load, performance, or user interface testing. That's a perfect model for us.

... When you have these short duration storms of activity that sometimes require hundreds and hundreds of computers to do the kind of testing you want to do, you can rent it, and just use what you need. Then, as soon as you're done with your test storm, it goes away and you're back to the baseline of what you use on average.
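The economics of these test storms are easy to see with a back-of-the-envelope comparison. The rates below are made-up placeholders, not Amazon's actual pricing; the point is only the shape of the trade-off between owning peak capacity year-round and renting it for short bursts.

```python
def owned_cost(peak_machines, cost_per_machine_year):
    """Own enough hardware for the worst-case test storm, all year round."""
    return peak_machines * cost_per_machine_year

def rented_cost(baseline_machines, cost_per_machine_year,
                storms, storm_machines, storm_hours, rate_per_machine_hour):
    """Keep a small baseline in-house and rent the burst capacity per storm."""
    burst = storms * storm_machines * storm_hours * rate_per_machine_hour
    return baseline_machines * cost_per_machine_year + burst

# Hypothetical numbers: 500-machine storms, 12 times a year, 4 hours each.
own = owned_cost(500, 1000)                     # 500 machines owned all year
rent = rented_cost(20, 1000, 12, 500, 4, 0.10)  # 20-machine baseline + bursts
```

Even with generous assumptions for the hourly rental rate, paying only for the storm hours comes out far cheaper than provisioning for the peak.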

VMware fleshes out its cloud computing support model with SpringSource grab

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum. His profile is here. You can reach him here.

VMware’s proposed $362 million acquisition of SpringSource is all about getting serious in competing with Salesforce.com and Google App Engine as the Platform-as-a-Service (PaaS) cloud with the technology that everybody already uses.

This acquisition was a means to an end, pairing two companies that could not be less alike. VMware is a household name, sells software through traditional commercial licenses, and markets to IT operations. SpringSource is a grassroots, open source, developer-oriented firm whose business is a cottage industry by comparison. The cloud brought together two companies that each faced complementary limitations on their growth: VMware needed to grow out beyond its hardware virtualization niche if it was to regain its groove, while SpringSource needed to grow up and find deeper pockets to become anything more than a popular niche player.

The fact is that providing a virtualization engine, even if you pad it with management utilities that act like an operating system, is still a raw cloud with little pull unless you go higher up in the stack. Raw clouds appeal only to vendors that resell capacity, or to large enterprises with the deep benches of infrastructure expertise to run their own virtual environments. For the rest of us, we need a player that provides a deployment environment, handles the plumbing, and is married to a development environment. That is what Salesforce’s Force.com and Google’s App Engine are all about. VMware’s gambit is in a way very similar to Microsoft’s Software + Services strategy: use the software and platforms that you are already used to, rather than some new environment, in a cloud setting. There’s nothing more familiar to large IT environments than VMware’s ESX virtualization engine, and in the Java community, there’s nothing more familiar than the Spring framework, which -- according to the company -- accounts for roughly half of all Java installations.

With roughly $60 million in stock options for SpringSource’s 150-person staff, VMware is intent on keeping the people, as it knows nothing about the Java virtualization business. Normally, we’d question a deal like this because the companies are so dissimilar. But the fact that they are complementary pieces of a PaaS offering gives the combination stickiness.

For instance, VMware’s vSphere cloud management environment (in a fit of bravado, VMware calls it a cloud OS) can understand the resource consumption of VM containers; with SpringSource, it gets to peer inside the black box and understand why those containers are hogging resources. That provides more flexibility and smarts for optimizing virtualization strategies, and can help cloud customers answer the question: do we need to spin out more VMs, perform some load balancing, or re-apportion all those Spring TC (Tomcat) servlet containers?
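The decision being described -- add VMs, rebalance, or leave things alone -- can be caricatured as a rule over per-container utilization. This is only a toy illustration of why application-level visibility matters, not how vSphere or SpringSource tooling actually decides; the metric and thresholds are invented.

```python
def recommend(utilizations):
    """Suggest a coarse action from per-VM CPU utilization (0-1) of the
    servlet containers each VM hosts.

    Hypervisor-level numbers show only *that* hosts are busy; seeing inside
    the containers lets you tell uniform overload (scale out) apart from
    skew (rebalance the load instead of renting more capacity).
    """
    avg = sum(utilizations) / len(utilizations)
    spread = max(utilizations) - min(utilizations)
    if avg > 0.8:
        return "spin out more VMs"        # everything is saturated
    if spread > 0.4:
        return "load-balance across VMs"  # a few hot spots, others idle
    return "leave as is"
```

The point of the sketch is the middle branch: without container-level insight, a skewed workload looks like generic load and tends to get answered with more VMs.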

The addition of SpringSource also complements VMware’s cloud portfolio in other ways. In his blog about the deal, SpringSource CEO Rod Johnson noted that the idea of pairing the Spring framework with VMware’s Lab Manager (that’s the test lab automation piece that VMware picked up through the Akimbi acquisition) proved highly popular with Spring framework customers. In actuality, if you extend Lab Manager from simply spinning out images of testbeds to spinning out runtime containers, you would have VMware’s answer to IBM’s recently introduced WebSphere CloudBurst appliance.

VMware isn’t finished, however. The most glaring omission is the need for distributed Java object caching to provide yet another avenue to scalability. If you rely only on spinning out more VMs, you get a highly rigid, one-dimensional cloud that will not provide the economies of scale and flexibility that clouds are supposed to deliver. So we wouldn’t be surprised if GigaSpaces or Terracotta were next in VMware’s acquisition plans.


Monday, August 10, 2009

BriefingsDirect analysts debate the 'imminent death' of enterprise IT as cloud models ascend

Download or view the transcript. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Welcome to the latest BriefingsDirect Analyst Insights Edition, Volume 43. Our topic centers on the purported pending death of corporate IT. You may recall that in the early 1990s, IT pundits also glibly predicted that the plug would be pulled on the last mainframe in 1996. It didn't happen.

The mainframe continues to support many significant portions of corporate IT functions. But these sentiments are newly rekindled and expanded these days through the mounting expectations that cloud computing and software-as-a-service (SaaS) will hasten the death of on-premises enterprise IT.

Some of the analyst reports these days indicate that hundreds of billions of dollars in IT spending will soon pass through the doors of corporate IT and into the arms of various cloud-service providers. We might conclude that IT is indeed about to expire.

Not all of us, however, subscribe to this view of the pace of the demise of on-premises systems and their ongoing upkeep, maintenance, and support. To help us better understand the actual future role of IT on the actual floors inside actual companies, we're joined by our guests and analysts this week: Jim Kobielus, senior analyst at Forrester Research; Tony Baer, senior analyst at Ovum; Brad Shimmin, principal analyst at Current Analysis; Ron Schmelzer, senior analyst at ZapThink; Sandy Rogers, former program director at IDC and now an independent IT analyst and consultant; and, as our guest this week, Alex Neihaus, vice president of marketing at Active Endpoints.

Here are some excerpts:
Kobielus: I can predict right now, based on my conversations with Forrester customers, and specifically my career in data warehousing and business intelligence (BI), that this notion of the death of IT is way too premature, along the lines of the famous Mark Twain quote.

... There aren't a substantial number of enterprises that have outsourced their data warehouse or their marts. [But] I think 2011 will see a substantial number of data warehouses deployed into the cloud.

The component of your data-warehousing environment that will be outsourced to the public cloud initially, in many cases, will not be your whole data warehouse. Rather, it will be a staging layer, where you're staging a lot of structured and unstructured data that you're pulling from internal systems, blogs, RSS feeds, and the whole social networking world -- clickstream data and the like.

Baer: I just completed actually a similar study in application lifecycle management (ALM), and I did find that that cloud certainly is transforming the market. It's still at the very early stages, but ... two areas really stuck out. One is anything collaborative in nature, where you need to communicate -- especially as development teams go more global and more distributed -- ... [and] planning, budgeting, asset management, project portfolio management, and all those collaborative functions did very well [in the cloud].

Another side that did very well ... is anything that had very dynamic resource needs, where today you need a lot of resources and tomorrow you don't. A good example of that is testing -- if you are a corporate IT department with periodic releases, you have peaks and valleys in terms of when you need to test and run regression tests.

[But] I got a lot of reluctance out there to do anything regarding coding in the cloud. ... So, in terms of IT being dead, well, at least with regard to cloud and on-premise, that's hardly the case in ALM.

Shimmin: Because I follow the collaboration area, I see [cloud adoption] happening much, much more quickly. ... Those are the functions that IT would love to get rid of. It's like a diseased appendix. I would just love to get rid of having to manage Exchange Servers. Any of us who have touched any of those beasts can attest to that.

So, even though I'm a recovering cynic and I kind of bounce between "the cloud is just all hype" and "yes, the cloud is going to be our savior," for some things like collaboration, where it already had a lot of acceptance, it's going to drive a lot of costs [out].

Schmelzer: It's really interesting. If you look at when most of the major IT shifts happen, it's almost always during periods of economic recession. ... Companies are like, "I hate the systems I have. I'm trying to deal with inefficiency. There must be something wrong we're doing. Let's find some other way to do it." Then, we go ahead and find some new way to do it. Of course, it doesn't really solve all of our problems.

The cost-saving benefit of cloud is clearly there. That's part of the reason there is so much attention on it. People don't want to be investing their own money in their own infrastructure. They want to be leveraging economies of scale, and one of the great things that clouds do is provide that economy of scale. ... On the whole question of IT, the investments, and what's going to happen with corporate enterprise IT, I think we're going to see much bigger changes on the organizational side than the technological side. It’s hard for companies to get rid of stuff they have invested billions of dollars in.

... IT organizations will become a lot smaller. I don't really believe in the 4,000-person IT organization, whose primary job is to keep the machines running. That's very Industrial Revolution, isn't it?

Rogers: I see enterprises all the time that are caught between a rock and a hard place, where they have specialized technologies that were built out in the client-server era. They haven't been able to find any replacements.

Taking a legacy system that may be very specialized and far-reaching, with a lot of integrations and dependencies on other systems, and replacing it is a very difficult change. ... When we're talking about cloud and SaaS, it's going to impact different layers. ... We may want to think about leveraging other systems and infrastructure -- more of the server, more of the data center layer -- but there are going to be a huge number of implications as you move up the stack, especially in the middleware and integration space.

We're still at the very beginning stages of leveraging services and SOA, when you look at the mass market. ... There's a lot of work that needs to be done to just think about turning something off, turning something on, and thinking that you are going to be able to rely on it the same way that you've relied on the systems that have been developed internally. It's not to say it won't be done, but it certainly has a big learning curve that the whole industry will be engaging in.

Neihaus: What we find more interesting is not the question of whether the cloud will subsume IT, or IT will subsume the cloud, but who should be creating applications? ... There is a larger question today of whether end users can use these technologies to completely go around IT and create their own applications themselves.

For us, that seems to be the ultimate disingenuousness, the ultimate inability, for all the reasons that everyone discussed. ... The question really is whether the combination of these technologies can be made to foster a new level of collaboration in enterprises where, frankly, IT isn't going to go away. The most rapid adoption of these technologies, we think, is in improving the way IT responds in new ways, and in more clever ways, with a lot more end-user input, into creating and deploying applications.

For us, the cosmic question is whether we are really at the point where end users can take elements that exist in the cloud and their own data centers and create processes and applications that run their business themselves. And our response is that that's probably not the case, and it's probably not going to be the case anytime soon. If, in fact, it were the case, it would still be the wrong thing to do in enterprises, because I am not sure many CEOs want their business end users being IT.

Kobielus: You need strong governance to keep this massive cloud sandbox from just becoming absolute chaos.

So, it's the IT group, of course, doing what they do best, or what they prefer to be doing, which is architecture, planning, best practices, templates, governance control, oversight support, and the whole nine yards to make sure that, as you deal in new platforms for process and data, such as the cloud, those platforms are wrapped with strong governance.

Baer: You can't provide users the ability to mash-up assets and start creating reports without putting some sort of boundary around it.

This is process-related, which is basically instituting strong governance and having policies that say, "Okay, you can use these types of assets or data under these scenarios, and these roles can access this and share this."

Rogers: The solution interfaces, and the management and administrative capabilities needed to enable governance, are very nascent in the cloud offerings. That's an opportunity for vendors to approach. There's an increasing need to compose and integrate silos within organizations, and that has huge implications for governance activities.

Gardner: I'd like to go around the table. On a scale of 1 to 10, where do you think we're going to see the IT department's role in three years -- with 1 being IT is dead, and 10 being IT is alive, robust, and growing vibrantly?

Kobielus: I'll be really wishy-washy and give it a 5. ... IT will be alive, kicking, robust, and migrating toward more of a pure planning, architecture, and best practices function.

Much of the actual guts of IT within an organization will migrate to hosted environments, and much of the development will be done by end users and power users. I think that's the writing on the wall.

Baer: I am going to give it an 8. ... I don't see IT's role diminishing. There may be a lower headcount, but that can just as much be attributed to a new technology that provides certain capabilities to end users and also using some external services. But, that's independent of whether there's a role for IT, and I think it pretty much still has a role.

Shimmin: I'm giving it a 7 for similar reasons. I think it's going to scale back in size a little bit, but it's not going to diminish in value. IT is not going to go away. I don't think IT is going to be suffering. IT is just a continuously changing thing. ... I think it's going to be very much alive, but the value is going to be more of a managerial role working with partners. Also, the role is changing to be more of a business analyst, if you will, working with their end users too. Those end users are both customers and developers, in some ways, rather than these guys just running around rebooting Exchange servers to keep the green lights blinking.

Schmelzer: I think it's 10. IT is not going to go away. I don't think IT is going to be suffering. ... I guarantee that whatever it looks like, it will be still as important as an IT organization.

Rogers: Probably in the 7 to 8 range. ... In some enterprises, IT is in deep trouble if they do not embrace new technologies and new opportunities and become an adviser to the business. So it comes down to the transition of IT in understanding all the tools and capabilities that they have at their disposal to accomplish what they need to.

Some enterprises will be in rough shape. The biggest changeover is the vendor community. They are in the midst of changing over from being technology purveyors to solution and service purveyors. That's where the big shift is going to happen in three years.

Neihaus: Our self-interest is in a thriving segment of IT, because that's who we serve. So, I rate it a 10 for all of the reasons that the much-more-distinguished-than-I panel has articulated. The role of IT is always changing and is impacted by the technologies around it, but I don't think that can be used as an argument that its importance or its capabilities inside organizations will diminish.

Gardner: Well, I'll go last and I'll of course cheat, because I'm going to break it into two questions. I think their importance will be as high or higher, so 8 to 10, but their budget, the percent of spend that they're able to derive from the total revenues of the organization, will be going down. The pressure will be on from a price and monetary budgeting perspective, so the role of IT will probably be down around 4.

Friday, August 7, 2009

Cloud pushes enterprise architects' role beyond IT into business process optimization czar

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download or view the transcript. Sponsor: The Open Group.

Welcome to a special sponsored podcast discussion coming from The Open Group’s 23rd Enterprise Architecture Practitioners Conference in Toronto. This podcast, part of a series from the July 2009 event, centers on the fast-changing role and expanding impact of enterprise architecture (EA).

The enterprise architect role is in flux, especially as we consider the heightening interest in cloud computing. The down economy has also focused IT spending on faster, better, and cheaper means to acquire and manage IT functions and business processes.

As service components shift in their origins and delivery models, the task of meeting or exceeding business requirements based on these services becomes all the more complicated. Business outcomes and business processes become the focus, yet they may span many aspects of IT, service providers, and the business units, partners, and suppliers involved.

The new services era calls for powerful architects who can define, govern, and adjust all of the necessary ingredients. This new process czar role must creatively support and improve a business process lifecycle over many years.

Yet who or what will step into this gulf between the traditional means of IT and the new cloud ecology of services? The architect's role, still a work in progress at many enterprises, may well become the key office where the buck stops in this new era.

What then should be the role, and therefore what is the new opportunity for enterprise architects? Here to lead the way in understanding the evolving EA issue, we're joined by our panel, Tim Westbrock, managing director of EAdirections; Sandy Kemsley, an independent IT analyst and architect; and John Gotze, international president for the Association of Enterprise Architects. The discussion is moderated by me, BriefingsDirect's Dana Gardner.

Here are some excerpts:
Kemsley: I work a lot with companies to help them implement business process management (BPM) solutions, so I get involved in architecture things, because you're touching all parts of the organization. ... A lot of very tactical solution architects are working on a particular project, but they're not thinking about the bigger picture.

... In many organizations, architecture is not done all that well. It's done on an ad hoc basis. It's done at more of the deep technical level. I can understand why the anti-architecture people get frustrated with that type of architecture, because it's not really EA.

Westbrock: The more strategic enterprise architects depend on the strategic nature of the executives of the organization. If we're going to bring it into layers of abstraction, they don't go more than a layer or two down from strategy. ... One of the good transformations, or evolutionary steps that I have seen in enterprise architects is less of a technology-only focus. Enterprise architect used to be synonymous with some kind of a technology architect, a platform architect, or a network architect, and now you are seeing broader enterprise architects.

Gotze: [The down economy] is helping to change the focus in EA from the more tactical to the more strategic issues. I've seen this downturn in the economy before. It's reinforcing the changes in the discipline, and EA is becoming more and more of a strategic effort in the enterprise.

There are some who call us enterprise architects by profession, and this group at The Open Group conference is primarily people who are practitioners as enterprise architects. But, the role of EA is widening, and, by and large, I would say the chief executive is also an enterprise architect, especially with the downturn.

Westbrock: I still don't think business architecture is within the domain of most IT enterprise architects. ... There are some different drivers that are getting some organizations to think more holistically about how the business operates. ... Modeling means we need architects. We're getting involved in some of these more transformational elements, and because of that, need to look at the business. As that evolves more, you might see more business ownership of enterprise architects. I don't see it a lot right now.

Kemsley: In many of the companies that I work with ... there is this struggle between the IT architects and/or the enterprise architects, who are really IT architects, looking at, how we need to bring things in from the cloud and how we need to make use of services outside.

They're vowing to have all of that come through IT, through the technology side. This puts a huge amount of overhead on it, both from a governance standpoint, but also from an operational standpoint. That's causing a lot of issues. If you don't get EA out of IT, you're going to have those issues as you start going outside the organization [for services].

... It's the ones who are starting to regenerate their architect community internally -- both with business architects and with architects on the IT side -- who can bring these ideas about cloud computing. [It's about] using business process modeling notation (BPMN) that can be done by the business architects and even business people, as opposed to having all of that type of work done in the IT area.

Gotze: The IT department will not disappear, of course. It's naive to say that IT doesn't matter. It's not the point that IT is irrelevant or anything, but it's the emphasis on the strategic benefits for the enterprise.

The whole notion of business-IT alignment ... is yesterday's concern. Now it's more about thinking about the coherent enterprise, that everything should fit together. It's not just alignment. You can have perfectly well aligned systems and processes, without having a coherent enterprise. So, the focus basically must be on coherency in the enterprise.

Westbrock: I don't think that this is a new problem. ... The difference between the '80s and '90s and now is that it's not a chain with seven big links. It's an intricate network with hundreds, if not thousands, of pieces. ... That adds complexity, and an element of governance that we need to mature toward. ... Where is that expertise going to come from? How are we going to track which of the vendors that popped up this week are still going to be around next week?

Kemsley: The ones that can handle this new world of complexity well are ones that can bring some of the older aspects of governance, because you still have to worry about the legacy systems and all of the things that you have internally. You're not going to throw that stuff away tomorrow and bring in some completely new architecture. But, you need to start bringing in these new ideas.

Gotze: There will be a standardization and certification [process for architects]. That will not go away. ... [But it's at] the strategic level of architecture where you must have an emphasis on innovation and diversity to make it work.

... It will be some kind of hybrid model. Look at how government is working with it. They are enterprises after all -- it's not just the private sector. There's much more emphasis in government on getting all the agencies and departments to work together and to understand each other.

Westbrock: We're still decades away from any kind of maturity in the business architecture space, whether that be method, process, or organization. But, we're now at the point where more standardization in the applications or solutions and the data or information layers is going to help us with this particular challenge that's facing enterprise architects.

... I don’t think that the expectations for most enterprise architects are to enable business transformation. In most organizations that I deal with it’s to help with better solutions here and there. It’s to do some technology research and mash it up against business capabilities. It’s not this grand vision that I think most of us have as enterprise architects in the profession of what we can accomplish.

Kemsley: I don’t see the business leadership clamoring to take over architecture anytime soon. ... You're not going to get the CEO coming in and saying on day one, "Oh, I want to take over that architecture stuff."

Gotze: That’s also because we in the profession have managed to create a vocabulary that's nearly impossible to understand for people outside the profession. I think the executive leadership will want to take over the work that the strategic EA is doing. They might not call it EA, but they will be the ultimate architect. The CEO is the ultimate chief architect for a forward-looking and innovative enterprise.

Kemsley: We have to learn to use EA power for good, rather than evil, though. In a lot of cases, it’s just about implementation. It’s sort of downward looking. Enterprise architects tend to look down into the layers rather than, as Tim was saying, feed it back up to the layers above that.

Westbrock: When we talk to folks about the kinds of capabilities, skills, and credentials they're looking for in enterprise architects, deep technical ability is nowhere on the list. It's not because deep technical ability isn't useful. It's because the people performing those deep technical tasks generally lack the breadth of experience that makes enterprise architects good.

They have that deep technical knowledge, because they've done that a long time. They've become experts in that silo. ... [But] the folks that are going to be called to function as enterprise architects are folks that need a much broader set of skills and experience.

Gotze: I agree. The deep technical skills will come way down the list. Communication is very high on the list -- understanding, contracting, and so on, because we have the cloud and similar stuff also very high on the list.

Westbrock: The folks that have been successful are the ones that take the time to do two things. They build artifacts and processes that work down, they build artifacts and processes that work up, and they realize that they're different. You don't build an artifact for a developer and take that to a member of the board. You don't build project design review processes and then say, "Okay, we're going to apply that same process at the portfolio level or at the department level."

You don't build communication strategies that facilitate the broadcast of results to the people that use the standards, and then use the same strategy and modes of communication for attaining strategic understanding of business drivers. It's really been a separation, knowing that there's a whole different set of languages, models, and artifacts that we need here and a whole different set there.

... There is a huge opportunity for enterprise architects relative to not just the cloud. The cloud is just one more of the enablers of service orientation, not SOA, but service orientation.

Somebody needs to own the services portfolio. Maybe we're going to call them the "Chief Services Architect." I don't know. But, what I see in so many organizations is service oriented infrastructure being controlled by one group, doing a good job of putting in place the kinds of foundational elements that we need to be able to do service orientation.

What's missing is somebody with this portfolio, meaning holistic, enterprise-wide view of what services we need, what services we have, where we can go get other services -- basically the services portfolio. Enterprise architects are uniquely positioned to do that justice.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download or view the transcript. Sponsor: The Open Group.

Information management targets e-discovery, compliance, legal risks while delivering long-term BI and governance benefits

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download or view the transcript. Learn more. Sponsor: Hewlett Packard.

Losing control over information sprawl at enterprises can cause long-term inefficiencies. But it's the short-term legal headaches of not being prepared for e-discovery requests that have caught many firms off-guard.

Potentially massive savings can be had from thwarting legal discovery fulfillment problems in advance by governing and managing information. In a sponsored podcast, I recently examined how the well-managed -- versus the haphazard -- information oversight approach reduces legal risks. Yet these same management lifecycle approaches bring long-term payoffs through better analytics, and regulatory compliance, while reducing the cost of data storage and archiving.

Better understand the perils and promise around information management with guests Jonathan Martin, Vice President and General Manager for Information Management at HP, and Gaynelle Jones, Discovery Counsel at HP. The discussion is moderated by me, BriefingsDirect's Dana Gardner.

Here are some excerpts:
Martin: Over the last five to 10 years, we've become increasingly addicted to information, both at home and at work. ... and the size of it is beginning to really impact businesses. Information tends to double every year in developing countries, and about every 18 months in developed organizations. Today, we're creating more information than we have ever created before, and we tend to be using limited tools to manage that information.

We're getting less business value from the information that we create. ... Unfortunately, in the last 18 months or so, the economy has begun to slow down, so that concept of just throwing more and more capacity at the problem is causing challenges for organizations. Today, organizations aren't looking at expanding the amount of information that's stored. They're actually looking at new ways to reduce the amount of information.

Coming into 2010, both in the US and in Europe, there is going to be a new wave of regulation that organizations are going to have to take on board about how they manage their business information.

Jones: Because we have black-letter law that computerized data is discoverable if relevant, and because of the enormous amount of electronic information that we are dealing with, litigants have to be concerned with discovery, in identifying and producing it, and making sure it's admissible.

I'm charged here [at HP] with developing and working with both the IT and the litigation teams around making sure that we are compliant, and that we respond quickly to identify our electronically stored information, and that we get it in a form that can be produced in the litigation.

There are horror stories that have been in the news in recent years around major companies such as Morgan Stanley, Enron, Qualcomm and a host of others being sanctioned for not following and complying properly with the discovery rules. ... In each case, companies failed to properly implement litigation rules, directly pointing to their failure to properly manage their electronic information. So the sanctions and the penalties can be enormous if we don't get a hold of this and comply.

We've seen, over the last few years, organizations move from taking a very reactive approach on these kinds of issues to a more proactive, more robust approach.

Martin: You have to be able to identify and manage the information and think ahead about where you're likely to have to pull it in and produce it, and make a plan for addressing these issues before you have to actually respond. When you're managing a lot of litigation, you have to respond in a quick timeframe, by law. You don't have time to then sit down and draw up your plan.

[Not being prepared] makes the process at least twice as expensive as if you'd planned ahead, strategized, and knew where your information was, so that when the time comes, you could find it, preserve it, and produce it.

Over the last two to three years, organizations have begun to take a more proactive approach. They're gathering the content that's most likely to be used in an audit, or that's most likely to be used in a legal matter, and consolidating that into one location. They're indexing it in the same way and setting a retention schedule for it, so that when they're required to respond to litigation or are required to respond to an audit or a governance request, all the information is in one place. They can search it very quickly.
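The consolidate-index-retain cycle Martin describes boils down to a simple rule per record: keep it for its scheduled period, and never dispose of anything on legal hold. A minimal sketch, assuming invented record types and retention periods (real schedules come from counsel and regulators, not code):

```python
from datetime import date, timedelta

# Hypothetical retention schedule: record type -> years to keep.
RETENTION_YEARS = {"email": 3, "sales_order": 7, "contract": 10}

def is_past_retention(record_type, created, today=None):
    """Return True if a record has outlived its retention period."""
    today = today or date.today()
    years = RETENTION_YEARS.get(record_type, 7)  # default to a common period
    return created + timedelta(days=365 * years) < today

def eligible_for_disposal(record_type, created, on_legal_hold):
    """A record on legal hold must never be purged, regardless of schedule."""
    return not on_legal_hold and is_past_retention(record_type, created)
```

The point of centralizing records this way is exactly what Martin notes: once everything is indexed consistently under one schedule, both disposal and discovery become fast, repeatable queries instead of ad hoc searches.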

At first, the problem statement may look absolutely enormous. ... What we're seeing, though, is that organizations that went through this shift from reactive to proactive two to three years ago have actually generated a new asset within the organization. ... They ultimately end up with a brand-new repository in the organization that can help them make better business decisions, leveraging the majority of the content that the organization creates.

If you logically think through the process, as an organization, you are taking a more proactive stance. You're capturing all of those emails, you're capturing content from file systems and your [Microsoft] SharePoint systems. You're pulling sales orders. You get purchase requests from your database environment. You're consolidating maybe miles and miles of paper into digital form and bringing all of this content into one compliance archive.

This information is in one place. If you're able to add better classification of the content, a better layer of meaning to the content, suddenly you have a tool in place that allows you to analyze the information in the organization, model information in the organization, and use this information to make better business decisions.

The final step, once you've got all that content in one place, is to add a layer of analytic or modeling capability to allow you to manipulate that content and respond quickly to a subpoena or an audit request.

Jones: We're working right now on putting an evidence repository in place, so that we can collect information that's been identified, bring it over, and then search on it. So, you can do early electronic searches, do some of the de-duping that Jonathan has talked about, and get some early case assessment.
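The de-duping mentioned here is, at its core, content hashing: byte-identical documents produce the same digest, so only one copy needs collection and review. A rough sketch of exact-match de-duplication (the function and data are illustrative, not any vendor's API):

```python
import hashlib

def dedupe(documents):
    """Keep one copy of each distinct document body.

    `documents` is an iterable of (doc_id, body_bytes) pairs; returns the
    doc_ids that survive de-duplication, in their original order.
    """
    seen = set()
    kept = []
    for doc_id, body in documents:
        digest = hashlib.sha256(body).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(doc_id)
    return kept

# Three emails, two of them byte-identical copies:
docs = [("a", b"quarterly report"), ("b", b"quarterly report"), ("c", b"memo")]
```

Real e-discovery de-duplication is subtler -- near-duplicates, email threading, metadata differences -- but exact-hash culling alone often removes a large share of the review set before it ever reaches outside counsel.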

Our counsel can find out quickly what kind of emails we've got and get a sense of what the case is worth long before we have to collect it and turn it over to our outside vendors for processing. That's where we're moving at this point.

We think it's going to have tremendous benefit for us in terms of getting on top of our litigation early on, reducing the cost of the data that we end up sending outside for processing, and of course, saving costs across the board, because we can do so much of it through our own management systems, when they're in place. We're really anxious and excited to see how this is going to help us in our overall litigation strategy and in our costs.

Martin: Increasingly, we're seeing more and more content move into the cloud. This may be coming from a top-down initiative, or from a cost or capability perspective. Organizations are saying, "Maybe it's no longer cost effective for us to run an email environment internally. What we'd like to do is put that into the cloud, be able to manage email in the cloud, or have our email managed in the cloud."

Or, it may come from the grassroots, bottom up, where employees, when they come to work, are beginning to act more and more like consumers. They bring consumer-type technology with them, something like Facebook or social networking sites. They're coming to the organization to set up a project team and to set up a Facebook community, and they collaborate using that.

So we're seeing either top-down or grassroots-up content moving into the cloud. From a regulatory perspective, a governance perspective, or a legal perspective, this has new implications for organizations. A lot of organizations are struggling a little bit with how to respond to that.

... How do you discover this content? How are you required to capture it -- and are the legal obligations the same as for content that sits inside your own IT data centers? How do you address applications, maybe mashups, where content may be spread across 20 to 30 different data centers? It's a whole new vista of issues that are beginning to appear as content moves into the cloud.

Jones: The courts haven't yet addressed the cloud era, but it's going to definitely be one for which we're going to have to have a plan in place. The sooner you start being aware of it, asking the questions, and developing a strategy, the better. Once again, you're not being reactive and, hopefully, you're saving money in the process.

Martin: Probably one of the best ways to learn is from the experience of others. We've invested quite heavily over the last year in building a community for the users of our products, as well as potential users of our products, to share best practices and ideas around this concept of information governance that we've been talking about today, as well as broader information management issues.

There is a website, www.hp.com/go/imhub. If you go there, you'll see lots of information from other users about how they're using the technology.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download or view the transcript. Learn more. Sponsor: Hewlett Packard.

Thursday, August 6, 2009

Letterman's job remains safe as HP goes to top 10 list hijinks on server virtualization management

Is server virtualization sprawl a laughing matter? Do the pains of IT platform architects and administrators matter so little that the world's largest technology company by revenue can poke fun at their daily challenges?

Apparently so. Taking a page from late-night comedians -- and the expected viral repurposing effects of blogs like the Huffington Post -- HP's virtualization marketers have swapped speeds-and-feeds brochures for self-deprecating cheap shots at corporate polyester ties.

It's all in the name of educating the IT community on virtualization best practices, and for the most part it works. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.] Not as well as my podcasts, mind you, but it works.

Posted on YouTube, HP's "HPEN Top Ten" clip spoofs the satirists. Usually Top Ten lists apply to politics or entertainment -- but, honestly, most of the IT departments I've visited have plenty of both. So it's actually quite appropriate after all.



The next thing you know, chubby, white, middle-aged bald guys will be stereotyped as IT industry analysts.

Starring Shay Mowlem, Strategic Marketing Lead, HP Software & Solutions, (as the sidekick musician), the video also features Mark Leake, Director, Portfolio & Executive Content, HP Software Products.

And what makes our dynamic duo fun and interesting to watch? None other than the "Top Ten Reasons that Customers Need HP Virtualization Management Solutions" ...

10) They have no idea how many virtual machines (VMs) they have

9) They have more management tools than staff

8) It takes 2 minutes to provision a VM and 2 weeks to provision its storage

7) They're drowning in new platforms and technologies

6) Virtual machines are up; user satisfaction is down

5) They are experiencing backup traffic jams in their network

4) Their VMs have gone prime time

3) They can't tell which business service their VM is supporting

2) Their auditors are starting to ask questions

1) Because VMware, Citrix, Microsoft and all the other partners say so

The video is better than the read, I have to say, but only virtually so. Check it out. And let me know, honestly, wouldn't you prefer the speeds-and-feeds brochures again?

Monday, August 3, 2009

WebLayers sets sights on cloud governance with updated WebLayers Center 5.0

For all the buzz about cloud computing, there remains a key challenge for companies: regulatory compliance and governance issues.

Left unaddressed, these issues could derail the long-term growth of cloud adoption. That's why more companies are coming to market with ever-evolving solutions that aim to take the pain out of controlling the cloud.

Now, Cambridge, Mass.-based WebLayers just updated its flagship automated governance software, WebLayers Center 5.0, with new features that aim to mitigate the risk and cut the costs of developing in the cloud.

From extended support for the Eclipse-based integrated development environment (IDE) to a deeper policy library, and from "what if" scenarios to tighter integration with IBM and HP governance software, WebLayers Center 5.0 offers a solution worth exploring for policy enforcement across service-oriented architecture (SOA) and related IT architectures.

Consistent, Measurable, Auditable

John Favazza, vice president of engineering at WebLayers, said governance is transforming from an option to a necessity. It's a necessity because governance can proactively identify and address policy violations in the software development life cycle (SDLC) before they make a negative impact on business operations.

"This necessity is due to the realization that the cost of fixing software code after it’s been deployed can be 50 to 200 times higher than if the issues were addressed as the code was being written by the software developer," Favazza says. "WebLayers Center 5.0 mitigates these risks and reduces unnecessary development costs, resulting in greater cost efficiencies."

WebLayers' approach centralizes policy management and distributes policy enforcement to support automated governance at every point in the infrastructure and across all platforms throughout the SDLC. As WebLayers sees it, distributed governance is key to breeding a distributed-centric environment, and intelligent automation via rules-based logic is key to reducing errors while meeting business goals. I might call it federated governance, but the point is the same and will be critical to master going forward.
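As a thought experiment, rules-based policy enforcement of this kind reduces to a loop over artifacts and rules. Everything below (rule names, artifact fields) is invented for illustration and is not WebLayers' actual implementation:

```python
# Each rule is a (policy_name, predicate) pair; a predicate returns True
# when the artifact violates the policy. Rules are hypothetical examples.
RULES = [
    ("no-plaintext-endpoints", lambda a: a.get("endpoint", "").startswith("http://")),
    ("wsdl-must-be-versioned", lambda a: a.get("type") == "wsdl" and "version" not in a),
]

def check(artifacts):
    """Return (artifact_id, policy_name) pairs for every violation found."""
    violations = []
    for artifact in artifacts:
        for policy, violates in RULES:
            if violates(artifact):
                violations.append((artifact["id"], policy))
    return violations

# Two sample artifacts: one compliant, one violating both rules.
artifacts = [
    {"id": "svc-1", "type": "wsdl", "endpoint": "https://x", "version": "1.2"},
    {"id": "svc-2", "type": "wsdl", "endpoint": "http://y"},
]
```

The centralize-management, distribute-enforcement split then amounts to authoring the rule list in one place while running the `check` loop at every governance point -- build servers, registries, developer desktops.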

WebLayers 5.0 in Action

WebLayers 5.0's intelligent automated governors work to pinpoint all of the artifacts throughout the infrastructure that are related to a low security score and let the software developer and the SOA architect know specifically where code violates development policies or business rules.

WebLayers Center then picks up where the automated governors leave off, guiding the developer on the path to address issues no matter where they occur or in what phase of the SDLC they appear. WebLayers Center includes an auto-correct feature to fix violations so developers don't have to review entire applications. It's easy to see how this capability would reduce overall software development errors, lessen learning curves, and even accelerate the return on SOA investments.

On the policy distribution front, WebLayers' new node director captures a snapshot of each governance point within the enterprise so the manager can distribute recommended policies to any governed system -- including each developer's desktop -- for knowledge sharing and time savings across the SDLC.

While the debate on the best way to achieve cloud governance continues, the progress toward automated identification and correction and stronger distribution capabilities is a step in the right direction. With a growing list of competitors seeking to solve a real cloud computing challenge, we should see plenty of innovation in this space in the years ahead.

BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached here and here.

Wednesday, July 29, 2009

Is your SOA hammer looking for a nail?

This guest post comes courtesy of ZapThink. Jason Bloomberg is managing partner at ZapThink. You can reach him here.

By Jason Bloomberg

It sounds so obvious when you get right down to it: you need to know what problem you're solving before you can solve it. Common sense tells you to start with the problem before you can find the solution to the problem. If you start with a solution without knowing what the problem is, then there's no telling if the solution you have will be the right one for the problems you're facing.

Obvious, yes, but it never ceases to amaze us at ZapThink that when it comes to service-oriented architecture (SOA) projects, time and again we run into technology teams who don't have a grasp of what business problems they're looking to solve. Now, it might be tempting to dismiss this disconnect as "techies being techies" or some other superficial explanation, but the problem is so pervasive and so common that there must be more to it. As a result, we took a closer look at why so many SOA projects have unclear business drivers. What we found is that the underlying issue has little to do with SOA, and everything to do with the way large businesses are run.

The wrong question

This story begins with the SOA architect. Architects frequently call in ZapThink when they're stuck on some part of their SOA initiative; we're SOA fire fighters, so to speak. Frequently, the first question we get when we sit down with the architecture team is "how do I sell SOA to the business?" Well, if that's the first question they ask, they're already on the wrong foot. That's the wrong question! The correct question is "how do we best solve the business problems we're facing?"

SOA is not a panacea, after all; it only helps organizations solve certain problems typically having to do with business change in the face of IT heterogeneity. It's best to solve other problems with other approaches as appropriate, a classic example of the right tool for the job.

For this reason, ZapThink considers the SOA business case as an essential SOA artifact. Architects must have a clear picture of the business motivations for SOA, not only at the beginning of the initiative, but also as the architecture rolls out. After all, business issues evolve over time, partly because of the changing nature of business, but also because properly iterative SOA efforts solve problems during the course of the overall initiative.

Even when architects ask the right question, however, there is still frequently a disconnect between the business problems and the SOA approach. The challenge here is that the architects -- or more broadly, the entire SOA team -- are only one part of the bigger picture, especially in large organizations. In the enterprise context, how the business asks for IT capabilities in the broad sense is often at the root of the issue.

SOA Business Driver Pitfalls

Here are some real-world examples of how we've seen the issue of unclear business drivers for SOA play out in various enterprises. We've generalized a few of the details to protect the innocent (and the guilty!).
The SOA Mandate at the US Department of Defense (DoD) -- In aggregate, the DoD is perhaps the largest implementer of SOA in the world, in large part because they have an organization-wide SOA mandate. It's true they have high-level drivers for this mandate, including increased business agility and a more cost-effective approach to siloed IT resources. But those general drivers don't help much when a defense contractor must create a specific implementation.

This particular DoD project involved a contractor who delivered a perfectly functional SOA implementation to their client. The client, however, found it to be entirely unsatisfactory, and the entire multi-million dollar effort was canceled, the money down the tubes. What happened? The problem was a disconnect between high-level business drivers like business agility and the specific business problems the agency in question wanted to solve. The fact that the client wanted to "do SOA," and so the contractor "delivered SOA" to the client, only made the situation worse.

The most important take-away from this fiasco is that SOA wasn't the problem. From all the information we gathered, the contractor properly designed and implemented the architecture, that is, followed the technical best practices that constitute the practice of SOA. In essence they built a perfectly functioning tool for the wrong job. Fundamentally, the client should have specified the problems they were looking to solve, instead of specifying SOA as their requirement.

SOA for the Enterprise Morass -- ZapThink recently received a SOA requirements document from a financial services client who asked us to review it and provide our recommendations. This document contained several pages of technical requirements, but when we got to the business requirements section, it was simply marked "to be determined." Naturally, we pointed out that among the risks this project faced was the lack of a clear business case.

Our recommendation was to flesh out the business case by discussing business drivers for the SOA initiative with business stakeholders. In response, the client told us that it wasn't feasible to speak with the business side. The entire SOA initiative focused on the IT organization, and the business wasn't directly involved.

We pressed them on this point, explaining how critical having clear business drivers is for the success of the SOA initiative. Why is communicating with the business such an issue for them anyway? Were they afraid of such interactions? Did they not know who the stakeholders were? Or perhaps there was no business motivation for the project at all?

The problem, as it turned out, was more subtle. They described the challenge they faced as the "enterprise morass." As happens in so many very large organizations, there are no clear communication patterns connecting business and IT. Yes, there are business stakeholders, and yes, business requirements drive IT projects in a broad sense, but there are so many players on both sides of the business/IT equation that associating individual business sponsors with specific IT projects is a complex, politically charged challenge. As a result, the SOA team can only look to the management hierarchy within IT for direction, under the assumption that at some executive level, IT speaks to the business in order to get specific drivers for various IT initiatives, including the SOA effort. Speaking directly to business sponsors, however, is off limits.

The SOA Capability Conundrum -- The chief SOA architect at this European firm was in one of our recent Licensed ZapThink Architect Bootcamps, and when we got to the Business Case exercise, he explained that in their organization, the motivation for SOA was to build out the SOA capability. His argument was as follows: if we implement SOA, deploying a range of Services that can meet a variety of business needs in the context of a flexible infrastructure that supports loose coupling and reusability, then we'll be well-positioned for any requirements the business might throw at us in the future.

The reasoning behind this argument makes sense, at least on a superficial level. Let's build several Business Services that represent a broad range of high-value IT capabilities in such a way that when the business comes to us with requirements, we're bound to be able to meet those requirements with the Service capabilities we've deployed ahead of time. The business is bound to be ecstatic that we planned ahead like this, anticipating their needs before they gave us specific requirements!

While this organization might very well get lucky and find that they built cost-effective capabilities that the business will need, taking this "fire-ready-aim" approach to SOA dramatically increases the risks of the initiative. After all, it's difficult enough to build reusable Services when you have clear initial requirements for those Services. Building such Services in the absence of any specific requirements is just asking for trouble.
The ZapThink Take

If you take a close look at the three scenarios above, you'll notice that the stories don't really have to be about SOA at all. You could take SOA out of the equation and replace it with any other IT effort aimed at tackling enterprise-level issues and you might still have the same pitfalls. Master data management, customer relationship management, or business process management are all examples of cross-organizational initiatives that might succumb to the same sorts of disconnects between business and IT.

At the root of all of these issues is the dreaded phrase "business/IT alignment." It seems that the larger the organization, the more difficult it is to align IT capabilities with business drivers. Sometimes the problem is that the business asks for a particular solution without understanding the problem (like the DoD's SOA mandate), or perhaps a combination of politics and communication issues interfere with business/IT alignment (the enterprise morass), or in other cases IT jumps the gun and delivers what it thinks the business will want (the capability conundrum). In none of these instances is the problem specific to SOA.

SOA, however, can potentially be part of the solution. As ZapThink has written about before, the larger trend of which SOA is a part is a movement toward formalizing the relationship between business and IT with flexible abstractions, including Business Services, Cloud capabilities, and more. If you confuse this broader trend with some combination of technologies, however, you're falling for the straw man that gave rise to the "SOA is dead" meme. On the other hand, if you take an architectural approach to aligning business drivers with IT capabilities that moves toward this new world of formalized abstraction, then you are taking your first steps on the long road to true, enterprise-level business/IT alignment.

This guest post comes courtesy of ZapThink. Jason Bloomberg is managing partner at ZapThink. You can reach him here.


SPECIAL PARTNER OFFER

SOA and EA Training, Certification,
and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.