Friday, August 14, 2009

HP partners with iTKO on LISA services testing suite for SOA, BPM

Take the BriefingsDirect middleware/ESB survey now.

When HP inks a deal to resell your testing software, you know you must be doing something right.

HP is reselling iTKO’s LISA Virtualize product, a suite of test, validation and virtualization solutions optimized for distributed, multi-tier applications that leverage SOA, BPM, cloud computing, integration suites and ESBs. HP’s aim is to help customers reduce testing costs and speed the time to market for modern applications. [Disclosure: HP and iTKO are sponsors of BriefingsDirect podcasts.]

How does LISA help HP’s Quality and Performance Management solutions suite? By eliminating common system infrastructure dependencies during application testing. The idea is to trim both the cost and risk of modern Quality Assurance – a major issue for today’s enterprise.

Here’s how it works: LISA Virtualize does away with system dependency constraints by simulating the dynamic behavior and performance conditions of downstream system dependencies. In other words, you can see how systems react and respond as if they were running live – but they aren’t running live. That saves time and money.
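The pattern is easy to see in miniature. The sketch below is a generic illustration of service virtualization, not iTKO's actual LISA API; the VirtualService class, its canned responses, and the simulated latency are all invented for the example.

```python
import time

class VirtualService:
    """Stand in for a live downstream dependency by replaying canned
    responses with simulated latency, so the system under test behaves
    as if the real backend were running."""

    def __init__(self, responses, latency_s=0.0):
        self.responses = responses   # request key -> canned response
        self.latency_s = latency_s   # mimic the real system's timing
        self.calls = []              # record traffic for later inspection

    def invoke(self, request):
        self.calls.append(request)
        time.sleep(self.latency_s)
        return self.responses.get(request, {"status": "error", "code": 404})

# Test against an "inventory service" without the real backend being up:
inventory = VirtualService({"SKU-100": {"status": "ok", "in_stock": 7}},
                           latency_s=0.05)
print(inventory.invoke("SKU-100"))   # responds as if live
print(inventory.calls)               # captured traffic for verification
```

Because the fake's responses and latency are under your control, performance conditions can be dialed up or down without touching, or even standing up, the real downstream systems.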

Jonathan Rende, vice president and general manager of the Business Technology Optimization Applications, Software and Solutions group at HP, said: "Customers can reduce costs and speed up their ability to respond to business needs by modernizing their applications.”

By bringing together HP Quality Center and HP Performance Center solutions with iTKO's LISA Virtualize software, Rende said customers can remove delay-causing system dependencies during testing processes. The result: saving time and lowering the cost of delivering complex applications.

To be sure, putting quality top of mind earlier in the development process is a key to reducing defects and speeding time to market. And Shridhar Mittal, iTKO's CEO, claims the company’s virtualization capabilities lower test lab costs by up to 65 percent and shorten software release cycles by up to 38 percent.

If those claims hold true, it’s easy to see why HP is partnering with this young company. The running theme with this announcement is saving time and money – both critical selling points in a down economy.

Take the BriefingsDirect middleware/ESB survey now.

BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached here and here.

Thursday, August 13, 2009

Got middleware? Got ESBs? Take this survey, please.

Take the brief online survey.

I keep hearing about how powerful social media is for gathering insights from the IT communities and users. Yet I rarely see actual market research conducted via the social media milieu.

So now's the time to fully test the process. I'm hoping that you users and specifiers of enterprise software middleware, SOA infrastructure, integration middleware, and enterprise service buses (ESBs) will take 5 minutes and fill out my BriefingsDirect survey. We'll share the results via this blog in a few weeks.

We're seeking to uncover the latest trends in actual usage and perceptions around these technologies -- both open source and commercial.

How middleware products -- like ESBs -- are used is not supposed to change rapidly. Enterprises typically choose and deploy integration software infrastructure slowly and deliberately, and they don't often change course without good reason.

But the last few years have proven an exception. Middleware products and brands have shifted more rapidly than ever before. Vendors have consolidated, product lines have merged. Users have had to grapple with new and dynamic requirements.

Open source offerings have swiftly matured, and in many cases advanced capabilities beyond the commercial space. Interest in SOA is now shared with anticipation of cloud computing approaches and needs.

So how do enterprise IT leaders and planners view the middleware and SOA landscape after a period of adjustment -- including the roughest global recession in more than 60 years?

This brief survey, distributed by BriefingsDirect for Interarbor Solutions, is designed to gauge the latest perceptions and patterns of use and updated requirements for middleware products and capabilities. Please take a few moments and share your preferences on enterprise middleware software. Thank you.

Take the brief online survey.

Wednesday, August 12, 2009

Cloud Security Panel: Is cloud computing more or less secure than on-premises IT?

Listen to the podcast. Find it on iTunes/iPod and Download or view the transcript. Sponsor: The Open Group.

Welcome to a special sponsored podcast discussion coming from The Open Group’s 23rd Enterprise Architecture Practitioners Conference in Toronto. This podcast, part of a series from the July 2009 event, centers on cloud computing security.

Much of the cloud security debate revolves around perceptions. ... For some, cloud security is a matter of seeing the risk glass as half full or half empty. Yet security in general takes on a different emphasis as services are mixed and matched from a variety of internal and external sources.

So will applying conventional security approaches and best practices be enough for low-risk, high-reward cloud computing adoption? Most importantly, how do companies know when they are prepared to begin adopting cloud practices without undue security risks?

Here to help better understand the perils and promises of adopting cloud approaches securely, we welcome our panel: Glenn Brunette, distinguished engineer and chief security architect at Sun Microsystems and founding member of the Cloud Security Alliance (CSA); Doug Howard, chief strategy officer of Perimeter eSecurity and president of USA.NET; Chris Hoff, technical adviser at CSA and director of Cloud and Virtualization Solutions at Cisco Systems; Dr. Richard Reiner, CEO of Enomaly; and Tim Grance, program manager for cyber and network security at the National Institute of Standards and Technology (NIST).

The discussion is moderated by me, BriefingsDirect's Dana Gardner.

Here are some excerpts:
Reiner: There are security concerns with cloud computing. Relative to the security concerns in the ideal enterprise mode of operation, there is some good systematic risk analysis to model the threats that might impinge upon a particular application and the data it processes, and then to assess the suitability of different environments for potential deployment of that stuff.

There are a lot more question marks around today's generation of public-cloud services, generally speaking, than there are around the internal computing platforms that enterprises can use. So it's easier to answer those questions. It's not to say the answers are necessarily better or different, but the questions are easier to answer with respect to the internal systems, just because there are more decades of operating experience, there is more established audit practice, and there is a pretty good sense of what's going to be acceptable in one regulatory framework or another.

Howard: The first thing that you need to know is, "Am I going to be able to deliver a service the same way I deliver it today at minimum? Is the user experience going to be, at minimum, the same that I am delivering today?"

Because if I can't deliver, and it's a degradation from where my starting point is, then that will be a negative experience for the customers. Then, the next question is, obviously, is it secure, and is there business continuity? Are all those things, and where that actual application resides, completely transparent to the end user?

Brunette: Is cloud computing more or less secure than client-server? I don't think so. I don't think it is either more or less secured. Ultimately, it comes down to the applications you want to run and the severity or criticality of these applications, whether you want to expose them in a shared virtualized infrastructure.

... When you start looking at the cloud usage patterns and the different models, you're going to see that governance does not end at your organization's border. You're going to need to understand the policies, the processes, and the governance model of the cloud providers.

It's going to be important that we have a degree of transparency and compliance out in the cloud in a way that can be easily consumed and integrated back into an organization.

Hoff: One of the interesting notions of how cloud computing alters the business case and use models really comes down to a lot of pressure combined with the economics today. Somebody, a CIO or a CEO, goes home and is able to fire up their Web browser, connect to a service we all know and love, get their email, enjoy a robust Internet experience that is pretty much seamless, and just works.

Then, they show up on Monday morning and they get the traditional, "That particular component is down. That doesn't work. This is intrusive. I've got 47,000 security controls that I don't understand. You keep asking for more money."

Grance: Cloud has a vast potential to cause a disintermediation, just like in power and other kinds of industries. I think it may run eventually through some of these consulting companies, because you won't be able to get as rich off of consulting for that.

In the meantime, I think you're going to have ... people simply just roll their own [security]. Here's my magic set of controls. It may not be all of them. It may just be a few of them. I think people will shop around for those answers, but I think the marketplace will punish them.

Howard: ... If you look at a lot of the cloud providers, we tend, in many cases, to fight some standards, because, in reality, we want to have competitive differentiators in the marketplace. Sometimes, standards and interoperability are key ones, sometimes standards create a lack of our ability to differentiate ourselves in the marketplace.

However, on the security side, I think that's one of the key areas where you definitely can get the cloud providers behind standards, because, if we have 10,000 clients, the last thing we want is to have to keep enough people sitting around to handle the individual audit requests coming in from all of those customers.

... So, to put standards behind those types of efforts is an absolute requirement in the industry to make it scalable, not just beyond the infrastructure, performance, availability, and all those things, but actually from a cost perspective of people supporting and delivering these services in the marketplace.

Brunette: ... One of the other things I'd point out is that, it's not just about the cloud providers and the cloud consumers, but there are also other opportunities for other vendors to get into the fray here.

One of the things that I've been a strong proponent of is, for example, OS vendors producing better, more secured, hardened versions of their operating systems that can be deployed and that are measurable against some standard, whether a benchmark from the Center for Internet Security, or FDCC in the commercial or in the federal space.

You may also have the opportunity of third parties to develop security-hardened stacks. So, you'd be able to have a LAMP stack, a Drupal stack, an Oracle stack, or whatever you might want to deploy, which has been really vetted by the vendor for supportability, security, performance, and all of these things. Then, everyone benefits, because you don't all have to go out there and develop your own.

Howard: ... At the end of the day, if you develop and you deliver a service ... and the user experience is positive, they're going to stay with the service.

On the flip side, if somebody tries to go the cheap way and ultimately delivers a service that doesn't have that high availability, has problems, is not secure, and they have breaches and outages, eventually that company is going to go out of business. Therefore, it's your task right now to figure out who the real players are, and whether it matters if it's an Oracle database, SQL database, or MySQL database underneath, as long as it's meeting the performance requirements that you have.

Unfortunately, right now, because everything is relatively new, you will have to ask all the questions and be comfortable that those answers are going to deliver the quality of service that you want. Over time, on the flip side, it will play out and the real players will be the real players at the end of the day.

Hoff: ... It [also] depends on what you pay for it, and I think that's a very interesting demarcation point. There is a service provider today who doesn’t charge me anything for getting things like mail and uploading my documents, and they have a favorite tag line, “Hey, it’s always in beta.” So the changes that you might get could be that the service is no longer available. Even with enterprise versions of them, what you expect could also change.

... In the construct of SaaS, can that provider do a better job than you can, Mr. Enterprise, in running that particular application?

This comes down to an issue of scale. More specifically, what I mean by that is, if you take a typical large enterprise with thousands of applications, which they have to defend, safeguard, and govern, and you compare them to a provider that manages what, in essence, equates to one application, comparing apples to elephants is a pretty unreasonable thing, but it’s done daily.

What’s funny about that is that, if you take a one-to-one comparison with an enterprise that is just running that one application with the supporting infrastructure, my argument would be that you may be able to get just as good, perhaps even better, performance than the SaaS provider. It’s when you get to the point where you define scale, whether on the consumer side or in the number of apps you provide, that the question gets interesting.

... What happens, then, when I end up having 50 or 60 cloud providers, each running a specific instance of these applications? Now, I've squeezed the balloon. Instead of managing my infrastructure, I'm managing a bunch of other guys who I hope are doing a good job managing theirs. We are transferring responsibility, but not accountability, and they are two very different things.

Brunette: ... In almost every case, the cloud providers can hide all of that complexity, but it gives them a lot more flexibility in terms of which technology is right for their underlying application. But, I do believe that over time they will have a very strong value proposition. It will be more on the services that they expose and provide than the underlying technology.

Hoff: ... The reality is, portability and interoperability are really going to require that we first define the workload, express the security requirements attached to that workload, and then be able to have providers attest to them, in the long term, in a marketplace.

I think we called it "the Intercloud," a way where you go through service brokers or do direct interchange with these types of standards and protocols to say, “Look, I need this stuff. Can you supply these resources that meet these requirements? No? Well, then I go somewhere else.”

Some of that is autonomic, some of it’s automated, and some of it will be manual. But, that's all predicated, in my opinion, upon building standards that let us exchange that information between parties.

Reiner: I don't think anyone would disagree that learning how to apply audit standards to the cloud environment is something that takes time and will happen over time. We probably are not in a situation where we need yet another audit standard. What we need is a community of audit practices to evolve and to mature to the point where there is a good consensus of opinion about what constitutes an appropriate control in a cloud environment.

Brunette: As Chris said, it comes down to open standards. It's important that you are able to get your data out of a cloud provider. It's just as important that you have a standard representation of that data, something that can be read by your own applications, if you want to bring it back in house, and something that you can use with another provider, if you decide to go that route.
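That portability argument reduces to a small round-trip test: data exported in an open, self-describing representation can be read back by your own applications or handed to another provider without translation. A minimal Python sketch, using plain JSON as a stand-in for whatever standard format applies (the records here are invented sample data):

```python
import json

# Records as they might be exported from one cloud provider.
records = [{"customer": "Acme", "balance": 1250.0},
           {"customer": "Globex", "balance": 88.5}]

exported = json.dumps(records)     # open, vendor-neutral wire format
reimported = json.loads(exported)  # readable anywhere JSON is understood

print(reimported == records)       # True: the data round-trips intact
```

A proprietary binary export, by contrast, fails exactly this test: the data leaves the provider, but nothing else can read it.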

Grance: I'm going to go out on a limb and say that NIST is in favor of open, voluntary consensus standards, but data representation and APIs are early places where people can start. I do want to say something important about open standards. I want to be cautious about how much we specify too early, because there is a real ability to over-specify early and do things really badly.

So it's finding that magic spot, but I think it begins with data representation and APIs. Some of these areas will start with best practices and then evolve into standards, but again the marketplace will ultimately speak to this. If we convey our requirements in clear and pristine fashion and put the procurement forces behind that, you will begin to get the standards that you need.
Listen to the podcast. Find it on iTunes/iPod and Download or view the transcript. Sponsor: The Open Group.

Cloud computing proves a natural for offloading time-consuming test and development processes

Listen to the podcast. Find it on iTunes/iPod and Download or view the transcript. Learn more. Sponsor: Electric Cloud.

Our latest podcast discussion centers on using cloud computing technologies and models to improve the test and development stages of applications' creation and refinement. One area of cloud computing that has really taken off and generated a lot of interest is the development, test, and performance proofing of applications -- all from an elastic cloud services fabric.

The build and test phases of development have traditionally proven complex, expensive, and inefficient. Periodic bursts of demand on runtime and build resources are the norm. By using a cloud approach, these demand bursts can be accommodated better through dynamic resource pooling and provisioning.

We've seen this done internally for development projects and now we're starting to see it applied increasingly to external cloud resource providers like Amazon Web Services. And Microsoft is getting into the act too.

To help explain the benefits of cloud models for development services and how to begin experimenting and leveraging external and internal clouds -- perhaps in combination -- for test resource demand and efficiency, I recently interviewed Martin Van Ryswyk, vice president of engineering at Electric Cloud, and Mike Maciag, CEO at Electric Cloud.

Here are some excerpts:
Van Ryswyk: Folks have always wanted their builds to be fast and organized and to be done with as little hardware as possible. We've always struggled to get enough resources applied to the build process.

One of the big changes is that folks like Amazon have come along and really made this accessible to a much wider set of build teams. The dev and test problem really lends itself to what's been provided by these new cloud players.

Maciag: The traditional approaches of the overnight build, or even to the point of what people refer to as continuous integration, have fallen short, because they find problems too late. The best world is where engineers or developers find problems before they even check in their code and go to a preflight model, where they can run builds and tests on production-class systems before checking code into the source code control system.
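The preflight model Maciag describes can be sketched generically: run the build and the tests as a gate, and only allow check-in when every step passes. This is an illustrative sketch, not ElectricCommander's actual interface; the preflight helper and the sample steps are invented.

```python
import subprocess
import sys

def preflight(steps):
    """Run named build-and-test steps in order, stopping at the first
    failure, so broken changes never reach source control."""
    for name, cmd in steps:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"preflight failed at '{name}'")
            return False
        print(f"preflight step '{name}' passed")
    return True

# Stand-in steps; a real pipeline would invoke the compiler and test runner.
ok = preflight([
    ("build", [sys.executable, "-c", "print('compiled')"]),
    ("unit tests", [sys.executable, "-c", "assert 1 + 1 == 2"]),
])
print("safe to check in" if ok else "fix before committing")
```

Wired in as a pre-commit gate, a script like this is what turns "find problems too late" into "find problems before check-in."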

Van Ryswyk: At a certain point, you just want it to happen like a factory. You want to be able to have builds run automatically. That's what ElectricCommander does. It orchestrates that whole process, tying in all the different tools, the software configuration management (SCM) tools, defect tracking tools, reporting tools, and artifact management -- all of that -- to make it happen automatically.

And that's really where the cloud part comes in. ... Then, you're bringing it all back together for a cohesive end report, which says, "Yes, the build worked." ElectricCommander was already allowing customers to manage the heterogeneity on physical machines and virtual machines (VMs). With some integrations we've added you can now extend that into the cloud.

There will be times when you need a physical machine, there will be times when your virtual environment is right, and there will be times when the cloud environment is right. ... We may not want to put our source code out in the cloud, but we can use 500 machines for a few hours to do some load, performance, or user interface testing. That's a perfect model for us.

... When you have these short duration storms of activity that sometimes require hundreds and hundreds of computers to do the kind of testing you want to do, you can rent it, and just use what you need. Then, as soon as you're done with your test storm, it goes away and you're back to the baseline of what you use on average.
Listen to the podcast. Find it on iTunes/iPod and Download or view the transcript. Learn more. Sponsor: Electric Cloud.

VMware fleshes out its cloud computing support model with SpringSource grab

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum. His profile is here. You can reach him here.

VMware’s proposed $362 million acquisition of SpringSource is all about getting serious in competing with Salesforce and Google App Engine as the Platform-as-a-Service (PaaS) cloud with the technology that everybody already uses.

This acquisition was a means to an end, pairing two companies that could not be less alike. VMware is a household name, sells software through traditional commercial licenses, and markets to IT operations. SpringSource is a grassroots, open-source, developer-oriented firm whose business is a cottage industry by comparison. The cloud brought together two companies that each faced complementary limitations on their growth. VMware needed to grow out beyond its hardware virtualization niche if it was to regain its groove, while SpringSource needed to grow up and find deeper pockets to become anything more than a popular niche player.

The fact is that providing a virtualization engine, even if you pad it with management utilities that act like an operating system, is still a raw cloud with little pull unless you go higher up in the stack. Raw clouds appeal only to vendors that resell capacity or to large enterprises with the deep benches of infrastructure expertise to run their own virtual environments. For the rest of us, we need a player that provides a deployment environment, handles the plumbing, and is married to a development environment. That is what Salesforce’s and Google’s App Engine are all about. VMware’s gambit is in a way very similar to Microsoft’s Software + Services strategy: use the software and platforms that you are already used to, rather than some new environment, in a cloud setting. There’s nothing more familiar to large IT environments than VMware’s ESX virtualization engine, and in the Java community, there’s nothing more familiar than the Spring framework, which – according to the company – accounts for roughly half of all Java installations.

With roughly $60 million in stock options for SpringSource’s 150-person staff, VMware is intent on keeping the people, as it knows nothing about the Java business itself. Normally, we’d question a deal like this because the companies are so dissimilar. But the fact that they are complementary pieces of a PaaS offering gives the combination stickiness.

For instance, VMware’s vSphere cloud management environment (in a fit of bravado, VMware calls it a cloud OS) can understand the resource consumption of VM containers; with SpringSource, it gets to peer inside the black box and understand why those containers are hogging resources. That provides more flexibility and smarts for optimizing virtualization strategies, and can help cloud customers answer the question: Do we need to spin out more VMs, perform some load balancing, or re-apportion all those Spring TC (Tomcat) servlet containers?
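To make that question concrete, here is a toy decision sketch. The metric names and thresholds are invented for illustration; the point is only that container-level visibility lets you distinguish a genuinely saturated application from a badly placed one, which VM-level metrics alone cannot.

```python
def scaling_action(vm_cpu, busy_threads, total_threads):
    """Combine a VM-level signal (CPU) with a container-level signal
    (thread utilization) to pick a remedy. Thresholds are arbitrary
    illustrative values."""
    app_load = busy_threads / total_threads
    if vm_cpu > 0.85 and app_load > 0.85:
        return "spin up more VMs"      # hot at both levels: real demand
    if vm_cpu > 0.85:
        return "rebalance containers"  # VM hot but app idle: placement issue
    return "no action"

print(scaling_action(0.95, 90, 100))   # spin up more VMs
print(scaling_action(0.95, 40, 100))   # rebalance containers
```

With only the first argument available, the two cases above would look identical, which is exactly the black box the post describes.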

The addition of SpringSource also complements VMware’s cloud portfolio in other ways. In his blog about the deal, SpringSource CEO Rod Johnson noted that the idea of pairing it with VMware’s Lab Manager (the test lab automation piece that VMware picked up through the Akimbi acquisition) proved highly popular with Spring framework customers. In actuality, if you extend Lab Manager from simply spinning out images of testbeds to spinning out runtime containers, you would have VMware’s answer to IBM’s recently introduced WebSphere CloudBurst appliance.

VMware isn’t finished, however. The most glaring omission is the need for distributed Java object caching to provide yet another alternative for scalability. If you rely only on spinning out more VMs, you get a highly rigid, one-dimensional cloud that will not provide the economies of scale and flexibility that clouds are supposed to provide. So we wouldn’t be surprised if GigaSpaces or Terracotta were next in VMware’s acquisition plans.

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum. His profile is here. You can reach him here.

Monday, August 10, 2009

BriefingsDirect analysts debate the 'imminent death' of enterprise IT as cloud models ascend

Download or view the transcript. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at

Welcome to the latest BriefingsDirect Analyst Insights Edition, Volume 43. Our topic centers on the purported pending death of corporate IT. You may recall that in the early 1990s, IT pundits also glibly predicted that the plug would be pulled on the last mainframe in 1996. It didn't happen.

The mainframe continues to support many significant portions of corporate IT functions. But these sentiments are newly rekindled and expanded these days through the mounting expectations that cloud computing and software-as-a-service (SaaS) will hasten the death of on-premises enterprise IT.

Some of the analyst reports these days indicate that hundreds of billions of dollars in IT spending will soon pass through the doors of corporate IT and into the arms of various cloud-service providers. We might conclude that IT is indeed about to expire.

Not all of us, however, subscribe to this view of the pace of the demise of on-premises systems and their ongoing upkeep, maintenance, and support. To help us better understand the actual future role of IT on the actual floors inside of actual companies, we're joined by our guests and analysts this week: Jim Kobielus, senior analyst at Forrester Research; Tony Baer, senior analyst at Ovum; Brad Shimmin, principal analyst at Current Analysis; Ron Schmelzer, senior analyst, ZapThink; Sandy Rogers, former program director at IDC, and now independent IT analyst and consultant, and, as our guest this week, Alex Neihaus, vice president of marketing at Active Endpoints.

Here are some excerpts:
Kobielus: I can predict right now, based on my conversations with Forrester customers, and specifically my career in data warehousing and business intelligence (BI), that this notion of the death of IT is way too premature, along the lines of the famous Mark Twain quote.

... There aren't a substantial number of enterprises that have outsourced their data warehouse or their marts. [But] I think 2011 will see a substantial number of data warehouses deployed into the cloud.

The component of your data-warehousing environment that will be outsourced to public cloud, initially, in many cases, will not be your whole data warehouse. Rather it will be a staging layer, where you're staging a lot of data that's structured and unstructured and that you're pulling from internal systems, blogs, RSS feeds, and the whole social networking world -- clickstream data and the like.

Baer: I just completed a similar study in application lifecycle management (ALM), and I did find that the cloud certainly is transforming the market. It's still at the very early stages, but ... two areas really stuck out. One is anything collaborative in nature, where you need to communicate -- especially as development teams go more global and more distributed -- ... [and] planning, budgeting, asset management, project portfolio management, and all those collaborative functions did very well [in the cloud].

Another area that did very well ... is anything that had very dynamic resource needs, where today you need a lot of resource and tomorrow you don't. A good example of that is testing -- if you are a corporate IT department with periodic releases, you have peaks and valleys in terms of when you need to test and run regression tests.

[But] I got a lot of reluctance out there to do anything regarding coding in the cloud. ... So, in terms of IT being dead, well, at least with regard to cloud and on-premise, that's hardly the case in ALM.

Shimmin: Because I follow the collaboration area, I see [cloud adoption] happening much, much more quickly. ... Those are the functions that IT would love to get rid of. It's like a diseased appendix. I would just love to get rid of having to manage Exchange Servers. Any of us who have touched any of those beasts can attest to that.

So, even though I'm a recovering cynic and I kind of bounce between "the cloud is just all hype" and "yes, the cloud is going to be our savior," for some things like collaboration, where it already had a lot of acceptance, it's going to drive a lot of costs [out].

Schmelzer: It's really interesting. If you look at when most of the major IT shifts happen, it's almost always during periods of economic recession. ... Companies are like, "I hate the systems I have. I'm trying to deal with inefficiency. There must be something wrong we're doing. Let's find some other way to do it." Then, we go ahead and find some new way to do it. Of course, it doesn't really solve all of our problems.

The cost-saving benefit of cloud is clearly there. That's part of the reason there is so much attention on it. People don't want to be investing their own money in their own infrastructure. They want to be leveraging economies of scale, and one of the great things that clouds do is provide that economy of scale. ... On the whole question of IT, the investments, and what's going to happen with corporate enterprise IT, I think we're going to see much bigger changes on the organizational side than the technological side. It’s hard for companies to get rid of stuff they have invested billions of dollars in.

... IT organizations will become a lot smaller. I don't really believe in a 4,000-person IT organization whose primary job is to keep the machines running. That's very industrial revolution, isn't it?

Rogers: I see enterprises all the time that are caught between a rock and a hard place, where they have specialized technologies that were built out in the client-server era. They haven't been able to find any replacements.

Replacing a legacy system that may be very specialized and far-reaching, with a lot of integrations and dependencies with other systems, is a very difficult change. ... When we're talking about cloud and SaaS, it's going to impact different layers. ... We may want to think about leveraging other systems and infrastructure, more of the server, more of the data center layer, but there are going to be a huge number of implications as you move up the stack, especially in the middleware and integration space.

We're still at the very beginning stages of leveraging services and SOA, when you look at the mass market. ... There's a lot of work that needs to be done to just think about turning something off, turning something on, and thinking that you are going to be able to rely on it the same way that you've relied on the systems that have been developed internally. It's not to say it won't be done, but it certainly has a big learning curve that the whole industry will be engaging in.

Neihaus: What we find more interesting is not the question of whether the cloud will subsume IT, or IT will subsume the cloud, but who should be creating applications? ... There is a larger question today of whether end users can use these technologies to completely go around IT and create their own applications themselves.

For us, that seems to be the ultimate disingenuousness, the ultimate inability, for all the reasons that everyone discussed. ... The question really is whether the combination of these technologies can be made to foster a new level of collaboration in enterprises where, frankly, IT isn't going to go away. The most rapid adoption of these technologies, we think, is in improving the way IT responds in new ways, and in more clever ways, with a lot more end-user input, into creating and deploying applications.

For us, the cosmic question is whether we are really at the point where end users can take elements that exist in the cloud and their own data centers and create processes and applications that run their business themselves. And our response is that that's probably not the case, and it's probably not going to be the case anytime soon. If, in fact, it were the case, it would still be the wrong thing to do in enterprises, because I am not sure many CEOs want their business end users being IT.

Kobielus: You need strong governance to keep this massive cloud sandbox from just becoming absolute chaos.

So, it's the IT group, of course, doing what they do best, or what they prefer to be doing, which is architecture, planning, best practices, templates, governance control, oversight support, and the whole nine yards to make sure that, as you deal in new platforms for process and data, such as the cloud, those platforms are wrapped with strong governance.

Baer: You can't give users the ability to mash up assets and start creating reports without putting some sort of boundary around it.

This is process-related, which is basically instituting strong governance and having policies that say, "Okay, you can use these types of assets or data under these scenarios, and these roles can access this and share this."
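Baer's point about bounding what roles can do with which assets amounts to an explicit allow-list policy. A minimal sketch of that idea in Python follows; the asset, role, and action names are purely hypothetical illustrations, not any vendor's actual governance API:

```python
# Hypothetical role-based governance policy: for each asset class,
# which roles may perform which actions. Anything not explicitly
# granted is denied -- the "boundary" Baer describes.
POLICY = {
    "sales_reports": {"analyst": {"access", "share"}, "intern": {"access"}},
    "customer_pii":  {"compliance": {"access"}},
}

def is_allowed(role: str, asset: str, action: str) -> bool:
    """Return True only if the policy explicitly grants this role the action."""
    return action in POLICY.get(asset, {}).get(role, set())
```

Under this default-deny scheme, an intern could view sales reports but not share them, and no one but compliance could touch customer PII; real deployments would express the same idea in a policy language such as XACML rather than application code.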

Rogers: The sophistication of the solution interfaces, and the management and administrative capabilities to enable governance, are very nascent in the cloud offerings. That's an opportunity for vendors to approach this. There's an increasing need to compose and integrate silos within organizations, and that has huge implications for governance activities.

Gardner: I'd like to go around the table, on a scale of 1 to 10 where do you think we're going to see the IT department's role in three years -- with 1 being IT is dead, and 10 being IT is alive, robust, and growing vibrantly?

Kobielus: I'll be really wishy-washy and give it a 5. ... IT will be alive, kicking, robust, and migrating toward more of a pure planning, architecture, and best practices function.

Much of the actual guts of IT within an organization will migrate to hosted environments, and much of the development will be done by end users and power users. I think that's the writing on the wall.

Baer: I am going to give it an 8. ... I don't see IT's role diminishing. There may be a lower headcount, but that can just as much be attributed to new technologies that provide certain capabilities to end users, and to the use of some external services. That's independent of whether there's a role for IT, and I think it pretty much still has one.

Shimmin: I'm giving it a 7 for similar reasons. I think it's going to scale back in size a little bit, but it's not going to diminish in value. IT is just a continuously changing thing. ... I think it's going to be very much alive, but the value is going to be more of a managerial role working with partners. Also, the role is changing to be more of business analysts, if you will, working with their end users too. Those end users are both customers and developers, in some ways, rather than these guys just running around, rebooting Exchange servers to keep the green lights blinking.

Schmelzer: I think it's a 10. IT is not going to go away. I don't think IT is going to be suffering. ... I guarantee that, whatever it looks like, the IT organization will still be just as important.

Rogers: Probably in the 7 to 8 range. ... In some enterprises, IT is in deep trouble if they do not embrace new technologies and new opportunities and become an adviser to the business. So it comes down to IT's transition, and to understanding all the tools and capabilities at its disposal to accomplish what it needs to.

Some enterprises will be in rough shape. The biggest changeover is in the vendor community. They are in the midst of changing over from being technology purveyors to solution and service purveyors. That's where the big shift is going to happen over the next three years.

Neihaus: Our self-interest is in a thriving segment of IT, because that's who we serve. So I rate it a 10, for all of the reasons that the much-more-distinguished-than-I panel has articulated. The role of IT is always changing and is impacted by the technologies around it, but I don't think that can be used as an argument that its importance or its capabilities inside organizations are going to diminish.

Gardner: Well, I'll go last, and I'll of course cheat, because I'm going to break it into two questions. I think IT's importance will be as high or higher, so 8 to 10. But its budget -- the percentage of the organization's total revenues it's able to command -- will be going down. The pressure will be on from a price and budgeting perspective, so on that score the role of IT will probably be down around a 4.
Download or view the transcript. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at