Wednesday, December 10, 2008

More than cost savings alone, cloud computing will transform business, say HP and Capgemini

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Read related white paper. Sponsor: Hewlett-Packard.

Read complete transcript of the discussion.

Many enterprises and service providers are now grappling with how cloud models and economics will impact them. The specter of a challenging business climate may well hasten the need to seek IT resources that are supported through greater use of cloud computing approaches -- to save money, as well as to better reach global audiences and gain Web-scale efficiencies.

The goal is to take advantage of what cloud models offer, but to do so with low risk and in alignment with enterprise IT dictates and requirements around management, security, governance, and visibility. There are a host of innovations around the various cloud models that are now just emerging and that we're only beginning to discover. These amount to being able to do business in new ways by using cloud models to accomplish things that simply could not be done before.

To better understand the value and opportunity unfolding around cloud computing, I recently interviewed Andy Mulholland, global chief technology officer at Capgemini; Tim Hall, director of service oriented architecture (SOA) products at HP Software and Solutions; and Russ Daniels, vice president and CTO of cloud services strategy at HP.

Capgemini has published a new white paper on some of these same issues, "Capgemini: The Cloud and SOA: Creating an Architecture for Today and for the Future." It is available for free download here (registration required).

Furthermore, Capgemini's Mulholland will be delivering several presentations on these challenges and opportunities at the HP Software Universe conference in Vienna this week, at a time when Amazon Web Services is quickly gaining traction in Europe.

Here are some excerpts from the podcast:
When we talk about the cloud ... it's a new model for constructing software. It's a new design pattern, and it allows you to solve problems that really have been out of reach. You can take on business needs which, if you tried to address them in the context of traditional IT design and delivery models, would tend to fail or under-deliver.

The cloud allows you to go after those problems, to open new markets for the business, to allow it to reach out to customers that it hasn't been able to get to, to improve its differentiation in the market, and to contribute to the real goals of the business itself. That's what we think is exciting about the cloud.

There is this premise that [cloud computing] can help me look at how I manage and reduce my cost. Perhaps more importantly, we should say it the other way around. It enables me to address how I deal with a more variable business pattern and pay for what I need when I need it.

Many of the things a business does today are relatively fixed. ... But what we have is a growing desire and a growing need to find new things in the front office, about how we run our business more effectively, how we get into markets more effectively, and how we trade better. These tend to be small, fast-moving projects. They make a very big difference, and we simply don't want the same time scale in provisioning for them.

Increasingly, probably over the next couple of years, people don't want to spend capital on them. They'll want to pay for them operationally. They represent a new market, a new technique, a new set of standards, and a new set of technologies. All of that comes together in where cloud is going to go and make the difference to businesses.

... You start to recognize there are already a number of very well known brands that sell through the Internet and combine their services. ... The challenge in this is how it moves from being something that a handful of Web-based businesses are using. How do more businesses learn how to exploit that market and take their share of commercial revenue from that market?

When we think about the cloud, we don't think it's just a matter of how infrastructure is packaged, but it's really a combination of the impact of service oriented architecture (SOA) starting to break apart applications. We think more about the services to separate out the data from the applications, so that you can get at the data without having to go through complex application integrations.

There's another piece around taking advantage of Web 2.0 innovations, which includes not only how you can create rich user experiences in the context of browsers in these remote execution models, but also, significantly, the social dimension. How can you take advantage of the innovation that's occurred in the consumer space by understanding the importance of bringing people together?

In many companies, they're trying to exploit these things, but they are doing it with a complete lack of structure. By bringing in a cloud model successfully, you're actually introducing some structure to support the very activities that people are increasingly experimenting with in their businesses today.
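To make the earlier point about separating data from applications more concrete, here is a minimal, hypothetical sketch of a thin data service that exposes read-only records directly over HTTP, so consumers can get at the data without going through the owning application's integration layer. The class name, endpoint, and sample records are invented for illustration; nothing here comes from the discussion itself.

```java
// Minimal sketch of a data service that exposes read-only data directly,
// without routing callers through the owning application's integration layer.
// Names (CustomerDataService, /customers) are illustrative only.
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.Map;

public class CustomerDataService {
    // In a real deployment this would read from the system of record.
    private static final Map<String, String> CUSTOMERS = Map.of(
            "1001", "{\"id\":\"1001\",\"name\":\"Acme Corp\",\"region\":\"EMEA\"}",
            "1002", "{\"id\":\"1002\",\"name\":\"Globex\",\"region\":\"AMER\"}");

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/customers/", exchange -> {
            String id = exchange.getRequestURI().getPath().replace("/customers/", "");
            String body = CUSTOMERS.getOrDefault(id, "{\"error\":\"not found\"}");
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(CUSTOMERS.containsKey(id) ? 200 : 404, bytes.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(bytes);
            }
        });
        // Consumers hit GET /customers/{id} and never touch the application tier.
        server.start();
    }
}
```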

... If you have been doing new stuff, and you are building new stuff inside the organization, you really ought to have started doing that around SOA. If you're using services correctly internally, then of course, you can cross the firewall and start to use services outside, and blend them together.

Folks are looking at this as an integration technology, instead of a complete transformation of how they deliver service orientation or business services more comprehensively and more flexibly to address some of the unique challenges that the business is facing. ... SOA adoption, as a transformational agenda, is a microcosm of some concepts that apply very specifically to cloud and preparing people for cloud adoption.

What we find with our customers is that many workloads are important to the business, but they are not mission critical. In many of these workloads, good-enough delivery is good enough. ... Distinguishing between those types of workloads, identifying those where good-enough delivery is appropriate, and moving those into virtualized and automated delivery models, positions you to take advantage of external infrastructure capabilities as appropriate.

The key challenge for any IT organization is to understand what the business really needs, where the business value is, and how technology can help deliver that. This question of business-IT alignment is always at the heart of the problem, and it will certainly be true in terms of how the business chooses to go after cloud-based opportunities.

We think the cloud is great for connecting. It's great for connecting business to business. It's great for connecting business to its customers. ... Where is connecting important to your business? That's ultimately a business question, not a technology question. The focus should be on having people who can map from what the business needs to understanding how to exploit this new expressiveness that the cloud brings to solve the most pressing challenges, or to exploit the most exciting opportunities that the business faces.

... [It comes down to] the difference between interactions, which is a lot of this new market, and transactions, which was the old IT market. When you look at any IT system, it's fundamentally about getting a safe transaction to record what you have done. But, if you think about someone trying to decide what they're going to buy from you, like buying an airline ticket, deciding which flight and how much money they're going to pay and which extras they're going to have, it's a lot of interactions.

... We don't think the cloud is great for “transactionality,” for deep, technical reasons. ... The place where the cloud is great is where you're not focused on supporting transactions, but interactions, where you are connecting. It's being able to take state from participants in an extended supply chain and propagate that information up through data feeds, up into a cloud service.

For example, that information might be related to the carbon footprint related to material flowing through an extended supply chain. Each of the participants in an extended supply chain can simply publish a data stream that captures the carbon footprint of the materials that they will be producing. Now, you can run analytics in the cloud, using search-like algorithms, to answer questions about the carbon footprint for some end products. You don't have to do the detailed process integrations. You don't have to provide detailed transactional integrations across the supply chain system to support it.

It's exactly that new expressiveness that allows us to go after problems that we really couldn't have done affordably in the past. Because we couldn't do them that way, we ended up doing things manually and in emergencies. If you think about product traceability, it's the same problem, very difficult to deal with from a technology integration perspective in the traditional ways. As a result, when there's a problem, we have people pawing through information spreadsheets manually and providing the answers too late to be helpful.
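The carbon-footprint scenario described above lends itself to a short sketch. Assuming each supply chain participant simply publishes a per-material CO2 figure to a shared feed, a cloud-side aggregation can answer footprint questions against a bill of materials without any process-level integration. The names and numbers below are invented for illustration.

```java
// Hedged sketch of the carbon-footprint example from the discussion: each supply
// chain participant publishes a simple data feed of per-material CO2 figures, and
// a cloud-side aggregation answers "what is the footprint of this end product?"
// without any process-level integration. All names and numbers are illustrative.
import java.util.HashMap;
import java.util.Map;

public class CarbonFootprintAggregator {
    // feed entries published by participants: material id -> kg CO2 per unit
    private final Map<String, Double> publishedFootprints = new HashMap<>();

    public void acceptFeedEntry(String materialId, double kgCo2PerUnit) {
        publishedFootprints.put(materialId, kgCo2PerUnit);
    }

    // bill of materials for an end product: material id -> units consumed
    public double footprintOf(Map<String, Integer> billOfMaterials) {
        double total = 0.0;
        for (Map.Entry<String, Integer> line : billOfMaterials.entrySet()) {
            // unknown materials contribute zero rather than failing the query
            total += publishedFootprints.getOrDefault(line.getKey(), 0.0) * line.getValue();
        }
        return total;
    }

    public static void main(String[] args) {
        CarbonFootprintAggregator cloud = new CarbonFootprintAggregator();
        cloud.acceptFeedEntry("steel-sheet", 1.8);     // supplier A's feed
        cloud.acceptFeedEntry("plastic-housing", 0.6); // supplier B's feed

        Map<String, Integer> product = new HashMap<>();
        product.put("steel-sheet", 4);
        product.put("plastic-housing", 2);
        System.out.printf("Estimated footprint: %.1f kg CO2%n", cloud.footprintOf(product));
    }
}
```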

... The cloud allows you to deliver the business results that matter. In other words, it really has to be thought of in the context of IT as technology for business. The key business challenges that we see our customers facing today are: How do they develop new markets? How do they take advantage of the abilities they have and deliver them to new customers? How can they understand better what their customers need, and how can they fit in and connect with them?

The cloud provides great capabilities for that. We think that it's still early, and you can see the promise in things like the recommendation engines that you find at online shopping sites. You're searching for something, and, based on your buying history, your demographics, your search behaviors, and then comparing that to the behaviors of others, the site can provide you with suggestions about other things that might be of interest to you as well. The technology helps identify your intentions and then offers suggestions to help you find things better suited for your needs than what you could have expressed or identified yourself.

That's a wonderful opportunity, and to be able to expand that approach into more and more of the ways that a business connects has huge implications.
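As a rough illustration of the recommendation idea, the toy sketch below scores items by how often they co-occur with a shopper's own purchases in other shoppers' baskets. Real engines also weigh demographics and search behavior over far larger data sets; this is only a minimal co-occurrence model with invented item names.

```java
// A minimal co-occurrence recommender: suggest items that frequently appear in
// other shoppers' baskets alongside items this shopper already bought.
// Purely illustrative; not any particular retailer's engine.
import java.util.*;

public class CooccurrenceRecommender {
    public static List<String> recommend(Set<String> myPurchases,
                                         List<Set<String>> otherShoppers,
                                         int limit) {
        Map<String, Integer> scores = new HashMap<>();
        for (Set<String> basket : otherShoppers) {
            // only learn from shoppers who share at least one purchase with us
            if (Collections.disjoint(basket, myPurchases)) continue;
            for (String item : basket) {
                if (!myPurchases.contains(item)) {
                    scores.merge(item, 1, Integer::sum);
                }
            }
        }
        return scores.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                .limit(limit)
                .map(Map.Entry::getKey)
                .toList();
    }

    public static void main(String[] args) {
        Set<String> me = Set.of("travel-guide", "camera");
        List<Set<String>> others = List.of(
                Set.of("travel-guide", "phrase-book", "adapter"),
                Set.of("camera", "tripod", "adapter"),
                Set.of("novel", "cookbook"));
        System.out.println(recommend(me, others, 2)); // e.g. [adapter, phrase-book]
    }
}
```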

Relatively speaking, [cloud computing] is unstoppable. The question is whether you'll crash into it or migrate into it. Why is it unstoppable? Because we're watching a business shift; people have to find ways to compete better in the market. Much of that is around: "How do I add smart services? How do I make products more available? How do I communicate directly and intimately with people, so they know what they want to buy from me?" All of those things are already developing in many businesses today, and people are building solutions to do that, sometimes gracefully, and sometimes not at all gracefully.

In other words, just as we had with the PC, where we basically were driven into it, some companies got there in a very ungraceful way and had to figure out afterward how to sort out the mess. Others did have a strategy, and emerged in a very graceful way. I think we're in the same situation. Users wanting social software have taken us there to run and do things better. We've been taken there by businesses needing to get into new markets.
Read complete transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Read the related white paper. Sponsor: Hewlett-Packard.

Linthicum podcast: Cloud computing plus recession equals IT transformation

I had the pleasure of joining long-time IT thought leader and IT executive Dave Linthicum this week for a podcast on cloud computing.

We get into a discussion on the alignment of the dour economic climate with the newer services architectures for mixing and matching IT assets and resources with more flexibility. We also look into some recent news events relating to cloud, and offer analysis of their impact on these issues.

Dave has created a new consulting emphasis on cloud computing, and is finding strong interest from enterprises in the subject and its effects. He also joins me regularly on the BriefingsDirect Analyst Insights podcast series.

If you're interested in understanding how cloud computing is affecting your company and IT department -- especially as a change agent during transformative times -- you may enjoy and benefit from the podcast. You can also subscribe to the ongoing Linthicum cloud podcast or find it on iTunes.

Tuesday, December 9, 2008

Remote support offers enterprises avenue to cut operational costs while improving IT systems reliability

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Read complete transcript of the discussion.

The trend around use of remote support software for monitoring, remediation and IT maintenance automation is gaining steam in the global enterprise IT market. I certainly expect that, as companies become even more cost conscious, they will seek to further reduce their total cost of IT operations in any way possible. Remote support best practices and effective use will therefore become even more prominent.

The goal of remote support software and services is to free up on-premises IT personnel to focus on what they do best, and to offload routine chores to organizations that can leverage the Internet to do IT support remotely at high efficiency and lower cost. Remote support has already become very popular among PC owners, and the same value is now emerging as a cloud computing service for general server support and data center maintenance worldwide.

To better understand the options for better remote monitoring, resolving, and automating of the ongoing performance support of IT systems, I recently interviewed Dionne Morgan, worldwide marketing manager in HP Technology Services, and Claudia Ulrich, communications manager in Delivery Engineering at HP.

Here are some excerpts:
At many companies, IT managers understand that ongoing administration and maintenance of their existing infrastructure consumes most of the IT budget. ... Far too much time has been spent by IT staff on managing, monitoring, and troubleshooting their IT infrastructure. Obviously, this can be very expensive in both time and money. Too often, there's increased risk and unplanned downtime, which lead to an inability to meet business objectives and achieve business outcomes.

We're also finding that system complexity is adding to the problem. ... When a problem occurs in the infrastructure, finding the source and the nature of the problem -- and then coming up with the resolution -- can also be a daunting task.

It could be anything from actual hardware failure and trying to detect exactly where within the system the failure has occurred, to a need for additional memory or additional hard drive space. Those are some of the typical problems that our customers are facing, and those are the problems where you can automate the process of identifying the nature of that problem and coming up with the solution.

[Enterprises are] moving from traditional phone-in [and help desk] support and on-site delivery to automated event reporting, also called "phone home" capabilities. This adds to customers' manageability solution the ability to monitor the complete enterprise environment by automatically submitting incidents to the remote support provider, which increases the level of service and, in turn, improves availability and reduces service cost for the customer.

One reason this helps to manage personnel is that it's going to be constantly monitoring the environment 24/7. Even at the end of the day, when the staff goes home, the system is still monitoring, and it helps to filter the actual events that are coming through, so that the IT organization can prioritize which of those events they need to take action on. It's actually removing some of the mundane tasks of troubleshooting and prioritizing the events or the incidents.
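Here is a minimal sketch of the "phone home" pattern described above: an agent that watches events around the clock, filters informational noise, and submits only actionable incidents to the remote support provider. The class and event names are hypothetical and do not represent HP's remote support software.

```java
// Illustrative sketch of automated event reporting ("phone home"): the agent
// observes hardware events continuously, discards routine noise, and submits
// actionable incidents to a remote support provider. Hypothetical names only.
import java.util.List;

public class PhoneHomeAgent {
    enum Severity { INFO, WARNING, CRITICAL }

    record HardwareEvent(String deviceId, Severity severity, String description) {}

    interface RemoteSupportProvider {
        void submitIncident(HardwareEvent event);
    }

    private final RemoteSupportProvider provider;

    PhoneHomeAgent(RemoteSupportProvider provider) {
        this.provider = provider;
    }

    // Called for every event the agent observes, 24/7; only events that need
    // attention are escalated, so staff can prioritize instead of triaging noise.
    void onEvent(HardwareEvent event) {
        if (event.severity() == Severity.INFO) {
            return; // filtered: routine event, no incident raised
        }
        provider.submitIncident(event);
    }

    public static void main(String[] args) {
        PhoneHomeAgent agent = new PhoneHomeAgent(
                e -> System.out.println("Incident submitted: " + e));
        List<HardwareEvent> observed = List.of(
                new HardwareEvent("srv-01", Severity.INFO, "fan speed adjusted"),
                new HardwareEvent("srv-02", Severity.WARNING, "predictive disk failure"),
                new HardwareEvent("srv-03", Severity.CRITICAL, "memory module failed"));
        observed.forEach(agent::onEvent);
    }
}
```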

We're looking at the complete, heterogeneous IT environment. This includes servers, storage, and network equipment, not only from HP, but also from selected vendors, such as IBM and Dell servers, as well as Brocade and Cisco switches. ... This also includes industry trends toward virtualization, blades, and cloud computing, as they evolve.

I believe that down the road we'll see an expansion of the products that are covered by remote support. We'll begin to look at the total environment, in addition to the infrastructure. We'll also see organizations looking at how to automate processes, how to help with monitoring and troubleshooting applications.

[For now], remote support is a critical piece of establishing the next-generation data center. HP has defined six enablers to build this next-generation data center, and HP Remote Service Pack (RSP) can definitely contribute to these enablers. ... We're really looking at this as one foundation to enable consolidation and modernization of data centers, and also to be able to transition between the two, using a common management system.

If you think about the Information Technology Infrastructure Library (ITIL), and the fact that we have a lifecycle that includes strategy, design, transition, operation, and continual service improvement, this is going to help automate many of the support processes that you need on an ongoing operational basis, including incident management. These tools can also assist with help desk management and asset management.

What we have found with customers is that, when they are using these remote support tools, they're actually able to reduce the amount of time they spend in troubleshooting by 20 percent, and they're also able to increase the accuracy of the diagnosis by over 99 percent. So, with these remote support tools monitoring the heterogeneous environment, the process of troubleshooting and isolating the problem actually speeds up.
Read complete transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Wednesday, December 3, 2008

Active Endpoints beefs up visual SOA orchestration with added features and expanded OS support

Active Endpoints, the inventor of the visual orchestration system (VOS), has spruced up its flagship offering with enhancements that include operational improvements and expanded operating system and database support.

The Waltham, Mass.-based company has released ActiveVOS 6.0.2, which includes new reporting capabilities, an enhanced ability to reuse plain old Java objects (POJOs), and new platform and operating system support.

The new version of ActiveVOS also addresses the ongoing debate over whether service oriented architecture (SOA) modeling languages, such as business process modeling notation (BPMN), can be directly executed by the business process management system (BPMS) or must first be serialized into an execution-focused language like business process execution language (BPEL). [Disclosure: Active Endpoints is a sponsor of BriefingsDirect podcasts.]

According to Active Endpoints' Alex Neihaus, vice president of marketing:
We support the latter approach on the simple theory that models aren’t fully specified and therefore cannot be executed on real machines – on which everything must be specified. That’s why we believe most modeling-oriented BPMSs end up frustrating business analysts and forcing developers to write lots of Java code to implement simple processes.
Neihaus said that the new version is the first iteration of something the company intends to pursue: unification of the visual styles of creating models and process design. An example of this can be found at the Active Endpoints Web site, where the “bordered style” link in the second row of the table shows a screen shot of the new design. Neihaus adds:
It’s just a beginning, but our BPEL processes are beginning to look and feel like BPMN-style models, and vice versa. As we do more of this, we’ll obviate the debate, bring analysts and developers closer together and make it possible to deliver BPM applications more easily.
In other enhancements to the product, service level reporting provides a granular perspective into average process response time, giving users the opportunity to identify and respond to bottlenecks that affect overall process performance.

New reporting capabilities include a summary of the response time of the top ten activities of the process, allowing operations staff to optimize running processes. Rich scheduling helps define the frequency of process execution using any combination of months, weeks, days, hours, minutes and seconds.
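To show roughly what such a service-level report computes, the generic sketch below averages response times per process activity and surfaces the slowest activities first. It is an illustrative assumption, not the ActiveVOS reporting implementation.

```java
// Generic sketch of activity-level response-time reporting: average the samples
// per activity and list the slowest activities first (a real report would show
// the top ten). Not ActiveVOS code; names and numbers are invented.
import java.util.*;
import java.util.stream.Collectors;

public class ActivityResponseReport {
    record Sample(String activity, long millis) {}

    public static void main(String[] args) {
        List<Sample> samples = List.of(
                new Sample("ValidateOrder", 120), new Sample("ValidateOrder", 180),
                new Sample("CheckCredit", 900), new Sample("CheckCredit", 1100),
                new Sample("ShipNotice", 60));

        // average response time per activity
        Map<String, Double> averages = samples.stream()
                .collect(Collectors.groupingBy(Sample::activity,
                        Collectors.averagingLong(Sample::millis)));

        // slowest activities first, capped at ten entries
        averages.entrySet().stream()
                .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
                .limit(10)
                .forEach(e -> System.out.printf("%-15s %.0f ms%n", e.getKey(), e.getValue()));
    }
}
```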

ActiveVOS now allows Java objects to maintain state, significantly increasing the number of use cases that can be fulfilled using the POJO capabilities. Previously, developers could reuse POJOs as native web services, providing the ability to invoke stateless Java objects directly from a process.
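The stateless-versus-stateful distinction can be pictured with a small, hypothetical example: a plain Java object that accumulates state across repeated invocations from a process, then uses it on a final call. This is not the ActiveVOS POJO API, only a sketch of why statefulness widens the use cases.

```java
// Illustrative only: a POJO that maintains conversational state across repeated
// invocations from a process, versus a stateless call that forgets everything.
// Not the ActiveVOS API.
import java.util.ArrayList;
import java.util.List;

public class OrderSessionPojo {
    // state retained between invocations for the same process instance
    private final List<String> lineItems = new ArrayList<>();

    public void addItem(String sku) {          // invoked repeatedly by the process
        lineItems.add(sku);
    }

    public String submit() {                   // final invocation uses accumulated state
        return "Order placed with " + lineItems.size() + " items: " + lineItems;
    }

    public static void main(String[] args) {
        OrderSessionPojo session = new OrderSessionPojo();
        session.addItem("SKU-100");
        session.addItem("SKU-200");
        System.out.println(session.submit());
    }
}
```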

In addition to the server operating systems, application servers, and database systems previously supported, ActiveVOS is now certified on additional platforms.
The latest version of ActiveVOS can be downloaded from www.activevos.com. A free, 30-day trial is available and includes support to assist in the evaluation.

ActiveVOS 6.0.2 is priced at $12,000 per CPU socket for deployment licenses. Development licenses are priced at $5,000 per CPU socket. Pricing is also available for virtualized environments.

Monday, December 1, 2008

Interview: HP’s Tim Hall on heightening roles of governance in SOA, cloud and managing dynamic business boundaries

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Read complete transcript of the discussion.

As enterprises scale up their use of service oriented architecture (SOA), proper governance is providing an insurance effect. By deploying governance alongside and in sync with SOA development and deployment capabilities, enterprises are growing the use of SOA without stumbling -- allowing companies to “crawl, walk, and run” to SOA without losing control.

SOA governance heightens the business benefits of services, increases IT efficiency returns, and reduces the risk that complexity could undermine the services lifecycle in large organizations. Services governance also sets the stage for leveraging cloud and third-party services, while managing the boundary between internal and external services.

To unpack the relationship between SOA, governance, cloud and IT management, I recently interviewed Tim Hall, director of SOA Products for HP Software and Solutions.

Here are some excerpts:
The purpose of SOA governance has been to set the architectural vision and direction, lay the ground rules under which those activities are going to take place, and then foster collaboration between architects and the other people who engage in the processes of building solutions for companies, be they consumer focused, or be they within enterprise IT.

The whole thing is tracking your progress: where you are in this journey. It's not about installing a new pack of middleware and then declaring victory. You really have to measure along the way what you are doing, and how far you have gotten. Some measures that people start off looking at are things like reuse.

The whole notion of providing a service is to hide the complexity behind layers of abstraction, so that we can make changes behind the scenes that don't necessarily disrupt or alter the offering of the service. There are a lot of examples of this in the real world. Why hasn't IT been able to do a better job of capitalizing on those things?

This is one of those transformation opportunities. We're not just talking about Web services. We're talking about different ways in which we need to be able to flexibly compose and offer capabilities back to the business through a channel called a service. ... The adoption of services as a fundamental unit of commerce, if you will, within IT does something very fundamental to the way that people work together.

From HP's perspective, we are definitely trying to make sure that the collaboration between architects, quality assurance professionals, and operations personnel is there. The various solution offerings that we're bringing to market are meant to make sure that none of these is an island. Those control points can reasonably be connected and allow for collaboration across all the different participants.

We're learning lots of interesting things about IT, and in particular, the ways that we can do things better. The whole notion of instilling an architectural vision to support change and flexibility; to give tools to the folks who are building composite systems, so they can better manage the roles and responsibilities for the various people that are participating in that; and better communicate with operations is something that we haven’t done very well.

It's really a matter of mapping your organizational maturity, and what you're trying to achieve, with the appropriate tools. People shouldn't be running out and buying tools unless they really understand what problems those tools are going to solve, and unless they, as an organization, can introspect what they have done in the past and say which problems they want to solve or avoid.

The lessons that we're learning ... are specifically being applied to SOA right now, [but] have more far-reaching implications. As we look at things, like the different compositional patterns for systems that are coming -- Web 2.0 technologies, Ajax, rich Internet applications (RIAs), putting front ends on some of these things, or cloud computing -- all of these things are interrelated. My question is, should we not be applying these fantastic concepts and activities that we have been establishing through SOA governance more broadly to support all of these different types of next-generation composition?

From HP's perspective the answer is absolutely. The question is at what point are we going to be talking about next-generation application lifecycle management, or next-generation application composition and stop talking about SOA by itself as an island.

... There are more people coming to the table, more constituents coming to say, “How can I connect to these governance activities that are going on for services, but really for the purpose of generating some new business outcomes?” That, to me, is tremendously exciting.

I think one of the things you're going to see -- I'm not sure how far in the future, it's coming up more and more these days -- is an emphasis on understanding the business-to-business connections, or what some folks will call "federation."

I want to be very specific when I say "federation," because it is one of those overloaded terms that creates a lot of mystery. If we can take the wraps off of federation, what we're talking about is a pattern for how to expose the capabilities that I own within my domain to other domains. Those other domains could be within my organization, they could be elsewhere, or they could be third parties.

The good news is that SOA fundamentally supports that type of activity. The question is how well the tools support that activity today.

As we move into a more comprehensive set of cloud offerings, we're going to need to federate the different instances of services, their metadata, their ownership, and the consumption of those pieces, and really formalize, using tools, the relationships between the consumers and providers of those things.

When I say establishing relationships, I think about the trading-partner agreements or supply chain agreements that get put in place between supply chain partners, specifying what information they're going to share and in what context they can use it.

We're really talking about doing the same kind of formalization with the consumption and providing of these various capabilities, in order for models like SaaS and cloud to scale up to the level that they need to in order to make a significant impact.
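One way to picture that kind of formalization is a simple registry of agreement records stating which consumer domains may use which provider capabilities, consulted before any cross-domain call is served. The sketch below is purely illustrative; the domain and capability names are invented.

```java
// Hedged sketch of formalized consumer/provider relationships in a federated
// setting: a registry of trading-partner style agreements, checked before a
// cross-domain capability is served. Domain and capability names are invented.
import java.util.Map;
import java.util.Set;

public class FederationAgreementRegistry {
    // provider capability -> consumer domains allowed by a signed agreement
    private final Map<String, Set<String>> agreements = Map.of(
            "inventory.lookup", Set.of("partner-a.example", "partner-b.example"),
            "pricing.quote",    Set.of("partner-a.example"));

    public boolean mayConsume(String consumerDomain, String capability) {
        return agreements.getOrDefault(capability, Set.of()).contains(consumerDomain);
    }

    public static void main(String[] args) {
        FederationAgreementRegistry registry = new FederationAgreementRegistry();
        System.out.println(registry.mayConsume("partner-b.example", "inventory.lookup")); // true
        System.out.println(registry.mayConsume("partner-b.example", "pricing.quote"));    // false: no agreement
    }
}
```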
Read complete transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Monday, November 24, 2008

Enterprises can leverage cloud models and manage transition risks using service governance, says HP

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Read complete transcript of the discussion.

Much has been said about cloud computing in 2008, and still many knowledgeable IT people scratch their heads over what it all really means. They want to know: How can enterprises best prepare to take advantage of this shift in IT resources -- but avoid transitional risks and uncertainty?

Clearly, cloud and on-premises compute grids exploit breakthroughs in technology architecture and the confluence of new business models. In times when every dollar counts more than ever, the enticement to experiment with cloud models is powerful. At the same time, there is very little margin for error. Adopting new IT approaches cannot be allowed to injure the business interests, image, or fiscal performance of any company.

To understand more about the balanced, best-practices route to emerging cloud and utility IT values, I recently spoke to executives at Hewlett-Packard (HP) and its EDS brethren. They point to the need to understand the role of service oriented architecture (SOA) and governance in exploring cloud opportunities. The future looks promising for cloud adoption, as long as there's a coordinated and managed approach.

To learn more about enterprise cloud adoption, please join the discussion with Rebecca Lawson, Director of Service Management and Cloud Solutions at HP; Scott McClellan, Vice President and Chief Technologist of Scalable Computing and Infrastructure in HP’s Technology Solutions Group; and Norman Lindsey, Chief Architect for Flexible Computing Services at EDS, an HP company.

Here are some excerpts:
Really, from an enterprise point of view, when running mission-critical applications that need security and reliability and are operating with service-level agreements (SLAs), etc., the cloud isn’t quite ready for prime time yet. There are both technical and business reasons why that’s the case.

As far as the idea of the cost savings, it’s good to look at why that is the case in a few certain areas, and then to think about how you can reduce the cost in your own infrastructure by using automation and virtualization technologies that are available today, and that are also used in the “cloud.” But, that doesn’t mean you have to go out to the cloud to automate and virtualize to reduce some cost in your infrastructure.

The cloud is an evolution of other ideas that have come before it: grid, and before that, Web services. All these things combine to enable people to start thinking of this as delivering a service with a different business model, where we are paying for it by the unit, or in advance, or after the fact.

Virtualization and these other approaches enable the cloud, but they aren’t necessarily the cloud. What IT departments have to do is start to think about what is it they’re trying to accomplish, what business problem they’re trying to address, as they look at cloud providers or cloud technologies to try and help solve those problems.

We’ve seen people build their own private utilities, versus public utilities such as those that flexible computing services provide. The idea of a private utility is that, within an organization, they agree to share resources and allow the boundaries to slide back and forth to get the best utilization out of a fixed set of assets, or maybe a growing set of assets.

The same idea is in a public utility or a public cloud, except that now a third party is providing those assets and providing that as a service. It increases the concerns and considerations that you have to bring to the party. You have to think about problems that you didn’t have to think about when you had a private utility.

When you go to a public space, security is paramount. What do I do with my proprietary information and service levels? How certain can I get what I need when I need it? The promise with the cloud is great, but the uncertainty has caused people to come up short and decide maybe it’s better if I do it myself, versus utilizing an outside service.

We need to think in terms of which services provide what level of value, based on the complexion of that particular company -- and it’s never going to be the same for all companies. Some companies can use Google Gmail as an email service. Other companies wouldn’t touch it with a 10-foot pole, maybe for reasons of security, data integrity, access rights, regulations, or what have you. So weighing the value is going to become the critical thing for IT.

In the longer term, the more overarching impact of cloud comes when your IT department can deliver value back to the business, rather than just taking cost out. Some examples of that are using aspects of social networking and other aspects of cloud computing, and the fact that cloud is delivered over a ubiquitous medium, the Internet, to increase share of wallet, increase market share, maybe bring higher margin to a business, build ecosystems, and drive user communities for a business. That’s where cloud brings value to a business, and that’s obviously important.

You can start to look around at your internal capabilities, versus external, and make some decisions as to how you want to solve that problem, whether buying an external service or creating a service internally and delivering it to your customers with your own internal utility. ... This will force IT to come closer to the people in the business and really understand what is the business objective, and then find the right service that maps to the value of that objective. Again, we can’t emphasize it enough. This should really change behavioral dynamics in IT and how they think about what their job is.

Basically, within the spectrum of things that are cloud computing, you have everything from infrastructure as a service … all the way up through virtualized infrastructure, a platform on top of that, an application on top of that, or perhaps a completely re-architected true cloud-computing offering.

As you move up that spectrum, I think the benefits increase, but not all application domains are available in all of those environments. ... What services are available through some cloud model? What is the model of availability, and what are the characteristics of that model? What are the requirements for that particular service, and what are the security, performance, continuity, integration, and compliance requirements? Those all have to be taken in holistically and through a governance model to make the decision whether we are going to move from the traditional deployment model to a cloud-delivery model, and if so, which one.

In the process of getting to a service-centric IT governance model, they’re going to have to deal with the governance model for deploying new services. Again, I think risk is partly a function of benefit. So when there is a marginal benefit or when the stakes are very high, you would want to be very conservative in terms of your risk profile.

The tougher economic conditions would heighten the acceleration of cloud computing, and not just because of the opportunity to save cost. Reinforcing what we brought up earlier, there are some clear opportunities to bring value to your business.

Examples of that are things like being able to drive user communities, users and consumers of whatever it is your business produces, using techniques of social networking, and things like that.

There is the question of how to use the advantages you get from cloud computing to drive differentiation for your business versus your competitors, because they're hesitating, or not using it, because they're being risk-averse. In addition, that complements the benefits you get from cost savings.

What I really meant is that, if you are an IT shop and you are trying to decide what to move to a cloud paradigm or a cloud model, you’re likely to really focus on the places where either you can get that big win -- because moving this particular service to a cloud paradigm is going to bring you some positive differentiation, some value to your company.

Or, you are going to get that big cost savings. You're probably not going to start with the places that are the most mission-critical: where you have the least tolerance for downtime, the greatest continuity requirements, or the most stringent performance SLAs. The thinking may be, “Well, we’ll tackle that later. We’re not going to take a risk on something like that right now.”

In the places where the risk is not as great -- and the reward either in terms of cost or value looks good -- the current economic conditions are just going to accelerate the adoption of cloud computing in enterprises for those areas. And they definitely do exist.
Read complete transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

For more information on HP Adaptive Infrastructure, go to: www.hp.com/go/ai/.

Clickability offers enhanced Web content management through SaaS media repository

Clickability, a Web content management (WCM) provider, has announced its Clickability Media Solution, designed to provide a centrally managed software-as-a-service (SaaS) content repository aimed at large media companies. The new offering will allow companies to manage all of their content for multiple Web channels from the single repository.

San Francisco-based Clickability currently partners with many of the world's largest media companies to develop revenue-producing online solutions, allowing them to manage large traffic spikes with no investment in hardware or software. The agility that comes from the WCM system allows companies to experiment and innovate without incurring additional cost.

To me, Clickability offers what really should be thought of as cloud publishing and advanced media services. Consider that the more media firms use common services providers like Clickability, the more common services they can gain -- including advertising or even lead generation.

Media companies can offload a lot of their publishing distribution and support services (what used to be called circulation), and focus on the content, the audience, and the media monetization model. Why should each publisher or title have its own Web infrastructure?

In fact, Clickability provides service oriented architecture (SOA) for media companies, and via a SaaS and cloud model, no less. I'm beginning to see that getting to SOA via cloud and SaaS may become more common as economic conditions deteriorate. The cost-benefit analysis simply becomes too compelling. We've seen this approach work with blog publishing, but I think the model runs much deeper and wider.

What's more, under the media cloud model, the infrastructure provider can keep offering new services, such as social networking, advanced semantic search, mobile access, location-based services -- all at a fraction of what each publisher would need to spend to acquire such services on their own.

I also look forward to the day when cloud models start to properly analyze the audience, gain metadata inference on their needs and wants, and provide the information relevance that joins need to solution. At that inception point lies a whole new business model -- better than advertising, less costly than traditional lead generation.

Those days are coming, and Clickability strikes me as a strong contender for redefining media based on common infrastructure, lower total costs, and more granular services. Let the media firms produce the content and know their audiences best, while the infrastructure provider handles the common services and explores how to move to transaction-based monetization.

For now, with Clickability Media Solution, companies can begin this cloud ecology journey by tagging and annotating content for efficient search and reuse. They can also link and share content assets across channels and publications. The single repository lets companies create highly targeted micro sites or regional portals that rely on metadata to automatically populate them with appropriate content and contextual links.
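To illustrate the metadata-driven approach, here is a small, hypothetical sketch in which content items are tagged once in a shared repository, and a micro site or regional portal is simply a query over those tags. The names are invented, and this is not Clickability's API.

```java
// Illustrative sketch of metadata-driven site population: content is tagged once
// in the shared repository, and a micro site is just a query over those tags, so
// matching articles appear without manual placement. Hypothetical names only.
import java.util.*;
import java.util.stream.Collectors;

public class MicroSiteBuilder {
    record ContentItem(String title, Set<String> tags) {}

    // a micro site is defined by the tags its pages require
    static List<ContentItem> populate(List<ContentItem> repository, Set<String> requiredTags) {
        return repository.stream()
                .filter(item -> item.tags().containsAll(requiredTags))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<ContentItem> repository = List.of(
                new ContentItem("City budget vote", Set.of("news", "region:sf")),
                new ContentItem("Ferry schedule change", Set.of("news", "region:sf", "transit")),
                new ContentItem("League playoff recap", Set.of("sports", "region:ny")));

        // regional portal for San Francisco news, populated purely by metadata
        populate(repository, Set.of("news", "region:sf"))
                .forEach(item -> System.out.println(item.title()));
    }
}
```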

Included in the solution are interactive features, such as social networking, blogging, video serving, ticketing, personalized calendars, site customization, and an on-demand ad server that ties to specific pages and sections in a site. It also integrates with a company's existing video platform.

Because it's a SaaS environment, customers can enhance their site and make improvements without spending time on writing new software or installing a dedicated infrastructure. This agility allows companies to take advantage of new opportunities in the market.

Clickability Media Solution is available immediately.

Tuesday, November 18, 2008

Changing business landscape makes identity and access management key to IT security

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Read complete transcript of the discussion.

In an age of significant layoffs and corporate restructuring, the burgeoning problem of identity and access management for IT operations and data centers has escalated into a critical security issue. Managing who gets access to which resources for how long -- and under what circumstances -- has become a huge and thorny problem.

Improper and overextended access to sensitive data and powerful applications can cause massive risk as many employees find themselves in flux.

To learn more about how enterprises can begin coordinated identity and access management strategies, BriefingsDirect's Dana Gardner spoke with Dan Rueckert, worldwide practice director for security and risk management in HP’s Consulting and Integration group; Archie Reed, distinguished technologist in HP’s security office in the Enterprise Storage and Server Group, and Mark Tice, vice president of identity management at Oracle.

Here are some excerpts:
When we look at identity and access management (IAM), we are really saying that the speed of business is increasing, and with it the rate of change that organizations must support in their business. You see it every day in the mergers and acquisitions that are going on right now. As a result of that, you see consolidation.

All these different factors are going on. We are also driving regulations, and compliance to those regulations, on an ongoing basis. When you start to comply with these regulations, the ability for people to access their data, or to have access to the tools, applications, and data that they need at the right time, is key.

The reality in the market is that many things impact that security posture, internally, every time a new system is installed, any product or service defined, or even when a new employee joins. Externally, we're impacted by new regulations, new partnerships, new business ventures, whatever form they may take. All those things can impact our ability, or our security posture.

Security is much like business. That is, it’s impacted by many, many factors, and the problem today is trying to manage that situation. When we get down to tools and requirements around such things as identity management, we are dealing with people who have access to systems. The criticality there is that there have been so many public breaches recently that security is again a high concern.

When we start thinking about security, one of the first things that people look at generally is some sort of risk analysis. As an example, HP has an analysis toolkit that we offer as a service to help folks decide what is critical to them. It takes all sorts of inputs, the regulations that are impacting your business, the internal drivers to ensure that your business not only is secured, but also moving in the right direction that you wanted to move.

Within this toolkit, called the Information Security Service Management (ISSM) reference model, is a set of tools where we can interview all of the participants, all of the stakeholders in that policy or process, and then look at the other inputs that are predefined, such as the regulations.

[The solution] is definitely people, process, and technology coming together. In some cases, it’s situational, as far as working with customers that have legacy systems or more modern systems. That starts to dictate how much of that process, how much of that consulting, and how much technology they need.

When we talk about the HP-Oracle relationship, it’s about having that strong foundation as far as IAM, but also the ability to open up to the other areas that it's tied into, in this case enterprise architecture, the middleware pieces that we want for databases, and other applications that they have.

You start to put that thread with IAM, combined with an infrastructure, and that opens this up as a whole, which is key. And enablement, depending on the size, complexity, localization, or globalization, tends to play into those attributes of people, process, and technology.

Even in the virtualization space, where everybody is trying to get more from the same hardware, you cannot ignore things such as access control. When you bring up who has access to that core system, when you bring up who has access to the operating system within the virtual environment, all of those things need to be considered and maintained with the right business and access controls in place.

The only way to do that is by having the right IAM processes and tools that allow an organization to define who gets access to these things, because important processing is happening on the one box. You are no longer just securing the box physically. You're securing the various applications that are stacked on top of all of that.
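A bare-bones way to picture that layered access control is a policy table mapping each layer on the box (hypervisor, guest operating system, application) to the roles entitled to reach it. The sketch below uses invented role and resource names and does not reflect HP's or Oracle's IAM products.

```java
// Minimal sketch of layered access control on a shared physical host: each
// stacked resource (hypervisor, guest OS, application) has its own entitlement
// list, so securing the box means securing every layer. Invented names only.
import java.util.Map;
import java.util.Set;

public class VirtualHostAccessPolicy {
    // resource layer -> roles entitled to access it
    private static final Map<String, Set<String>> ENTITLEMENTS = Map.of(
            "hypervisor",       Set.of("platform-admin"),
            "guest-os:finance", Set.of("platform-admin", "finance-sysadmin"),
            "app:ledger",       Set.of("finance-sysadmin", "ledger-operator"));

    static boolean isAllowed(String role, String resource) {
        return ENTITLEMENTS.getOrDefault(resource, Set.of()).contains(role);
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("ledger-operator", "app:ledger")); // true
        System.out.println(isAllowed("ledger-operator", "hypervisor")); // false: no entitlement
    }
}
```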

One of the things that we really work hard to do is make sure that first off, before breaking ground on one of these projects, customers put in place a complete framework, or architecture for their security in identity management, so that they really have a complete design that addresses all of their needs. We then encourage them to take things on one piece at a time. We design for the big bang, but actually recommend implementing on a piece by piece basis.

Having these things predefined makes the approach not only more prescriptive for companies, which helps them a lot, but also more accessible in terms of how quickly they can decide what's important. That allows them to move on and decide in which order they’re going to implement their security strategy.

Those sorts of things allow a company to get up to speed quickly and analyze where they’re at. You may have a security review every year, but a lot of companies need to do it more often in more isolated ways. Having the right tools come out of these sorts of things allows them to do ongoing assessments of where they’re at, as well.
Read complete transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

For more information on HP and Oracle Identity and Access Management.

For more information on HP Secure Advantage.

For more information on HP Adaptive Infrastructure.