Monday, May 2, 2011

Learning the right lessons from the Amazon cloud outage


This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.

By Jason Bloomberg

Have you noticed that ZapThink’s crystal ball has been working overtime? We sounded the warnings about cyberwarfare mere days before the Stuxnet worm hit. Then we predicted the fall of enterprise architecture frameworks right before the Zachman organization imploded. Next, we heralded a secondary market for IP addresses as the IPv4 space ran out of them. Sure enough, that secondary market is now here. And last week, we warned against putting all your eggs in any one cloud provider’s basket. Sure enough, Amazon’s public cloud went belly up immediately afterward. All I can say is that if we make a prediction that will impact your business, you’d better take heed!

In all seriousness, there’s no supernatural clairvoyance at work here. What you’re seeing is the power of the ZapThink 2020 vision for Enterprise IT, which delineates the interrelationships among the numerous trends in the IT marketplace. Just as the best psychics are in reality masters at picking up subtle clues in human behavior, we’re tuning into the complex subtleties that the multiple forces of change in our midst present to us.

One of the primary insights of the ZapThink 2020 vision is that individual trends, let alone single events, should never be taken in isolation. This insight is particularly useful when a crisis like the Amazon crash presents itself.

We're now experiencing a backlash from this crash. People are reconsidering the wisdom of moving to the cloud, and in particular to public clouds. Perhaps the large infrastructure vendors, who warned their customers about the security and reliability risks of public clouds in order to sell more gear for private clouds, were right after all?

Not so fast. If we place the Amazon crash into its proper context, we are in a better position to learn the right lessons from this crisis, rather than reacting out of fear to an event taken out of that context. Here, then, are some essential lessons we should take away from the crash:
  • There is no such thing as 100 percent reliability. In fact, there's nothing 100 percent about any of IT—no code is 100 percent bug free, no system is 100 percent crashproof, and no security is 100 percent impenetrable. Just because Amazon came up snake eyes on this throw of the dice doesn't mean that public clouds are any less reliable than they were before the crisis. Whether investing in the stock market or building a high-availability IT infrastructure, the best way to lower risk is to diversify. You got eggs? The more baskets the better (see the back-of-the-envelope sketch after this list).
  • This particular crisis is unlikely to happen ever again. We can safely assume that Amazon has some wicked smart cloud experts, and that they had already built a cloud architecture that could withstand most challenges. Suffice it to say, therefore, that the latest crisis had an unusual and complex set of causes. It also goes without saying that those experts are working feverishly to root out those causes, so that this particular set of circumstances won’t happen again.



  • The unknown unknowns are by definition inherently unpredictable. Even though the particular sequence of events that led to the current crisis is unlikely to happen again, other entirely unpredictable issues are relatively likely to arise in the future. But such issues might very well apply to private, hybrid, or community clouds just as much as they might impact the public cloud again. In other words, bailing on public clouds to take refuge in the supposedly safer private cloud arena is an exercise in futility.

  • The most important lesson for Amazon to learn is more about visibility than reliability. The weakest part of Amazon’s cloud offerings is the lack of visibility they provide their customers. This “never mind the man behind the curtain” attitude is part of how Amazon supports the cloud abstraction I discussed in the previous ZapFlash. But now it’s working against them and their customers. For Amazon to build on its success, it must open the kimono a bit and provide its customers a level of management visibility into its internal infrastructure that it’s been uncomfortable delivering to this point.
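
To put a number on the eggs-and-baskets advice above, here is a back-of-the-envelope sketch in Python. It assumes provider outages are statistically independent (real failures can be correlated) and uses an illustrative 99.5 percent availability figure, not any provider's actual SLA:

```python
# Composite availability when the same workload is replicated across
# independent cloud providers ("baskets"). Figures are illustrative.

def composite_availability(per_provider: float, providers: int) -> float:
    """Probability that at least one replica is up, assuming
    provider outages are statistically independent."""
    return 1.0 - (1.0 - per_provider) ** providers

for n in (1, 2, 3):
    print(f"{n} basket(s): {composite_availability(0.995, n):.6%} available")
# 1 basket(s): 99.500000% available
# 2 basket(s): 99.997500% available
# 3 basket(s): 99.999988% available
```

Even under these toy assumptions, a second independent basket cuts expected downtime from roughly two days a year to about 13 minutes.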

The ZapThink take

Abstractions hide complexity from consumers of technology, but if you do too good a job hiding the underlying complexity, then the abstraction can backfire. But that doesn’t mean that abstractions are bad; rather, you need different abstractions for different audiences.

The latest crisis impacted a wide swath of small cloud-based vendors, from Foursquare to DigitalChalk to EDU 2.0. These firms’ customers simply wanted their tools to work, and were disappointed and inconvenienced when they stopped working. But the end-user customer may not have even been aware that Amazon’s cloud was behind their tool of choice. Clearly, those customers wouldn’t find better visibility into the cloud particularly useful.

No, it’s the technology departments at the small vendors that require better visibility. They are the people who require management tools that enable them to gain a greater level of control over the cloud environments they leverage in their own products. Once Amazon supports such management tools, then Amazon’s customers will be better able to provide the seamless abstraction to the cloud end user, who simply wants stuff to work properly. And there’s nothing supernatural about that!


This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.





Thursday, April 28, 2011

Master IT support providers Chris and Greg Tinker's take on how integrated technical support is essential in a complex, cloudy world

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. View the blog.

While recent outages at Amazon Web Services and the Sony PlayStation Network jar the common perception of IT business as usual, IT failures and performance snafus are nothing new -- they are just far more prominent.

Someone, somewhere got the first call on those outages -- the front-line IT technical support staff. And the expanding role of cloud, and of the online services ecosystems more of us depend on, only underscores why such IT technical support is more important than ever.

The need for good, fast support is forcing changes across the technical support industry, with an emphasis on integration and empowerment to improve how help desks respond and perform when a crisis spirals.

To learn more about how support is adapting to the high-impact, high-exposure cloud era, BriefingsDirect recently interviewed two lauded IT Master Technologists from HP. Part of the new support philosophy comes from providing a more centralized, efficient, and powerful means of getting all the systems involved working -- and bringing all the necessary knowledge together quickly -- to get applications back in action and keep them there. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

These two support stars, Chris Tinker and Greg Tinker, both HP Master Technologists, who happen to be identical twins, were chosen via a recent sweepstakes hosted by HP to identify favorite customer support personnel. Learn here why they gained such recognition, and uncover their recommendations for how IT support should be done better in a rapidly changing era of increasingly hybrid and cloud-modeled computing. The two were interviewed by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Gardner: You deal with people when they are, in some cases, their darkest hour. They're under pressure. There's something that's gone wrong. They're calling you. So, you're not just there in a technical sense, which of course is important, but there must be a human dynamic to this as well. How does that work?

Chris Tinker: We become their confidant. We foster a relationship there between the two parties. For us, it's very exhilarating. It's the ultimate test. You want to build both the technical and business, but also the interpersonal relationship, because you have to weigh in on so many levels, not just technical. That’s a critical component, but not the only component.

Greg Tinker: And today the customer expects a master technologist, like my brother and me, to know more than the one thing they're asking about, because that question is going to turn quickly. For example, a customer has an Oracle performance issue and thinks it may be disk-related, but when you dig into it, you find that it's actually an ODBC call -- a networking issue. So, you have to be quite proficient at a multitude of technologies and have a lot of depth and breadth.
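
The triage Greg describes can begin with a few lines of code. Below is a rough, hypothetical sketch -- the hostname and port are placeholders -- that times TCP connection setup to the database host. If these numbers are high while local disk latency looks normal, the "disk problem" is more likely the network path:

```python
# Crude check: time TCP connection setup to the database host as a
# proxy for the network latency a database client (e.g., over ODBC)
# pays on each round trip. Host and port below are hypothetical.
import socket
import time

DB_HOST = "oracle-db.example.com"   # hypothetical database host
DB_PORT = 1521                      # default Oracle listener port

def tcp_connect_ms(host: str, port: int, samples: int = 5) -> float:
    """Average TCP connection-setup time in milliseconds."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        total += time.perf_counter() - start
    return total / samples * 1000.0

if __name__ == "__main__":
    print(f"avg TCP connect: {tcp_connect_ms(DB_HOST, DB_PORT):.1f} ms")
    # Compare against local disk latency (iostat) before blaming storage.
```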

Gardner: So what does it take to be a good IT support person nowadays?

Chris Tinker: It’s simply not enough to be a technical guru -- not in today's industry. You have to have a good understanding of technology, yes, but you also have to understand the tools and realize that technology is simply a tool for business outcomes. If you're listening to the business, understanding what their concerns and their challenges are, then you can apply that understanding to their technical situation to essentially work for a solution.

Greg Tinker: Chris and I study, almost on a daily basis, to stay ahead of the technology curve. Chris and I both do a lot in SCSI I/O control logic, with respect to the kernel structure of HP-UX as well as Linux, which is our playground, if you will.

And it takes what I would call a firm foundation to be able to provide that strong wealth of knowledge and be the customer's confidant. You can't be an expert at one point anymore. You can't be a network expert only. You have to understand the entire gamut of the business, so that you can understand the customer's technical problem.

Gardner: Let me congratulate you on your award. This was, I think, a worldwide pool, or at least a very large group of people you were chosen from. Did this come as a surprise?

Greg Tinker: It was an honor, I can say that, and we are very grateful. Our customer installed base, as well as our peers and the management team, put our names forward. It was a great honor. ... For each vote that was cast, HP donated $10 to the humanitarian organization Care, capped at $100,000. They met that goal in just a few days. It was quite astonishing.

Chris Tinker: And it was a surprise. ... Very rewarding.

Gardner: Okay, you've been at this for 12 and 13 years. What's changed over that period of time?

Chris Tinker: Catchphrases change. Today it's cloud computing, but cloud computing has been around for a long time. We just didn't refer to it as cloud computing; we called it shared infrastructure.

Virtualization today is becoming a big-ticket item, where in years past big iron was the catchphrase. Big iron meant very large computers. We still have big iron in storage, that's true -- that big footprint, that big powerhouse that consumes a lot of power -- but that's a necessity of the storage platform.

The big thing for today is converged infrastructure. These are terms you wouldn't have heard years ago, where we are trying to converge multiple types of protocols and physical media onto one medium: networking and Fibre Channel, which of course is your storage network, plus your TCP/IP network, all going across the same physical piece of media. These are things that are changing, and of course with that comes an extreme amount of complexity, especially in the actual engine that drives this.

Greg Tinker: As Chris stated, the key phrase of yesteryear was big iron: I want a big behemoth machine that can outdo a mainframe. If you look back to 1999 and 2000, what you were looking for in the open-systems world was something to compete with Big Blue.

Today it's virtualization and blades. Everybody used to say -- probably about mid-2005 -- "I want a pizza box. I want a new blade." We no longer call those blades; those are called pizza boxes now. Today, the concept is all about blades: if you can't make the thing 3 inches tall and 1 inch wide, there is something wrong.

Gardner: You've been describing how things have changed technically. How have things changed in terms of the customer requirements and/or the customer culture?

Chris Tinker: The expectation is more for less. They want more computing power. They want more IT for less cost, which I think has been true since day one, but today, of course, that "more for less" just means more computing power. The footprint of the servers has changed.

And two, the support model has changed. Keep in mind, we're in support, and we're seeing a trend where customers who had many physical servers, each with its own support contract, are consolidating down to one physical server running virtual instances.

The support model of yesteryear doesn’t always fit the support model that they should have today.




Greg Tinker: What Chris is talking about there is consolidation efforts. Customers used to have 500 servers. Today -- and I want to exaggerate my point here -- those 500 servers run as virtualized guests on one or two behemoth physical machines.

Though that model works well for consolidating infrastructure costs, so your capital cost is less, the problem now becomes the support model. Customers tend to reduce support as well, because there is less infrastructure. But keep in mind that customers often forget they've put all their eggs into one basket, and that basket needs a lot of protection.

So now you have your entire enterprise running on one or two pieces of physical hardware that are grossly complex: not only the virtual servers, but also the virtual Ethernet modules and Fibre Channel constructs are now basically one fabric running every protocol type -- InfiniBand, Gigabit Ethernet, Fibre Channel, and so on. That complexity requires a great deal of support.

When a customer calls up and says, "We've made a change in our environment and my server has crashed, the physical server went down, or it has lost access to its storage or network," you're not just affecting that one physical server; you're affecting hundreds. So, the support model today has to be quick.

Gardner: It sounds to me that there is a higher risk profile. Is that a fair characterization?

Hardware redundancy

Greg Tinker: That would be a fair characterization. There is a higher risk on the hardware end in the sense that you still have hardware redundancy, of course, but you're fully dependent upon cluster technology and complexity.

Chris Tinker: Good solution design and business risk assessment are still critical components of your solution.

Gardner: I'm going to guess that over the past several years, in the tradeoff between cost and risk, people have probably favored the cost side a bit. So, that means the people in your position are the backstop?




Greg Tinker: That’s what the trend is becoming. The trend is, "We're going to reduce our cost in the CAPEX and reduce our cost in the infrastructure. We're going to consolidate and virtualize that concept, and we are going to look at our support strategy in a different light." That’s what most customers think.

Gardner: What is that new light?

Greg Tinker: The new light today is that customers are focused more on the higher-end support models, meaning four-hour call-to-repair, where it used to be 24-hour or 48-hour support models and we were not in a huge rush. If we had a disk drive failure, we had full redundancy, so we had plenty of time to fix those components.

Today, with all this consolidation effort, it becomes a real critical need when you have a failing component, whether it be hardware or software, to get that component addressed urgently. You don’t really have the time.

Chris Tinker: That's a great point. Looking at that standard support model, you had so many physical servers, and your business was essentially interlaced with those systems. You could handle an outage, whether a software or a hardware condition. The impact wasn't as heavy as in today's virtualized environments, where an outage has much greater business impact.

To Greg's point, that older support model used to work with some of these virtualized environments -- not all of them, but some. With four-hour call-to-repair, you can imagine what's required in four hours. The technologists who answer the phone first have to address the business concerns, figure out what the business impact is, and understand what the problem is.

Once we ascertain what’s causing that problem and the problem has been defined, we have to figure out what’s going wrong with the technology in order to bring it back online. All that has to be done within four hours on some of our most critical contracts.

Gardner: You're sorting through implementations with loads of vendors involved. When it comes to this sort of mission-critical situation, they're probably thankful that there's someone there trying to corral this. So, I imagine the cooperation is pretty high in these circumstances?

Stakes are high

Chris Tinker: Yeah, the stakes are high at this level. You're talking about not only the corporation -- the customer -- but also the vendors, whether HP or third party, and we're partnering with all of those vendors. Everybody has a stake in the game. Essentially, their reputation is on the line.

So we partner, regardless. Just as we don't want to be thrown under the bus, we don't throw anybody else under the bus. We partner. We come together as one throat to choke or one hand to shake, however you want to look at it. But, essentially, we all have the same thing in common: the customer's well-being.

Greg Tinker: I'll second Chris' sentiment, in the sense that when we're engaged at our level, it's no longer a finger-pointing game. It's a partnership, regardless of who the customer is. If it's HP gear, so be it. If it's somebody else's gear and we see where the problem is, we don't point the finger. We ask the customer to get their vendor on the bridge with us, and we work as a team to get the business restored, because that's priority one.

Chris Tinker: That's HP technical support. That's what we thrive on. That's one of our charters. Our management has dictated that they want a team effort, a global effort.

Gardner: How did you both get involved with this? Did one get into it first and the other follow? What's the story behind how you ended up here?

Lengthy road

Greg Tinker: It was quite a lengthy road. Chris and I agreed many years ago in school that one of us would go in one direction and the other in another, and we'd see who enjoyed the industry better. Chris joined HP and fell in love with it. He and I have a very strong Linux background. Then I jumped ship and joined my brother Chris, and we have been with HP ever since, and have loved it dearly.

Chris Tinker: We look at IT support as a ladder and we just climbed that ladder. We started in mission-critical support and found it to be exhilarating. With mission-critical support you're talking about enterprise-class corporations. We're not talking about consumer products. We're talking about an entire corporation's business running on an IT solution and how we're engaged in that process.

Unfortunately, in our line of work we do see customers where the technology did not go as planned, predicted, or expected, and it's up to us to figure out what the expectations were and ascertain whether the technology can deliver them. That's how we moved through support.

We started off as mission-critical support specialists. We became architects, designing solutions for corporations and found out that we were very good at escalations and that's where we are today.

Gardner: There have also been some shifts over the past dozen years or so in the degree to which remote support is possible and your ability to get inside and get that information. Maybe we could take a moment to learn more about what tools have been brought to bear to help you with this?

HP virtual room

Chris Tinker: The HP Virtual Room (HPVR) -- rooms.hp.com -- is a good example. As you just mentioned, yesteryear it was, "Hey, send me the logs. Send me the examples. Send me some data, and I'll parse through it and figure it out." You had to wait for the data to come in, then start parsing those logs and that data and building your hypothesis of what the problem might be.

Now, imagine if I were able to take that in real time. So, Greg, talk about real time.

Greg Tinker: Real time is key in today's technology world. Nobody wants to wait. Take your phone, for example: doesn't it annoy you when you press the email button and the phone takes more than three seconds to load it? Everybody gets annoyed when it's slow. Well, the same is true in technology services support.

When customers call in, they expect immediate response. By the time it gets to our level, where Chris and I sit and our team resides inside the support model, the customer is in dire straits. We use the Virtual Room technology. It's similar to WebEx.

There are a lot of similarities out there. Different vendors have different tools. We use the HP Virtual Room toolset and we can jump onto any machine in the world, anywhere in the world, at a moment’s notice. We can do crash analysis on a Linux kernel crash in real time on a customer’s machine. The same with HP-UX, Solaris, AIX, name your favorite.

We can look at these stack traces and actually find the most likely component that compromises the infrastructure. We can find it, isolate it, and remedy it.
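
As a minimal, hypothetical sketch of one early step in that kind of remote triage -- not HP's actual tooling -- the script below pulls kernel "Call Trace:" blocks out of a syslog file so they can be shared in a virtual-room session. The log path and the amount of context to keep vary by distribution:

```python
# Extract kernel "Call Trace:" blocks from a syslog file for sharing.
# Path and context size are assumptions; adjust for your distribution.
from pathlib import Path

LOG_FILE = Path("/var/log/messages")   # common syslog location

def extract_call_traces(text: str, context: int = 15) -> list:
    """Return each 'Call Trace:' marker plus the lines that follow it."""
    lines = text.splitlines()
    return ["\n".join(lines[i:i + context])
            for i, line in enumerate(lines) if "Call Trace:" in line]

if __name__ == "__main__":
    for trace in extract_call_traces(LOG_FILE.read_text(errors="replace")):
        print(trace)
        print("-" * 60)
```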




Chris Tinker: Not only is it just us troubleshooting; it's bringing our peers to bear. It's teamwork, a two-heads-are-better-than-one mentality, and Greg lived that first. At the end of the day, you've got 2, 4, or 20 people on the phone. You can imagine all of those people sharing the same desktop at the same time to look at a problem. You get all these different levels of expertise.

You're able to take all these talents and focus them on one scenario. With four-hour call-to-repair, how is that even possible? It's possible because we bring in and partner with these people -- not only HP employees and HP technical support but, going back to those vendor relationships, the vendors themselves. We bring those vendors into the same Virtual Room, show them where we're seeing the problem, and ask what we need to do to solve it.

Gardner: While we are on the subject of tools, what's coming next? If I were to design these types of tools, you would be the guys I would go to, to get my list of requirements. What are you asking for?

Greg Tinker: The biggest thing we see today is storage. The growth rate of storage is enormous. And the biggest problems customers run into are performance and capacity.

Capacity is the easy one, right? I am 100 percent full in my file system. I just need more. That's the easy one to fix.

The hard one to fix is, "My application is not running the way I want it to. Fix it." Those are the difficult ones. We have to have a lot of tools to help us understand what the load conditions are, because it's no longer the yesteryear scenario of a Superdome HP rack -- one big behemoth machine with four terabytes of memory and 400 CPUs -- loading up one storage array. That's no longer the case.

We have grid computing structures of 600+ nodes running a multitude of different things -- SAP, Oracle, Informix, Exchange, etc. All of these different load-bearing concepts are coming into one monolithic storage array. It can become quite daunting to understand what's causing that load condition, and we have a lot of tools today that are helping us ascertain the root of those problems faster.

Chris Tinker: We work at the bleeding edge of technology. Essentially, it's software that hasn't been released -- tools that are not yet production-ready. We use these tools as well, and some tools we can't even speak about.

Business realities

But these are tools that will be in the enterprise eventually. They will be out in the world eventually. You asked earlier what we see coming down the road. Imagination is essentially the only limit in technology. In today's world there are other factors, of course -- business realities temper the development of technology -- but it's going to be very exciting to see what technology is being developed and what's coming next.

Gardner: I wonder if you might have just some last advice for those listening to the podcast as to how they on the consumption side might help folks like you on the services and support delivery side do your job better? What advice do you have for them in order to have a better outcome?

Chris Tinker: Yeah, it's being able to articulate the actual problem at hand and the challenge you have with your technology. Keep in mind that technology -- IT -- is nothing more than a tool that allows us to achieve business outcomes, a tool the business utilizes for its requirements.

Then, to have metrics around their environment. They have to have a baseline. They have to have an understanding of what the technology has been doing.

Trending is key

Greg Tinker: Trending is key in a lot of these new virtualized, consolidated environments. You need to have a baseline, as Chris stated. We need to have the performance characteristics. And then there's logging. ESX is about as common as sliced bread in a grocery store: ESX environments are very common and highly regarded. I enjoy them. They are very nice.

Customers tend to be moving toward ESXi, which is fine, but ESXi barely logs -- it does log, but you only get about a two-hour history. The point is that customers take that logging for granted. You have to have your logging enabled, and you must keep at least a six-month trend.

You don't keep all your logs on your servers forever, but a six-month trend is very helpful when a mysterious problem shows up. Then we can compare yesterday to today and see what differences have appeared in the environment.
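
To make that retention policy concrete, here is a hypothetical sketch that deletes archived logs older than about six months. The directory name is invented, and in practice logrotate or a central syslog server would normally do this job:

```python
# Prune archived log files older than roughly six months.
# LOG_DIR is a made-up path; adapt to wherever logs are archived.
import time
from pathlib import Path

LOG_DIR = Path("/var/log/archive")   # hypothetical archive directory
MAX_AGE_DAYS = 183                   # roughly six months

def prune_old_logs(directory: Path, max_age_days: int) -> None:
    cutoff = time.time() - max_age_days * 86400   # 86400 seconds per day
    for path in directory.glob("*.log*"):
        if path.stat().st_mtime < cutoff:
            print(f"removing {path}")
            path.unlink()

if __name__ == "__main__":
    prune_old_logs(LOG_DIR, MAX_AGE_DAYS)
```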

Gardner: It comes down to data, having the data at your disposal.

Chris Tinker: Not just data, but having a baseline. We get a lot of calls where customers have no idea what the environment was doing before. They say, "We're having a problem now. Our users are complaining." We ask, "How did it use to run? How long did this job use to take? Did it use to take 2 hours, and now it takes 20 hours?" A lot of times, they simply do not know.

I wish customers would recognize that logging is critical. You don't have to keep it forever, but keep it for a strategic period of time. Six months is a good number.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. View the blog.


Friday, April 22, 2011

Cloud brokering: Building a cloud of clouds


This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.

By Jason Bloomberg

The essence of cloud computing—what makes a cloud, well, cloudy—is the fact that it's an abstraction. The cloud hides underlying complexity while presenting a simplified interface to the consumer. Of course, abstractions are nothing new in the world of IT: compiled languages, graphical user interfaces, and SOA-based Business Services are all examples of abstractions. After all, everything we're doing in IT boils down to zeroes and ones in the end. Layers of abstraction are how we deal with this never-ending stream of bits.

The business service abstraction in the SOA context provides flexible, loosely coupled access to application functionality and data. The cloud abstraction, on the other hand, delivers a shared pool of configurable computing resources of various types (processors, storage, etc.) that can be dynamically and automatically provisioned and released. The two approaches solve different problems, but nevertheless both simplify the underlying technical complexity while providing greater agility to the consumer of the respective abstractions.




Another critical benefit of both abstractions is increased fault tolerance. If something goes wrong beneath the abstraction, then it should be possible (at least in theory) to fail over to a backup or route around the problem without adversely impacting the consumer. In the case of business services, the intermediary (typically an ESB or XML appliance) handles this routing, while the underlying cloud provider infrastructure handles failover within the cloud.

That is, unless the problem is with the cloud provider itself. It doesn’t matter how resilient your provider’s infrastructure is if they go out of business, or a denial of service attack takes them off the Internet. Think of cloud providers as baskets. Do you really want to put all of your eggs in just one?

Enter cloud brokering

Cloud brokering is the capability that addresses this eggs-in-one-basket problem. A cloud broker provides cloud service intermediation, aggregation, and arbitrage across a set of cloud providers. The need for such cloud brokers, of course, is not lost on the community of cloud startups. Today, if there's even a hint of a niche you'll find several entrepreneurs jumping on it, and the nascent cloud broker market is no different. However, there is a twist to the current state of the cloud broker market: as far as I can tell, all the players in this space today include cloud brokering as an extension of their existing business model, rather than a pure play model in its own right.
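
To make the arbitrage role concrete, here is a toy sketch. The provider names, prices, and health flags are invented stand-ins rather than real cloud APIs; an actual broker would query live pricing and health endpoints:

```python
# Toy arbitrage: pick the cheapest provider whose health check passes.
# All names, prices, and health flags below are invented examples.
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    price_per_hour: float   # hypothetical compute price
    healthy: bool           # result of a (stubbed) health check

PROVIDERS = [
    Provider("cloud-a", 0.12, True),
    Provider("cloud-b", 0.10, False),   # simulate an outage
    Provider("cloud-c", 0.11, True),
]

def choose_provider(providers):
    """Return the cheapest provider that is currently up."""
    live = [p for p in providers if p.healthy]
    if not live:
        raise RuntimeError("no provider available -- every basket is down")
    return min(live, key=lambda p: p.price_per_hour)

print(choose_provider(PROVIDERS).name)   # -> cloud-c
```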

In fact, most of the vendors offering cloud brokering are in the cloud management space. RightScale and Kaavo, for example, provide template-based cloud deployment. Build the template, and the tool will deploy your fully configured cloud instance in any of a number of cloud environments by following the template. CloudSwitch takes the template idea down a few notches to layer two of the OSI stack, which means your cloud instances will be identical down to the IP addresses and even the MAC addresses, independent of the cloud environment. A fourth player worth mentioning is enStratus, which touts cloud independence as part of cloud governance.




There is another angle on the cloud brokering marketplace, however: as an extension of the cloud storage/sync market. This niche is already quite crowded, with players like Dropbox, Jungle Disk, Box.net, Wuala, and several more. A closely related market niche is the cloud backup market, featuring vendors like Mozy, Backblaze, Carbonite, CrashPlan, and Livedrive, to name a few. It’s not clear, however, if any of these vendors support cloud brokering. Instead, they all rely upon a single underlying cloud environment for each of their offerings. The inherent fault tolerance of each vendor’s chosen cloud infrastructure may be sufficient for many users, especially in the consumer and small business segments, but enterprises may require a higher degree of resilience.

One vendor, however, has apparently carved out a niche for itself: Oxygen cloud. Oxygen cloud focuses on cloud-based sync and shared storage, but it has also taken the extra step of building cloud brokering into its offering. As a result, customers who want the benefits of sync and storage in the cloud without having to rely on a particular cloud provider have few if any options other than Oxygen cloud.

The ZapThink take

The ability to select among several public clouds is only one benefit of cloud brokering. It also supports the ability for an organization to move application instances or data between private and public clouds. In other words, cloud brokering is at the heart of dynamic hybrid clouds.

When we talk about the various cloud deployment models—public, private, community, and hybrid—it's the hybrid model that elicits the most head scratching. People wonder under what circumstances it would ever be worth the trouble to mix private and public clouds together. And they have a point: hybrid clouds sound like a huge hassle. Without cloud brokering, managing a hybrid cloud may be more trouble than it's worth.

Cloud brokering, however, abstracts out the deployment model altogether, creating what we might even call a “cloud of clouds.” From the perspective of the consumer, all clouds might as well be hybrid clouds, because the decision whether to leverage on-premise or off-premise resources is simply part of the dynamic provisioning benefit of the cloud of clouds. The notion of a cloud of clouds that brokering enables, however, is a temporary phenomenon. Today we require visibility into the selection of individual cloud providers. Tomorrow, the brokering-based cloud of clouds will simply be the cloud.

ZapThink has no business relationship with any of the vendors mentioned in this ZapFlash. We’re simply calling ‘em like we see ‘em.


This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.



Tuesday, April 19, 2011

Tag-team of HP workshops provides essential path to IT maturity assessment and a data center transformation journey

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: HP.

The pace of change, the degrees of complexity, and the explosion in new devices and data sources are placing new requirements and new strain on older data centers. Research shows that a majority of enterprises are either planning for or are in the midst of data center improvements and expansions.

Deciding how to best improve your data center is not an easy equation. Those building new data centers now need to contend with architectural shifts to cloud and hybrid infrastructure models, as well as the need to cut total cost and reduce energy consumption for the long-term.

An added requirement for new data centers is to satisfy both short- and long-term goals, effectively reconciling the need for agility now with facility and staffing decisions that may well impact the company for 20 years or more.

All these fast-moving trends are accelerating the need for successful data center transformation (DCT). As a means of beginning such a DCT journey, and to explore proven ways of doing DCT effectively, BriefingsDirect now examines two ongoing HP workshops that accurately assess a company's maturity so it knows how best to begin and conduct that journey.

Join BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions, as he interviews three HP experts on the Data Center Transformation Experience Workshop and the Converged Infrastructure Maturity Model Workshop: Helen Tang, Solutions Lead for Data Center Transformation and Converged Infrastructure Solution for HP Enterprise Business; Mark Edelmann, Senior Program Manager at HP’s Enterprise Storage, Servers, and Network Business Unit, and Mark Grindle, Business Consultant for Data Center Infrastructure Services and Technology Services in HP Enterprise Business.

Here are some excerpts:
Tang: What the world is demanding is essentially instant gratification. You can call it sort of an instant-on world, a world where everything is mobile, everybody is connected, interactive, and things just move very immediately and fluidly. All your customers and constituents want their need satisfied today, in an instant, as opposed to days or weeks. So, it takes a special kind of enterprise to do just that and compete in this world.

You need to be able to serve all of these customers, employees, partners -- and citizens, if you happen to be a government organization -- with whatever they want or need instantly, at any point, any time, through any channel. This is what HP is calling the Instant-On Enterprise, and we think it's the new imperative.

There are a lot of difficulties for technology but, if you look at the big picture, we live in extremely exciting times. We have rapidly changing and evolving business models, new technology advances like cloud, and a rapidly changing workforce.

Architecture shifts

A Gartner stat: In the next four years, 43 percent of CIOs will have the majority of their IT infrastructure, organizations, and apps running in the cloud or on some sort of software-as-a-service (SaaS) technology. Most organizations aren't equipped to deal with that.

There's an explosion of devices being used: smartphones, laptops, TouchPads, PDAs. According to the Gartner Group, by 2014 -- less than three years from now -- 90 percent of organizations will need to support their corporate applications on personal devices. Is IT ready for that? Not by a long shot today.

Last but not least, look at your workforce. In less than 10 years, about half of the workforce will be millennials, defined as people born between 1981 and 2000 -- the first generation to come of age in the new millennium. This is a Forrester statistic.

This younger generation grew up with the Internet. They work and communicate very differently from the workforce of today and they will be a main constituency for IT in less than 10 years. That’s going to force all of us to adjust to different types of support expectations, different user experiences, and governance.

Maturity is a psychological term used to indicate how a person responds to the circumstances or environment in an appropriate and adaptive manner.



Your organization is demanding ever more from IT -- more innovation, faster time to market, more services -- but at the same time, you're constrained by older architectures and the inflexible, siloed infrastructure you may have inherited over the years. How do you deliver this new level of agility and meet those needs?

You have to take a transformational approach and look at things like converged infrastructure as a foundation for moving your current data center to a future state able to support all of this growth: virtualized resource pools, integrated and automated processes across the data center, and an energy-efficient, future-proofed physical design that can flex to meet these needs.

Edelmann: The IT Maturity Model approach consists of an overall assessment, and it’s a very objective assessment. It’s based on roughly 60 questions that we go through to specifically address the various dimensions, or as we call them domains, of the maturity of an IT infrastructure.

We've found it’s much more valuable to sit down face to face with the customer and go through this, and it actually requires an investment of time. There’s a lot of background information that has to be gathered and so forth, and it seems best if we're face to face as we go through this and have the discussion that’s necessary to really tease out all the details.

We apply these questions in a consultative, interactive way with our customers, because some of the discussions can get very, very detailed. For many of the customers that have participated in these workshops, being asked these questions has been a new experience. We're going to ask our customers things that they probably never thought about before, or have thought of only briefly, but it's important to get to the bottom of some of these issues.

From that, as we go through this, through some very detailed analysis that we have done over the years, we're able to position the customer’s infrastructure in one of five stages:
  • The first stage, which is where most people start, is Stage 1, which we call Compartmentalized and Legacy -- essentially the least-mature stage.
  • From there we move to Stage 2, which we call Standardized.
  • Stage 3 is Optimized.
  • Stage 4 gets us into Automated and a Service-Oriented Architecture (SOA).
  • Stage 5 is more or less the IT utopia necessary to become the Instant-On Enterprise that Helen just talked about. We call that Adaptively Sourced Infrastructure.
We evaluate each domain under several conditions against those five stages and we essentially wind up with a baseline of where the customer stands.
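
Purely as an illustration of how such a baseline might reduce to a stage, here is a hypothetical scoring sketch. The domain names and the simple averaging rule are assumptions for illustration; the actual model assesses roughly 60 questions and certainly weighs them differently:

```python
# Illustrative only: reduce per-domain scores (1-5) to one of the five
# stages listed above. Domain names and averaging rule are assumed.
STAGES = [
    "Compartmentalized and Legacy",   # Stage 1
    "Standardized",                   # Stage 2
    "Optimized",                      # Stage 3
    "Automated / SOA",                # Stage 4
    "Adaptively Sourced",             # Stage 5
]

def stage_for(domain_scores):
    """Map per-domain maturity scores to an overall stage."""
    avg = sum(domain_scores.values()) / len(domain_scores)
    return STAGES[max(1, min(5, round(avg))) - 1]

print(stage_for({"infrastructure": 2, "process": 3, "governance": 2}))
# -> Standardized (average 2.33 lands in Stage 2)
```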

As a result of examining the infrastructure’s maturity along these lines, we're able to establish a baseline of the maturity of the infrastructure today. And, in the course of interviewing and discussing this with our customers, we also identify where they would like to be in terms of their maturity in the future. From that, we can put together a plan of how to get from here to there.

Even further behind

Most of our customers find out that they are a lot further behind than they thought they were. It's not necessarily due to any fault on their part, but possibly a result of aging infrastructure, given the economic situation we have been in, and of disparate, siloed infrastructure built out as application-focused stacks, which was historically the way we approached IT.

Also, the impact of mergers and acquisitions has forced some customers to put together different technologies and platforms from different vendors. Rationalizing all of that can leave them in a disparate state. So, they usually find that they are a lot further behind than they thought.

We've been doing this for a while and we've done a lot of examinations across the world and across various industries. We have a database of roughly 1,400 customers that we then compare the customer’s maturity to. So, the customer can determine where they stand with regards to the overall norms of IT infrastructures.




We can also illustrate to the customer what the best-in-class behavior is, because right now, there aren’t a whole lot of infrastructures that are up at Stage 5. It's a difficult and a long journey to get to that level, but there are ways to get there, and that’s what we're here for.

Grindle: This process can also be structured if you do the Data Center Transformation Experience Workshop first and then follow that up with the Maturity Model.

The DCT workshop was originally designed and set up based on HP IT's internal transformation. It's not theoretical, and it's extremely interactive. It's based on exactly what we went through to accomplish all the great things that we did, and we've continued to refine and improve it based on our customer experiences too. So, it's a great representation of our internal experiences as well as what customers in other businesses and other industries are going through.

During the process, we walk the customer through everything that we've learned, a lot of best practices, a lot of our experiences, and it's extremely interactive.

Then, as we go through each one of our dimensions, or each one of the panels, we probe with the customer to discuss what resonates well with them, where they think they are in certain areas, and it's a very interactive dialog of what we've learned and know and what they've learned and know and what they want to achieve.

The outcome is typically a very robust document and conversation around how the customer should proceed with their own transformation, how they should sequence it, what their priorities are, and true deliverables -- here are the tasks you need to take on and accomplish -- either with our help or on their own.

It’s a great way of developing a roadmap, a strategy, and an initial plan on how to go forward with their own transformational efforts.



Designed around strategy

It's definitely designed around strategy. Most people, when they look at transformation, think about their data centers, their servers, and, to some extent, their storage. The goal of our workshop is to help them understand, in a much more holistic view, that it's not just about that typical infrastructure. It also has to do with program management, governance, and the dramatic organizational change that goes on when you go through transformation.

Applications, data, business outcomes -- all of this has to be tied in to ensure that, at the end of the day, you've implemented a very cost-effective solution that meets the needs of the business. That really is a game-changing move for your organization.

The financial elements are absolutely critical. There are very few businesses today that aren’t extremely focused on their bottom line and how they can reduce the operational cost.

Certainly, from the HP IT experience, we can show that, although it's not a trivial investment to make this all happen, the returns are normally not only a lot larger than your investment, but recur as year-over-year savings. That's money that typically can be redeployed to areas that really impact the business, whether manufacturing, marketing, or sales -- money that can be reinvested to grow the areas that will have a future impact on the business, while reducing the cost of your data centers and your operations.




Interestingly enough, what we find is that, even though you're driving down the cost of your IT organization, you're not giving up quality and you're not giving up technology. You actually have to implement new, robust technologies to help bring your cost down. Things like automation, operational efficiency, and ITIL processes all help you drive the savings while you upgrade your systems and environments to current and new technologies.

And, while we're on the topic of cost savings, a lot of times when we're talking to a customer about transformation, it's being driven by some critical IT imperative -- say, they're out of space in their data center and are about to look at building a new one or obtaining a colocation site. A lot of times we find that when we sit down and talk with them about how they can modernize their applications, tier their storage, go with higher-density equipment, and virtualize their servers, they can actually free up space and avoid that major investment in a new data center.

I'm working with a company right now that was looking at going to eight data centers. By implementing a lot of these new technologies -- higher virtualization rates, improvements to their applications, and better management of the data on their storage -- we're trying to get them down to two data centers. Right there is a substantial change, and that's just an example of things I have seen time and time again as we've done these workshops.

It's all about walking through the problems and the issues that are at hand and figuring out what the right answers are to meet their needs, while trying to control the expense.

Tang: Both workshops are great. It's not really an either/or. I would start with the Data Center Transformation Experience Workshop, because that sets the scene for how to approach this problem. What do I think about? What are the key areas of consideration? And it maps out a strategy on a grander scale.

The CI Maturity Model Assessment comes in when you think about implementation: let's dive in and really drill deep into your current state versus future state across the five domains.

You say, "Okay, what would be the first step?" A lot of times, it makes sense to standardize, consolidate. Then, what is the next step? Sometimes that’s modernizing applications, and so on. That’s one approach we have seen.

In the more transformational approach, whereby you have the highest level of buy-in, all the way up to the CIO and sometimes CFO and CEO, you lay out an actual 12-18 month plan. HP can help with that, and you start executing toward that.

A lot of organizations don’t have the luxury of going top-down and doing the big bang transformation. Then, we take that more project-based approach. It still helps them a lot going through these two workshops. They get to see the big picture and all the things that are possible, but they start picking low-hanging fruit that would yield the highest ROI and solve their current pain points.




Edelmann: Often, the journey is a little bit different from one customer to the other.

The Maturity Model Workshop you might think of as being at a little lower level than the Data Center Transformation Workshop. As a result of the Maturity Model Workshops, we produce a report for the customer to understand -- A is where I'm at, and B is where I'm headed. Those gaps that are identified during the course of the assessment help lead a customer to project definitions.

In some cases, there may be some obvious things that can be done in the short term and capture some of that low-hanging fruit -- perhaps just implement a blade system or something like that -- that will give them immediate results on the path to higher maturity in their transformation journey.

Multiple starting points

There are multiple starting points and consequently multiple exit points from the Maturity Model Workshop as well.

Grindle: The result of the workshop is really a sequenced series of events that the customer should follow up on next. Those can be very specific items, like gathering your physical server inventories so they can be analyzed, or other items such as running a Maturity Model Workshop, so that you understand where you are in each of the areas and what the gaps are, based on where you really want to be.

It’s always interesting when we do these workshops, because we pull together a group of senior executives covering all the domains that I've talked about -- program management, governance -- their infrastructure people, their technology people, their applications people, and their operational people, and it’s always funny, the different results we see.

I had one customer say to me that the deliverable we gave them at the end of the workshop was almost anticlimactic versus what they learned in the workshop. What they had learned during this one was that many people had different views of where the organization was and where it wanted to go.




Each was correct from their particular discipline but, from an overarching view of what we're trying to do for the business, they weren't all together on it. It's funny how the lights go on as people talk, and you get these interesting dialogs of people saying, "Well, this is how that is," and someone else going, "No, it's not. It's really like this."

It’s amazing the collaboration that goes on just among the customer representatives above and beyond the customer with HP. It’s a great learning collaborative event that brings together a lot of the thoughts on where they want to head. It ends up motivating people to start taking those next actions and figuring out how they can move their data centers and their IT environment in a much more logical, and in most cases, aggressive fashion than they were originally thinking.

The place to learn more would be hp.com/go/dct. To learn more about the CI Maturity Model, you can go to hp.com/go/cimm.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: HP.


Tuesday, April 12, 2011

HP application transformation news responds to rapid shifts in how apps are managed, hosted, perceived

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

HP made a series of announcements April 12 on application transformation. In advance of the news, BriefingsDirect met with an HP application transformation expert to dig into some new research and to better understand HP’s response to the fast-moving trends supporting the rationale for application transformation.

These same trends are pointing to a deeper payoff from the well-managed embrace of hybrid computing models. But applications also have to be delivered more securely, even in these hybrid implementations, while the new delivery models also mean adding automation and governance features across the entire service lifecycle. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

The new research describes how top-level enterprise executives are reacting to these fast-moving trends, which are buffeting nearly all global businesses. HP has delivered some new products and services designed to help companies move safely, yet directly, to transform their applications, improve their hosting options, and free up resources that can be used to provide the innovation needed to support better business processes. It's the support of business processes, after all, that's the real goal of these modernization activities.

And it was on this note that we welcomed Paul Evans, Worldwide Lead for Application Transformation for HP Enterprise Business. The discussion was moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Evans: We see three mega trends, and we validate this with customers. We haven’t just made these up. And, the three mega trends really come down to: One, people are evolving their business models.

When you get recessionary periods, hyper growth in particular markets, and the injection of new technologies, people look at how to make money and how to save money. They look at their business model and see they can make a change there. Of course, if you change the business model, then that means you change the business process. If you change the business process, the digital expression of a business process is an application. So, people need to change their apps.

So, you change your model and the process and need to change your app, because for most people now, the app is pretty much the digital expression of their business. For many of us, when we go online or do some form of transaction, at the end of the day, it’s an app that is authenticating this, validating the transaction, making the transaction, whatever it may be. That’s one mega trend we see happening.

The second mega trend is that technology innovation just keeps on going, whether it’s the infusion of cloud architectures that people are looking towards, or the whole mega trend around mobile connectivity. That is a game changer in their mind. It’s a radical transformational time for applications, as they accommodate and exploit those technologies.

No precedent

Some people just accommodate them and say, "Okay, we can do things better, maybe less expensively. We can be more innovative, more flexible in this way, or maybe we can do things differently. Maybe we can do things like we have never ever done them before."

I don’t believe there's any precedent for the mobile evolution that we're going to see coming towards us through smartphones, pads, or whatever it may be.

We can't look back over our shoulder and say, "What we did five years ago we'll just do that again, and it will be wonderful." I don’t think there is any precedent here. There is an opportunity for people to do some really innovative things.

Third, it's the whole nature of the changing workforce. The expectations of the people who are joining the online community every day are very different from those of the people at the other end of the spectrum and their experience.

When we look at young people joining the net and when we look at young people coming into the workforce, their expectation is very high in terms of what they want, what they need, and what they would like to achieve. This is in terms of the tools they utilize, whether it’s social networking, whether it’s just the fact that their view is that they are sort of always on the network, whether it’s through their mobile or whether it’s through their notebook or whatever device they use.

When we look at young people joining the net and when we look at young people coming into the workforce, their expectation is very high.



They're always on, and therefore the expectations of those people, who are going to be with us now for the next 60-70 years, start from a position of: we have always known the web, we have always liked the web, we have always had the web. So their view is, we just want to see more of it, and better. We want to see things as services rather than processes. The expectations of those people are also having a lot of effect. Those three mega trends affect the way that organizations have to respond.

Fundamental audience

So we actually went to the C-suite -- the CEO, CIO, and CFO -- and just tried to understand from them how they see things, too, because they are clearly a fundamental audience that we need to work with and understand their opinions and how their opinions have changed.

Two or three years ago, during the heavy economic times, cost was all it was about. Take cost out. Take cost out. Don't worry about the functionality; I need to take cost out. Now, that's changed. We've seen, both from the public and the private sector, the view that we've got to be innovative. Innovation is going to be the way we keep ourselves different and keep ourselves alive as we move forward.

A business requirement is that we need to innovate. If we stand still, we're probably going backwards. I know that sounds ridiculous, but you have to do more than just keep up to speed. You've got to accelerate. So, we asked the C-suite whether innovation therefore is important.

Some 95 percent of the people we talked to said innovation is key to the success of the organization. As I said, that was both public and private. Of course, the private sector would say that, but why would the public sector, when they don't have any competition? Because they are serving citizens who have expectations and want the same level of service in the public domain that we see from a private organization.

So, one, the audience said to us that innovation is key. Two, we didn't see any massive difference between public and private. Then, we asked them how they relate innovation and technology. Basically, they told us that technology is the innovation engine. It is the thing that makes them innovative. They're going to have new products and new services, and whether the technology is involved in the front end or the back end of that, it's involved. It's not an administration function anymore. It's the lifeblood of what they do.

They told us that technology is the innovation engine. It is the thing that makes them innovative.



So it's not HP saying this. It's our customers saying to us that technology will be the engine they use to be innovative going forward. So we asked them, "Well, technology is a big thing. Are we talking about mobiles? Are we talking about blade servers? What do you see?"

Applications and software that deliver more flexible processes -- that was the number one area where they would invest first, across all the audiences. So, their view was that they know there are lots of pieces to technology, but if they want to innovate, they see applications and software as the vehicle that gets them there.

Changing definition of 'application'

The whole expectation around the application is changing, and I think it’s irreversible. We're not going to go backward. We're going to keep on driving forward, because people like HP and others see the real value here. We're going to start to have a different approach to apps. It’s going to be more component driven and it’s not going to be monolithic.

We have to move away from the monolithic app anyway, because it's not a flexible vehicle. It's not something that easily delivers innovation and agility. People have already understood that the cost of maintaining those monolithic, legacy applications is not acceptable.

We're going to get far more sophisticated in how we do those things, and they'll be tailored to this whole notion of context awareness. So, they'll understand where they are and what they're doing. Things will change by virtue of the context of the person, where they're based or what device they are using.

I really get excited by the fact we're just starting down that road, and there is a lot of good stuff more to come.

If you're looking at core applications, something that is fundamental to your business, they're not so easy to just move around.



You can look at an on-premise supply, you can look at off-premise, you can look at outsourcing or out-tasking, or you can look to the cloud. There are a lot more choices available to people who maybe could lower the cost, and that has a direct impact on the bottom line.

But, if you're looking at core applications, something that is fundamental to your business, they're not so easy to just move around. The CIO looks at those and says, "I've got this massive investment. What do I do?" Then, he swings around and sees the world of cloud and mobile heading towards him and says, "Now I'm challenged, because the CFO or CEO is telling me I need performance improvement, or I need to get into these new markets, whatever it may be."

At the same time, he needs to cut cost, be really innovative, and explore all these new technologies. He wants to understand what he's going to do with the old ones, which may take money and funding to achieve. At the same time, he wants to exploit and be innovative with the new. That's a very difficult position to sit in the middle of and not feel the stretches and strains.

We sit with the CIOs on their side of the table and try to understand the balance of what the business is looking to achieve, whether that's improvement in product delivery, or marketing and customer satisfaction. The things that people look to a technology group for -- "Our website experience is losing us market share. Do something about it" -- that's in the CIO's regime. He looks around the other way and says, "But I've got all these line-of-business guys who also want me to keep on making product, and I need to understand what I do with legacy."

So, we sit on their side of the table and say, let's make a list, let's prioritize, let's understand some of the fundamentals of good business and your technology, and come up with a list of actionable items. You've got to have a plan that is not 12 months, because this is not a 12-month thing.

Building for the future

Anyone who's been keeping their eyes on HP for a while will have seen some significant investments, especially in the software area, and these preceded the research in which customers told us that apps and software are pretty important.

The investments in companies like ArcSight and Fortify have been there because, as they say in ice hockey terms, we're trying to predict where the puck is going to go, and we're trying to move toward where the puck will be, as opposed to where it is now.

We've been investing in acquisitions, but also investing in internal R&D, looking at the customer’s environment to see what things are really top of mind.



We've been investing in acquisitions, but also investing in internal R&D, looking at the customer’s environment to see what things are really top of mind. Effectively, we know this change is irreversible. The technology industry, whether you like it or not, never goes backward.

As I heard on a television program, we are compelled to travel into the future. It’s not being corny. That’s what we're doing. We're looking at this, so the new range of products and services that we're bringing out are around several of those core areas.

One is that people need to get a really good handle on what they've got. A lot of CIOs we meet, and a lot of people we talk to in the IT function, will openly admit that they have no clear idea what their portfolio looks like. They don't know how much it's costing them. They don't know what the components are. They don't know how well they're aligned with the business.

They don't know what sort of technology underpinnings they've got and what sort of security level they're implementing. That sounds like a pretty terrible picture, but unfortunately it's pretty much reality. There are certainly clients we meet who do know, but they're pretty rare.

So you've got to get your head around that first, because if you don't know what you've got, then how the hell can you move forward? So, we've invested a lot in Application Portfolio Management, a new software product, and combined that with a whole portfolio of services to exploit it, which really gives people a very rich graphical environment and the ability to understand the portfolio and make decisions.

This whole notion of where we've been in the past -- service-oriented architecture (SOA) and shared services -- is a real underpinning. Some people think SOA died. SOA did not die. It's actually one of the technological underpinnings for going forward in creating these shared services which we're going to be calling a cloud environment.

We tell people we can help them understand which apps are fit to go to the cloud and should go to the cloud. This is how we get them to the cloud. By the way, we'll also tell you the ones that shouldn't.
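To make that triage concrete, here's a minimal sketch of the kind of scoring such a portfolio exercise might do. To be clear, this is not HP's Application Portfolio Management product; the attributes, weights, and thresholds below are invented purely to illustrate ranking apps by cloud fitness.

```python
# Hypothetical sketch: triaging an application portfolio for cloud fitness.
# Attribute names, weights, and thresholds are illustrative, not HP's model.
from dataclasses import dataclass

@dataclass
class App:
    name: str
    business_criticality: int   # 1 (low) .. 5 (core to the business)
    coupling: int               # 1 (standalone) .. 5 (deeply entangled)
    data_sensitivity: int       # 1 (public) .. 5 (regulated)
    annual_cost: float          # maintenance cost in dollars

def cloud_fitness(app: App) -> int:
    """Higher score = better candidate to move to the cloud."""
    # Loosely coupled, non-core, low-sensitivity apps score highest.
    return (6 - app.coupling) + (6 - app.business_criticality) + (6 - app.data_sensitivity)

portfolio = [
    App("messaging", business_criticality=2, coupling=1, data_sensitivity=2, annual_cost=400_000),
    App("core-billing", business_criticality=5, coupling=5, data_sensitivity=5, annual_cost=2_000_000),
]

for app in sorted(portfolio, key=cloud_fitness, reverse=True):
    verdict = "candidate for cloud" if cloud_fitness(app) >= 9 else "keep in place for now"
    print(f"{app.name}: score {cloud_fitness(app)} -> {verdict}")
```

The point of the sketch is the decision structure, not the numbers: once the inventory exists, "which apps should go" becomes an answerable question.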

We get that question a lot. Of course, when you talk cloud, you invariably get people talking about the biggest excuse not to go to cloud, which is that it's not secure.

Unfortunately, there are unscrupulous people who know their way around certain bolt-ons, and have a way of infiltrating.



As I said, we're into irreversible change. We know there may be challenges, which is why the acquisitions of companies like ArcSight and Fortify, and the application security capabilities we've recently brought out in the products, have really changed the rules on security -- the point being not to view it as a bolt-on.

Anybody that is familiar with the notion of a stack knows we go from hardware at the bottom to application at the top with all the intermediate layers. We could bolt on a security enhancement to a piece of the stack with the view that we’ll stop you coming in.

Unfortunately, as you are aware, there are unscrupulous people who know their way around certain bolt-ons, and have a way of infiltrating. From reports in the press, it's very clear what can happen when they do. We've taken a totally different approach.

Make security something that is inherent within the whole process, so that once you are through the gatekeeper, you can't just have a lot of fun and games inside the code. Once you are in, you're not going to get very far. Also, monitor this in real-time. Don't make this a static process; make it a dynamic process, so that you can see vulnerabilities and react to them in real-time.
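As a rough illustration of bolt-on versus inherent security, consider the sketch below. It is not HP's Fortify or ArcSight technology; it simply shows the general idea of checks living inside each operation, emitting events a monitor could watch in real-time. All names and policies here are invented.

```python
# Illustrative sketch only: security checks living inside the code path,
# plus a simple event stream a real-time monitor could watch.
import logging
import re

logging.basicConfig(level=logging.INFO)
security_log = logging.getLogger("security")

ACCOUNT_ID = re.compile(r"^[A-Z]{2}\d{6}$")  # hypothetical account format

def transfer(account_id: str, amount: float) -> None:
    # Validate at the point of use -- don't assume the gatekeeper caught it.
    if not ACCOUNT_ID.match(account_id):
        security_log.warning("rejected transfer: bad account id %r", account_id)
        raise ValueError("invalid account id")
    if amount <= 0 or amount > 50_000:
        security_log.warning("rejected transfer: suspicious amount %s", amount)
        raise ValueError("amount out of policy")
    security_log.info("transfer ok: %s %.2f", account_id, amount)

transfer("AB123456", 100.0)          # logged as normal activity
try:
    transfer("'; DROP TABLE--", 9.0)  # caught inside the function, not at a perimeter
except ValueError:
    pass
```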

Hybrid delivery

People are coming to us and saying that they have some productivity applications that maybe shouldn't be running in an extremely expensive environment. We see a lot of people who run an app on a mainframe. We ask why, and the answer is because they always have. Maybe it's time that it didn't.

There is a new option, this whole notion of hybrid delivery with the cloud, and looking at different models to deliver things. If you're short of cash and trying to be innovative, why would you want to spend a whole truckload of cash on something that you don't need to? Go and spend it on something you should.

We need to help people understand how they can migrate their productivity apps. Microsoft Exchange is a good example. Big productivity -- messaging is a productivity application. Yes, it helps people do what they do every day.

If I'm running Exchange, I can move this to a private cloud environment, still within my firewall. The biggest challenge everybody faces is, how do you provision for it? How much infrastructure do I need to give people the response they're looking for?

The point is how to set up environments that can smooth those peaks and troughs. We believe Exchange services for private cloud are the way to do that.



Now, everyone runs out of processing power, and everyone runs out of storage. I do every day, especially storage. But the point is how to set up environments that can smooth those peaks and troughs. We believe Exchange services for private cloud are the way to do that.
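A small worked example shows why pooling helps. In the invented numbers below, two workloads peak at different hours, so one shared pool needs far less capacity than two silos each sized for its own peak.

```python
# Invented hourly demand (arbitrary units) for two workloads with offset peaks.
mail_demand  = [20, 80, 90, 40, 10, 10]
batch_demand = [70, 10, 10, 30, 80, 90]

siloed = max(mail_demand) + max(batch_demand)                   # each silo sized for its own peak
shared = max(m + b for m, b in zip(mail_demand, batch_demand))  # one pool sized for the combined peak

print(f"two silos need {siloed} units; a shared pool needs {shared}")
# -> two silos need 180 units; a shared pool needs 100
```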

The flip side is people who are using the Microsoft Dynamics customer relationship management (CRM) package. Maybe they don't want to be in the CRM business. They want to build relationships with customers, and want to understand who they are and what they are. Maybe they don't want to be in the whole provisioning business.

So, what we're offering is what we call Enterprise Cloud Services for Microsoft Dynamics CRM, which says we will put this on our service. The customer just buys a service through the net and pays per usage. If they don’t use it, they don’t pay.

We're going to see a lot more of that style of hybrid delivery where you pay per use. What I want, I use, and I pay for. What I don’t want, I put it back. I don’t have to take any responsibility for infrastructure and storage and all the stuff that goes with it. I want to give that responsibility to someone else and get on with my core business.
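The pay-per-use arithmetic itself is simple. The sketch below shows the general shape with invented rates and usage figures; it is not HP's actual pricing for the Dynamics CRM service.

```python
# Hypothetical pay-per-use bill: the rate and usage are made up for illustration.
RATE_PER_USER_HOUR = 0.05   # dollars; invented figure

monthly_usage = {            # user-hours actually consumed, per department
    "sales": 3_200,
    "support": 1_100,
    "marketing": 0,          # didn't use it, doesn't pay
}

bill = {dept: hours * RATE_PER_USER_HOUR for dept, hours in monthly_usage.items()}
total = sum(bill.values())

for dept, charge in bill.items():
    print(f"{dept:10s} ${charge:,.2f}")
print(f"{'total':10s} ${total:,.2f}")
```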

It's a SaaS model, among other options. There was a model once where everyone was on premises. Then, the whole notion of outsourcing came in, and people looked at that and felt it was pretty good. So, they went to outsourcing.

We believe that this whole notion will be called "hybrid delivery." It will be a mixture of all of them -- on premises, off premises, people running services inside their firewall as private clouds, or a publicly provisioned service, where it's provisioned for them outside their firewall and they buy what they want.

Also, one of the components of the announcement we are bringing out is what we call Cloud Service Automation, which we're extremely proud of. This is really for the people who want to get a cloud service up and running, want to do it fast, and don’t want to have to spend the next two years playing computer scientist. They want to get up, running, provisioned, and out there.

It just shows the pace of this market. We brought version one of this product out in January. In April, we're bringing out the next version with a significant level of enhancement around provisioning and manageability, and 4,000 scripts embedded. So, people can just assemble things.

Back to the question you asked me earlier about the way the apps are going, this is really about assembling the procedures the customer wants, and doing it through a drag-and-drop environment. Some people view that as nearly impossible.

These are what we call fundamental building blocks for people who are looking to deploy a cloud environment.



Cloud Service Automation runs on CloudSystem, which is enabled by BladeSystem Matrix. What that's doing is provisioning an infrastructure, giving people choices of network components, operating systems, and their virtualization environment. All of this is through drag-and-drop. It's just a case of staring at the screen and saying you want Linux on that, HP-UX on that, Windows on that, and VMware on that, and then dropping them on.

So these are what we call fundamental building blocks for people who are looking to deploy a cloud environment. But there are some real, down-to-earth tactical things you've got to think about, too.
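To give a feel for what assembling those building blocks might look like under the covers, here is a hedged sketch of a declarative provisioning request. The structure and field names are invented; they merely stand in for whatever artifact the drag-and-drop designer actually generates.

```python
# Hypothetical provisioning descriptor -- the kind of artifact a drag-and-drop
# designer might emit. Field names are invented for illustration.
blueprint = {
    "service": "order-entry",
    "nodes": [
        {"role": "web", "count": 4, "os": "Linux",   "hypervisor": "VMware"},
        {"role": "app", "count": 2, "os": "HP-UX",   "hypervisor": None},
        {"role": "db",  "count": 1, "os": "Windows", "hypervisor": None},
    ],
    "network": {"vlan": 210, "load_balanced_roles": ["web"]},
}

def provision(bp: dict) -> None:
    """Walk the blueprint and 'deploy' each node (printing stands in for real calls)."""
    for node in bp["nodes"]:
        for i in range(node["count"]):
            layer = node["os"] + (f" on {node['hypervisor']}" if node["hypervisor"] else " (bare metal)")
            print(f"provisioning {bp['service']}-{node['role']}-{i}: {layer}, vlan {bp['network']['vlan']}")

provision(blueprint)
```

The design point is that the designer captures intent, and an automation layer walks that intent and does the repetitive work.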

Take, for example, the client environment. We've talked a lot about the server, but the client world is changing at high speed, by virtue of people's desire to use devices that are not chained to the desk anymore -- whether that's more portable, notebook-type machines, smartphones, pads, or whatever. You've also got to take into account the fact that there are a lot of enterprise applications that you still use on traditional desktop PCs. You can't ignore those, and should not.

A year after launching, about 13 percent of the Windows XP base had moved to Windows Vista. So, the bulk of the market stayed with XP, for whatever reason. Now, they're saying they need to make that move, but some of these desktop apps are pretty sophisticated. This is not just simple productivity stuff. This is part of the enterprise portfolio. Therefore, they also need to get worried about it, big time and fairly quickly.

So what we've done for our customers is to look at their whole desktop environment and come up with what apps they've got, what they do, whether they're useful, whether they need all of them, and whether they could get rid of some. For the ones they want to move forward, do they need to change? Obviously, there are functional differences between XP and Windows 7.

We know all the gotchas. When you've used a special feature inside XP, we know how that will translate to Windows 7.



By virtue of our knowledge and experience, we can give you a very good return on your investment, because we know all of the differences. We know all the gotchas. When you've used a special feature inside XP, we know how that will translate to Windows 7.

We're just trying to help people see that this is really important. We have been sort of screaming and shouting for the last year or two, and we believe that people are really onto this now. HP has a role to play in pointing people in the right direction.

People just need to get their heads around it, and we appreciate that there are some big questions to answer. We don't trivialize this. This is not a game. This is serious. Serious problems need serious people to respond.

A lot of this is at our hp.com/go/applicationtransformation page. There, you can then go off and explore things that will interest you.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in: