Thursday, April 26, 2012

Case study: Strategic approach to disaster recovery and data lifecycle management pays off for Australia's SAI Global

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.

The latest BriefingsDirect case study discussion focuses on how business standards and compliance services provider SAI Global is benefiting from a strategic view of IT-enabled disaster recovery (DR).

Learn here how SAI Global has brought advanced backup and DR best practices into play for its users and customers. Examine, too, how this has not only provided business continuity assurance but also delivered beneficial data lifecycle management and virtualization efficiency improvements.

Mark Iveli, IT System Engineer at SAI Global, based in Sydney, Australia, details how standardizing DR has helped improve many aspects of SAI Global’s business reliability. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Iveli: When we started to get into DR, we handled it from an IT point of view and it was very much like an iceberg. We looked at the technology and said, "This is what we need from a technology point of view." As we started to get further into the journey, we realized that there was so much more that we were overlooking.

We were working with the businesses to go through what they had, what they didn’t have, what we needed from them to make sure that we could deliver what they needed. Then we started to realize it was a bigger project.

The initiative for DR started about 18 months ago with our board, and it was a directive to improve the way we had been doing things. That meant a complete review of our processes and documentation.

We had a number of business units that all had different strategies for their disaster recovery, and different timings and mechanisms to report on it.

Through the use of VMware Site Recovery Manager (SRM) in the DR project, we've been able to centralize all of the DR processes, provide consistent reporting, and be able to schedule these business units to do all of their testing in parallel with each other.

So we can make a DR session, so to speak, within the business and just run through the process for them and give them their reports at the end of it.

We've installed SRM 4.1 and our installation was handled by an outsource company, VCPro. They were engaged with us to do the installation and help us get the design right from a technical point of view.

Trying to make it a daily operational activity is where the biggest challenge is, because the implementation was done in a project methodology. Handing it across to the operational teams to make it a daily operation, or a daily task, is where we're seeing some challenges.

I'm a systems engineer with SAI Global, and I've been with the company for three years. When the DR project started to gather some momentum, I asked to be a significant part of the project. I got the nod and was seconded to the DR project team because of my knowledge of VMware.

That’s what my role is now -- keeping the SRM environment tuned and in line with what the business needs. That’s where we're at with SRM.

Complete review

The first 12 months of this journey have been all about cleaning up, getting our documentation up to spec, and making sure that every business unit understood and was able to articulate its environment well. Then, we brought all of that together so that we could ask what technology was going to encapsulate all of these processes and documentation to deliver what the business needs, which is defined by our recovery point objective (RPO) and recovery time objective (RTO).
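For readers unfamiliar with the terms: the RPO bounds how much data you can afford to lose (how stale the last good copy may be), while the RTO bounds how long recovery may take. As a minimal sketch, not drawn from SAI Global's actual environment, each objective reduces to a time comparison:

```python
from datetime import datetime, timedelta

def meets_rpo(last_replica_time, now, rpo):
    """True if the newest replicated copy is recent enough: the data
    you could lose is no older than the RPO window."""
    return (now - last_replica_time) <= rpo

def meets_rto(outage_start, services_restored, rto):
    """True if services came back within the RTO window."""
    return (services_restored - outage_start) <= rto

# Example: hourly replication measured against a 4-hour RPO.
now = datetime(2012, 4, 26, 12, 0)
last_replica = datetime(2012, 4, 26, 11, 0)    # replicated an hour ago
print(meets_rpo(last_replica, now, timedelta(hours=4)))   # True

# Example: a recovery drill measured against a 2-hour RTO.
outage = datetime(2012, 4, 26, 9, 0)
restored = datetime(2012, 4, 26, 10, 30)       # back in 90 minutes
print(meets_rto(outage, restored, timedelta(hours=2)))    # True
```

A DR test report is, at its core, these two checks run against measured timestamps for each business unit.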

SAI Global is an umbrella company. We have three to four main areas of interest. The first one, which we're probably best known for, is our Five Ticks brand, and that’s the ASIS standards. The publication, the collection, and the customization to your business are all done through our publishing section of the business.

That then flows into an assurance side of the business, which goes out and does auditing, training, and certification against the standards that we sell.

We continue to buy new companies, and part of the acquisition trail that we have been on has been to buy some compliance businesses. That’s where we provide governance risk and compliance services through the use of Board Manager, GRC Manager, Cintellate, and in the U.S., Integrity 360.

Finally, last year, we acquired a company that deals solely in property settlement, and they're quite a significant section of the business that deals a lot with banks and conveyancing firms in handling property settlements.

So we're a little bit diverse. All three of those business sections have their own IT requirements.

Gardner: Like many businesses, your brand is super important. The trust associated with your performance is something you will take seriously. So DR, backup and recovery, business continuity, are top-line issues for you.

Is there anything about what you've been doing as a company that you think makes DR specifically important for you?

Iveli: From SAI Global’s point of view, because of what we do, especially around the property settlement and interactions with the banks, DR is critical for us.

Our publishing business feels that their website needs to be available five nines. When we showed them what DR is capable of doing, they really jumped on board and supported it. They put DR as high importance for them.

As far as businesses go, everyone needs to be planning for this. I read an article recently where something like 85 percent of businesses in the Asia-Pacific region don’t have a proper DR strategy in place. With the events that have happened here in Australia recently with the floods, and when you look at the New Zealand earthquakes and that sort of stuff, you wonder where the businesses are putting DR and how much importance they've got on it. It’s probably only going to take a significant event before they change their minds.

Gardner: I was intrigued, Mark, when you said what DR is capable of doing. Do you feel that there is a misperception, perhaps an under-appreciation of what DR is?

Process in place

Iveli: The larger DR gap was that these business units each had a process in place, but it was an older process, and a lot of it was designed around a physical environment.

With SAI Global being almost 100 percent virtual, moving them into a virtual space opened their minds up to what was possible. So when we can sit down with the business units and say, "We're going to do this DR test," they ask if it will impact production. No, it won’t. How is it happening? "Well, we are going to do this, this, and this in the background. And you will actually have access to your application the way it is today, it’s just going to be isolated and fenced off."

They say, "This is what we've been waiting for. We can actually do this sort of stuff." They're starting to see and ask, "Can we use this to test the next version of the applications, and can we use this to map out our upgrade path?"

We're starting to move now into a slightly different world, but it has been the catalyst of DR that’s enabled them to start thinking in these new ways, which they weren’t able to do before.

Gardner: So being able to completely switch over and recover with very little interruption in terms of the testing, with very little downtime or loss, the opportunity then is to say, "What else can we do with this capability?"

Iveli: Absolutely. With this new process, we've taken the approach of baby steps, and we're just looking to get some operational maturity into the environment first, before we start to push the boundaries and do things like disaster avoidance.

Having the ability to just bring these environments across in a state that’s identical to production is eye-opening for them. Where the business wants to take it is the next challenge, and that’s probably how do we take our DR plan to version 2.0.

We need to start to work with the likes of VMware and ask what our options are now. We have this in place, people are liking it, but they want to take it into a more highly available solution. What do we do next? Use vCloud Director? Do we need to get our sites in an active/active pairing?

Whatever the next technology step is for us, that’s where the business is now starting to think ahead. That’s nice from an alignment point of view.

Gardner: Those DR maturation approaches put you in a position to further leverage virtualization. Is there sort of a virtuous adoption pattern, when you combine modern DR with widespread virtualization?

Iveli: Because all of a sudden, your machines are just a file on a data store somewhere, now you can move these things around. As the physical technologies continue to advance -- the speed of our networks, the speed of the storage environments, metro clustering, long haul replication -- these technologies are allowing businesses to think outside of the box and look at ways in which they can provide faster recovery, higher availability, more elastic environments.

You're not pinned down to just one data center in Sydney. You could have a data center in Sydney and a data center in New Zealand, for instance, and keep both of those sites online and in sync. That’s a couple of years down the track for our business, but it’s a possibility through the use of more virtualization technology.

Gardner: Any advice for those listening in who are beginning their journey? For those folks that are recognizing the risks and seeing these larger benefits, these more strategic benefits, how would you encourage them to begin their journey, what advice might you offer?

Iveli: The advice would be to get hired guns in. With DR, you're not going to be able to do everything yourself. So spend a little bit more money and make sure that you get some consultants in like VCPro. Without these guys, we probably would have struggled a little bit just making sure that our design was right. These guys ensured that we had best practice in our designs.

Before you get into DR, do your homework. Make sure that your production environment is pristine. Clean it up. Make sure that you don’t have anything in there that’s wasting your resources.

Come around with a strong business case for DR. Make sure that you've got everybody on board and you have the support of the business.

When you get into DR, make sure that you secure dedicated resources for it. Don't just rely on people coming in and out of the project. Dedicate people to it, and make sure they are fully engaged in both the design and the implementation aspects.

And as you progress with DR, incorporate it as early as you can into your everyday IT operations. Because we held it back from our operations team and then just handed it over for them to manage the hardware, the ESX layer, and the logical layers of the environment, they struggled to get their heads around what was what and where things should go.

And once it’s in place, celebrate. It can be a long haul. It can be quite a trying time. So when you finally get it done, make sure that you celebrate it.

Gardner: And perhaps a higher degree of peace of mind that goes with that.

Iveli: Well, you'll find out when you get through it, how much easier this is making your life, how much better you can sleep at night.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.

Monday, April 16, 2012

Virtualization simplifies disaster recovery for insurance broker Myron Steves while delivering efficiency and agility gains too

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.

When Hurricane Ike struck Texas in 2008, it became the second costliest hurricane ever to make landfall in the U.S. It was also a wake-up call for Houston-based insurance wholesaler Myron Steves & Co., which was not struck directly but nonetheless realized its IT disaster recovery (DR) approach was woefully inadequate.

Supporting some 3,000 independent insurance agencies in the Gulf Coast region, with many insured properties in that active hurricane zone, Myron Steves must have all its resources up and available if and when severe storms strike.

The next BriefingsDirect discussion then centers on how Myron Steves, a small- to medium-sized business (SMB), developed and implemented a modern disaster recovery and business continuity strategy based on a high degree of server and client virtualization.

Learn how Tim Moudry, Associate Director of IT, and William Chambers, IT Operations Manager, both at Myron Steves, made a bold choice to go essentially 100 percent server virtualized in 90 days. That then set the stage for a faster, cheaper, and more robust DR capability. It also helped them improve their desktop-virtualization delivery, another important aspect of maintaining constant availability no matter what.

The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Moudry: When Hurricane Ike came, we were using another DR support company, and they gave us facilities to recover our data. They were also doing our backups.

We went to that site to recover systems, and we had a hard time recovering anything. We were testing it, and it was really cumbersome. We tried to get servers up and running. We stayed there a whole day trying to recover and never even got a data center recovered.

So William and I were chatting and thinking that there's got to be a better way. That’s when we started testing a lot of the other virtualization software. We came to VMware, and it was just so easy to deploy.

We made a proposal to our executive committee, and it was an easy sell. We did the whole project for the price of one year of our old DR system.

Gardner: William, what were your top concerns about change?

Chambers: Our top concerns were just avoiding what happened during Ike. In the building we're in in Houston, we were without power for about a week. So that was the number one driver for virtualization.

Number two was just the amount of hardware. Somebody actually called us and said, "Can you take these servers somewhere else and plug them in and make them run?" Our response was no.

That was the lead-in to virtualization. If we wanted everything to be mobile like that, we had to go a different route.

Then, once you get into virtualization, you think, "Well, okay, this is going to make us mobile, and we'll be able to recover somewhere else quicker," but then you start seeing other features that benefit what you're doing at a smaller physical size. It's the mobility of the data itself, if you've got storage in place that will do it for you. Recovery times were cut down to nothing.

Simpler to manage


There was ease of backups, everything that you have to do on a daily maintenance schedule. It just made everything simpler to manage, faster to manage, and so on.

Gardner: And so for you as an SMB with 200 employees, what requirements were involved? You obviously don't have unlimited resources and you don't have a huge IT staff.

Chambers: It’s probably what any other IT shop wants. They want stability, up-time, manageability, and flexibility. That’s what any IT shop would want, but we're a small shop. So we had to do that with fewer resources than some of the bigger Exxons and stuff like that.

Moudry: And it can't cost an arm and a leg either. We're an insurance broker. We're not a carrier. We are between the carriers and agents. With our people being on the phone, up-time is essential, because they're on the phone quoting all the time. That means if we can’t answer our phones, the insurance agent down the street is going to go pick up the phone, and they're going to get the business somewhere else.

Also, we do have claims. We don't process all claims, but we do some claims, mainly for our stuff that's on the coast. After a hurricane, that’s when people are going to want that.

We have to be up all the time. When a disaster strikes, they are going to say, "I need to get my policy," and then they are going to want to go to our website to download that policy, and we have to be up.

Gardner: Why did you go 100 percent virtualized in such a short time?

SAN storage

Chambers: We did that because we’ve got applications running on our servers, things like rating applications, email, and our core applications. A while back, we separated the data volumes from the physical server itself. So the data volume is stored on a storage area network (SAN) that we access through iSCSI.

That made it so easy for us to do a physical-to-virtual (P2V) conversion on the physical server. Then in the evenings, during our maintenance period, we shut that physical server down and brought up the virtual one connected to the SAN, and we were good. That’s how we got through it so quickly.
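What makes this cutover safe and quick is ordering: only the boot image is converted ahead of time, and the shared iSCSI data LUN is never attached to the virtual machine until the physical server is fully down, so the two can never write to the same volume at once. A hypothetical sketch of such a runbook (the server name and LUN identifier are invented for illustration):

```python
def p2v_cutover_plan(server, lun):
    """Ordered maintenance-window steps for a P2V cutover where the
    data volumes already live on a SAN LUN. The physical server must
    be fully down before the VM attaches the same LUN, or both could
    write to the volume simultaneously and corrupt it."""
    return [
        f"convert {server} boot volume to a VM image (P2V)",  # done ahead of the window
        f"shut down physical {server}",
        f"attach SAN LUN {lun} to VM {server}-vm",
        f"power on VM {server}-vm",
        f"verify services on {server}-vm",
    ]

def lun_attach_is_safe(plan, server, lun):
    """The physical shutdown step must precede the LUN attach step."""
    shutdown = plan.index(f"shut down physical {server}")
    attach = plan.index(f"attach SAN LUN {lun} to VM {server}-vm")
    return shutdown < attach

plan = p2v_cutover_plan("mail01", "iqn.2008-01.com.example:vol7")
print(lun_attach_is_safe(plan, "mail01", "iqn.2008-01.com.example:vol7"))  # True
```

In practice the conversion itself would be done with a P2V tool, but the ordering constraint is the part worth encoding and checking.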

Moudry: William moved us to VMware first, and then after we saw how VMware worked so well, we tried out VMware View and it was just a no-brainer, because of the issues that we had before with Citrix and because of the way Citrix works. One session affects all the others. That’s where VMware shines, because everybody is on their independent session.

Gardner: Where are your data centers?

Moving to colos


Moudry: Right now it’s Houston and San Antonio, but we are moving all of our equipment to colos, and we are going to be in Phoenix and Houston.

Gardner: So that’s even another layer of protection, wider geographic spread, and just reducing your risk in general. Let’s take a moment and look at what you’ve done and see in a bit more detail what it’s gotten for you. Return on investment (ROI), do you have any sense, having gone through this, what you are doing now that perhaps covered the cost of doing it in the first place?

Moudry: We spent about $350,000 a year on our past DR solution. We didn’t renew that, and the VMware DR paid for itself within the year.

We're working with automation. We're getting less of a footprint for our employees. You just don’t hire as many.

And we are not buying equipment like we used to. We had 70 servers and four racks. It compressed down to one rack. How many blades are we running, William?

Chambers: We're running 12 blades, and the per-year maintenance cost on the servers is now 10 percent of what it was.
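The arithmetic behind those numbers is worth making explicit. Using the figures from the discussion (70 physical servers consolidated onto 12 blades, and maintenance at 10 percent of its former level; the dollar amount below is a hypothetical placeholder, not from the interview):

```python
def consolidation_ratio(physical_servers, hosts):
    """How many former physical servers each virtualization host absorbs."""
    return physical_servers / hosts

def maintenance_saving(old_annual_cost, new_fraction_of_old):
    """Annual saving when the new maintenance bill is a fraction of the old one."""
    return old_annual_cost * (1 - new_fraction_of_old)

# 70 servers onto 12 blades: nearly 6 former servers per blade.
print(round(consolidation_ratio(70, 12), 1))   # 5.8

# Hypothetical $100,000/year of old maintenance spend, now 10 percent of that:
print(maintenance_saving(100_000, 0.10))       # 90000.0
```

The $350,000/year figure quoted earlier was the separate DR contract that was not renewed, so it stacks on top of whatever the maintenance reduction saves.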

Gardner: I notice that you're also a Microsoft shop. Did you look at their virtualization or DR? How come you didn’t go with Microsoft?

Chambers: We looked at one of their products first. We've used the Virtual PC and Virtual Server products. Once you start looking at and evaluating theirs, it’s a little more difficult to set up. It runs well, but at that time, I believe it was 2008, they didn’t have anything like vCenter Site Recovery Manager (SRM) that I could find. It was a bit slower. All around, the product just wasn’t as good as the VMware product.

Moudry: I remember when William was loading it. I think he spent probably about 30 days loading Microsoft and he got a couple of machines running on it. It was probably about two or three machines on each host. I thought, "Man, this is pretty cool." But then he downloaded the free version of VMware and tried the same thing on that. We got it up in two or three days?

Chambers: I think it was three days to get the host loaded and then vCenter and all the products, and then it was great.

Moudry: Then he said that it was a little bit more expensive, but we weighed that against the cost of all the hardware that we would have had to buy with Microsoft. He loaded the VMware and put about 10 VMs on one host.

Increased performance


It was running great. It was awesome. I couldn’t believe that we could get that much performance from one machine. You'd think that running 10 servers on one machine, you would lose performance. I couldn’t believe that those 10 servers ran just as fast on one server as they did on 10.

Chambers: That was another key benefit. The footprint of ESXi was somewhat smaller than Microsoft's.

Moudry: It used the memory so much more efficiently.

Gardner: You mentioned vSphere, vCenter Site Recovery Manager, and View. Is that it? Are you up to the latest versions of those? What do you actually have in place and running?

Chambers: We have both in production right now, vCenter 4.1 and vCenter 5.0. We’re migrating from 4.1 to 5.0. Instead of doing the traditional in-place upgrade, we’ve got it set up to take a couple of hosts out of the production environment, build them new from scratch, and then just migrate VMs to them in the server environment.

It's the same thing with the View environment. We’ve got enough hosts so we can take a couple out, build the new environment, and then just start migrating users to it.

It all happened much quicker than we thought. Once we did a few of the conversions of the physical servers, it went by so fast that it just happened that way. We were ahead of schedule on our time frames and ahead on all of our budget numbers. Once we got everything in our physical production environment virtualized, we could start building new virtual servers to replace the ones we had converted, for better performance.

Without disruption

We were able to do it without disruption, and that was one of the better things that happened. We could convert a physical server during the day, while people were still using it, or create that VM for it. Then, at night, we took the physical down and brought the virtual up, and they never knew it.

Gardner: How about some other metrics of success?

Copying the template

Moudry: Making new servers is nothing. William has a template. He just copies it and renames it.

Chambers: The deployment of new ones is 20 minutes. Then, we’ve got our development people who come down and say, "I need a server just like the production server to do some testing on before we move that into production." That takes 10 minutes. All I have to do is clone that production server and set it up for them to use for development. It’s so fast and easy that they can get their work done much quicker.

Moudry: Rather than loading the Windows disk and having to load a server and get it all patched up.

Chambers: It gives you a like environment. In the past, they tested on a test server you built, which wasn't exactly the same as the production server. They could have bugs that they didn’t even know about yet, and this cuts down on the development time a lot.

Gardner: Any advice for folks who are looking at the same type of direction, higher virtualization, gaining the benefits of DR’s result and then perhaps having more of that agility and flexibility? What might you have learned in hindsight that you could share with some other folks?

Chambers: If you are going to use virtualization, get in and start using it on a small basis. Do a proof of concept, check performance, do all the due diligence that you need, and get into it. It will really pay off in the end.

Moudry: Have a change-control system that monitors what you change. When we first went over, William was testing out the VMs, and I couldn’t believe, as I was saying earlier, how fast it was. We have people who are on the phones quoting insurance. They have to have the speed. If it hesitates, the customer on the phone takes longer to give our people the information, our people have a hard time quoting it, and we’re going to lose the business.

When William moved some of these packages over to the VM software, it was not only running as fast, it was running faster on the VM than it had on a hard box. I couldn’t believe how fast it was.

Chambers: And there was another thing that we saw. We’ve got a lot of people working at home now, just because of the View environment and things like that. I think we’ve kind of neglected our inside people, because they'd rather work in a View environment, because it's so much faster than sitting on a local desktop.

Backbone speed

Moudry: When somebody works at home, they're at lightning speeds. Upstairs is a ghost town now, because everybody wants to work from home. That’s part of our DR also. The model is, "We have a disaster here. You go work from home." That means we don’t have to put people into offices anywhere, and with the Voice over IP, it's like their call-center. They just call from home.

Chambers: They can work from different devices now, too. I know we’ve got laptops out there, iPads, different type of mobile devices, and it's all secure.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.

Microsoft teams up with Ariba on B2B ecommerce front

Ariba is now teaming with Microsoft to streamline business, empowering buyers and sellers to better connect and collaborate across Microsoft applications and Ariba's commerce cloud services.

Announced last week at the AribaLIVE conference in Las Vegas, the joint effort paves the way for many more businesses and resellers globally to plug into what Ariba calls the Networked Economy by giving Microsoft Dynamics AX users automated access to the Ariba Network.

Microsoft Dynamics offers productivity tools and built-in contextual business intelligence that help decision-makers move faster. There are 300,000 businesses that use Microsoft Dynamics applications and 10,000 Microsoft Dynamics reselling partners worldwide.

The Ariba Network leverages cloud-based invoicing, supplier discovery and spend management services and an online trading community to drive collaboration and efficiency in business-to-business ecommerce. Companies use the network to transact more than $300 billion in commerce annually, and it's growing rapidly. [Disclosure: Ariba is a sponsor of BriefingsDirect podcasts.]

HP, for example, now uses Ariba to sell $1.3 billion in orders over the Ariba Network annually, changing the game of automation for IT orders and fulfillment, said Ariba President Kevin Costello on the Ariba main stage last week. Ariba, said Costello, provides a "neutral gateway" to extended enterprise business processes around supply chain and spend. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

The new strategic alliance with Microsoft is a response to trends like cloud computing and the convergence of enterprise applications, social media, and communities. Microsoft and Ariba are seeing more organizations looking beyond the four walls of the enterprise, extending processes and systems to connect and collaborate more with customers, suppliers, and other trading partners.

“Business has officially entered a new era,” said Tim Minahan, Chief Marketing Officer at Ariba. “It’s social, it’s mobile, it’s collaborative and it’s creating a major shift in how companies interact.”

In other AribaLIVE news, Ariba announced stronger ties to ThomasNet (née Thomas Register), bringing scads more product and detailed supplier data into the Ariba Network. Ariba is also now using Dell Boomi to accelerate integration with company systems for collaborative commerce, the company announced. And Ariba is partnering with Accenture on commerce strategies and business process outsourcing (BPO) alliances.

Joins other partners

For its part, Redmond is bringing Microsoft Dynamics AX to the Ariba commerce cloud integration table, joining other major customer relationship management (CRM) and enterprise resource planning (ERP) online and hybrid providers, such as Salesforce.com, SAP, and Oracle. Ariba is confident that the broad-based business applications integration and automation will set the stage for its network to gain critical mass and become a de facto standard for supply chain transaction and business collaboration discovery services.

“By combining the powerful capabilities of Microsoft Dynamics AX with the world’s largest business trading network, we can deliver a solution that enables companies of all sizes to connect with their trading partners electronically, helping businesses improve collaboration, grow their business with existing customers and discover new opportunities,” said Doug Kennedy, vice president of Partners and Existing Customer Service Programs, Microsoft Dynamics.

For Ariba’s part, the company is developing an adapter that will allow Microsoft Dynamics AX customers to connect to the Ariba Network. The Ariba Network offers cloud-based apps that allow organizations that share a business process to also share the technology that drives it. The network also offers a community of partners, as well as best practices in community-derived intelligence in areas like unique analytics, preferred financing and ratings.

With Microsoft joining the Ariba Network partnership gaggle, Ariba now has all major CRM and ERP providers tied into its collaborative commerce cloud. I would also definitely expect more Microsoft applications synergies with Ariba.

Updated P2P

At LIVE, Ariba also updated its Ariba Procure-to-Pay offerings, allowing users to create and deploy easy-to-search and access catalogs through which employees can find the goods and services they need and purchase them in compliance with preferred vendor agreements. The user experience looks and feels very much like Amazon.com. But it’s clearly more than just a slick interface.

The new catalog search and comparison capabilities in Ariba Procure-to-Pay certainly make it easier for buyers to find precisely the products they’re looking for, and also secure the best deals available. But the larger value comes with the budget monitoring and visual workflow features which allow all permissioned stakeholders to see where requests stand, and to be able to adjust processes on the fly to suit dynamic business needs. What's more, the expanded set of tools helps drive compliance with specific corporate purchasing policies.

These are building blocks to the larger networked effect of faster, automated, and scalable business transactions across all types of suppliers, users, and businesses. And the net effect of that is to change business substantially.

"The Networked Economy effect is far more transformative than we can imagine," said Vivek Kundra, former US CIO, and currently executive vice president at Salesforce.com, an AribaLIVE keynote speaker. Hard to argue with that, based on Ariba's growth and user adoption.

The business aspects of cloud: Let's get started

This guest post comes courtesy of Christian Verstraete, Chief Technologist, Cloud Strategy for HP.

By Christian Verstraete

I’ve spent the last several weeks addressing some of the business aspects of cloud and why and how companies move to the cloud. It’s time now to wrap this series up. The cloud discussion has been changing rapidly over recent months, shifting focus away from infrastructure to applications, services, and industry requirements.

Implementations in larger companies typically started with development & test activities within the IT department, while business teams used “shadow-IT” approaches to source services from external parties, potentially putting the enterprise at risk. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

The main reason is the perceived lack of responsiveness and agility of IT. As I pointed out, it is increasingly becoming clear that one size does not fit all in cloud computing, and that the CIO should become a “strategic service broker,” sourcing services from a series of cloud environments ranging from private to public clouds and from IaaS to SaaS services.

To achieve a successful transition to cloud computing, the CIO needs to address three areas:

Review the IT organization

Moving away from an architecture focused on the development, maintenance, and management of a series of applications running in a proprietary environment, to the sourcing of services from multiple sources, requires rethinking how the organization is structured and managing the changes that this implies. Gone are the deep technical siloes, each managed in isolation.

They should be replaced by a more holistic approach, where one team focuses on the implementation and operation of a cloud platform (if the enterprise decides to maintain its own private cloud) and other teams focus on the sourcing, development, maintenance, and management of the services needed by the business. What makes this transformation difficult is the fact that the move to cloud takes time, and for the foreseeable future, legacy and cloud environments will have to co-exist.

What makes this transformation difficult is the fact that the move to cloud takes time, and for the foreseeable future, legacy and cloud environments will have to co-exist.



This points to a gradual evolution of the IT organization, building a cloud-focused team in parallel with the reduction of the traditional one. One advantage is that many IT resources will retire over the next 10 years as the baby boomers are slowly replaced by millennials. One approach is to build the new organization around these newcomers, while keeping the baby boomers focused on the traditional environment.

But you may want to transfer some of the experience gained over the years; even if cloud is a different approach to IT, a lot of the fundamentals are still applicable. This will force you to review your existing IT organization thoroughly, assess its capabilities, and rebuild a new organization capable of addressing both the traditional and cloud worlds.

The good news is that others have done this before you. A couple of weeks ago I ran into a podcast with Teri Takai, CIO of the US Department of Defense, and Suren Gupta, EVP of IT at Allstate Insurance Company. They discuss how to transform traditional IT into cloud IT and provide some interesting hints. This is just one of many articles pointing out that something has to happen. Recognizing the need is easy, but transforming in flight is a little more complex.

Set-up service governance with the business

As stated in part 1, cloud is a vehicle for IT to respond faster to the needs of the business. To use cloud to its full extent, it’s key to understand those needs, isn’t it? And that is where governance comes in. Sitting down with the business and prioritizing their requirements is critical for the CIO to be successful. I cannot stress the prioritization enough, as in many situations the needs of the business vastly surpass the financial capabilities of IT to implement.

I’ve often used ROI as a way to help the business prioritize its requirements. What is the return on investment of a specific requested service? I cannot tell you how often I have heard about needs that were absolutely mandatory, but when discussing them without emotion and reviewing the added value to the bottom line or the increase in productivity, it quickly became clear there was none.

It also allows IT to demonstrate added value to the business and builds a true partnership spirit between both parties.

It helps the business teams look at things objectively and maximize their productivity. It also allows IT to demonstrate added value to the business and builds a true partnership spirit between both parties.

In many organizations, IT is seen as an entity that does not really understand what is needed, that makes things complicated, that always takes a long time to deliver, and so on. Building such governance provides the business teams with a better understanding of what IT is up to and helps them decide what is really important to them.
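The ROI-based prioritization described above comes down to simple arithmetic. Here is a minimal, hypothetical sketch -- the service names and figures are invented purely for illustration -- of how requested services might be ranked by return on investment:

```python
# Hypothetical ROI ranking of requested services; all names and
# figures below are invented for illustration.
from dataclasses import dataclass

@dataclass
class ServiceRequest:
    name: str
    annual_benefit: float  # estimated added value per year
    cost: float            # estimated cost for IT to deliver

    @property
    def roi(self) -> float:
        # Net gain relative to the investment required
        return (self.annual_benefit - self.cost) / self.cost

requests = [
    ServiceRequest("mobile expense app", annual_benefit=120_000, cost=80_000),
    ServiceRequest("legacy report rewrite", annual_benefit=20_000, cost=60_000),
    ServiceRequest("customer portal", annual_benefit=500_000, cost=200_000),
]

# Highest ROI first; a negative ROI flags an "absolutely mandatory"
# request that adds nothing to the bottom line.
for r in sorted(requests, key=lambda r: r.roi, reverse=True):
    print(f"{r.name}: ROI = {r.roi:.0%}")
```

Sorting by a single number is crude, of course; the point is to force an unemotional conversation about added value before committing IT’s limited budget.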

Develop an application roadmap

The third element the CIO needs to focus on is the development of an application roadmap. What do I mean by that? It’s the definition of which applications will be retained in the transformation and of the platform (legacy, private, virtual private, or public cloud) to which each is intended to migrate.

To perform this, a number of steps are required. Here are the main ones:

  1. Perform an inventory of the available applications. This step alone will provide you with many surprises. Don’t limit yourself to applications, but look at application instances, and if packaged applications are included, identify the different versions used.
  2. Establish the applications or application instances you will sunset. In other words, what are the applications you do not plan to use any longer? Here the governance is obviously mandatory, as this is a discussion between business and IT. Don’t hesitate to use the ROI approach I described earlier to focus attention, as the business by default sees the need to keep everything.
  3. For each of the sunset applications, define a replacement and sunset plan. In other words, how will this functionality be delivered in the future (by an existing application, by a new application, or not at all if it is no longer needed, etc.)? As a result of this exercise, new applications may have to be added to the inventory, as they will become part of the application environment in the future.

    This is a new technology with which you do not have a lot of experience yet, so you will run into roadblocks that will take time to resolve.


  4. For each of the applications in the inventory, identify the data sources required, identify potential latency and responsiveness issues and look at whether this application is a core or context application as described in part 4.
  5. Identify the sensitivity of the data sources. Are these core data items? Are they subject to privacy or other laws enforcing geographical boundaries, etc.?
  6. With all this information, run a workshop with the business to review where each application will run. What is the target platform? You may want to use the approach taken by ACME corporation in part 7. Look at the characteristics of the application gathered in step 4, but also at the associated data sources and their sensitivity, in identifying the target platform.
  7. And then, last, set up a plan. By when should each of the applications have been migrated to its target platform? Don’t be too optimistic in the first steps. This is a new technology with which you do not have a lot of experience yet, so you will run into roadblocks that will take time to resolve.
This close collaboration with the business should transform relations in the long run, improve IT’s responsiveness and provide an environment for growth.
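As a sketch only, the inventory these steps produce could be captured in a simple structure. The application names and the placement heuristic below are illustrative assumptions, not a prescribed method:

```python
# Hypothetical application-roadmap inventory; names and the placement
# rule are illustrative assumptions, not a prescribed method.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AppRecord:
    name: str
    version: str
    sunset: bool = False               # step 2: planned for retirement
    replacement: Optional[str] = None  # step 3: how the functionality survives
    data_sensitive: bool = False       # step 5: privacy or residency constraints
    target: Optional[str] = None       # step 6: chosen target platform

def assign_target(app: AppRecord) -> None:
    """Crude step-6 heuristic: applications with sensitive data stay private."""
    if app.sunset:
        app.target = None  # retired, so no migration target
    elif app.data_sensitive:
        app.target = "private"
    else:
        app.target = "public"

inventory = [
    AppRecord("payroll", "9.2", data_sensitive=True),
    AppRecord("old CRM", "4.1", sunset=True, replacement="SaaS CRM"),
    AppRecord("team wiki", "1.0"),
]

for app in inventory:
    assign_target(app)
    print(app.name, "->", app.target or f"sunset (replaced by {app.replacement})")
```

In practice the workshop in step 6 would weigh far more factors (latency, core vs. context, legal constraints); the value of even a toy structure like this is that the business and IT argue over one shared record per application.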

Conclusion

Cloud is a game changer. It is probably the first “revolution” in IT since the appearance of the mainframe, and it forces IT to rethink and transform itself. SMBs and start-ups have understood this quickly, as it allowed them to make a quantum leap forward in their use of IT in general and infrastructure in particular. Larger enterprises, having well-structured IT departments, have a little more difficulty understanding the value and making the step.

Having talked to many CIOs and business people, I do not believe it’s about whether to go to cloud, but when. And the first movers get the greatest benefits.

I do not believe it’s about whether to go to cloud, but when.



Let me finish this series with a little story. It may not seem relevant at the start, but read till the end and you’ll understand.

Two people are walking in the savanna when suddenly one of them spots a tiger. Unfortunately, the tiger has seen them too. He warns his colleague, who kneels down to put his running shoes on. The first guy bursts out laughing, saying: “Those shoes won’t help you run faster than the tiger, you know.” The second responds: “I don’t need to run faster than the tiger, I just need to run faster than you.”

Improving agility and responsiveness faster than their competitors allows companies to gain market share, even in depressed markets. That’s what I wish you. I hope this series helped you think this through and understand how you can use cloud as a way to beat your competition, not the tiger.

This guest post comes courtesy of Christian Verstraete, Chief Technologist, Cloud Strategy for HP.

You may also be interested in:

Thursday, April 12, 2012

SAP and databases no longer an oxymoron

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

In its rise to leadership of the ERP market, SAP shrewdly placed bounds around its strategy: it would stick to its knitting on applications and rely on partnerships with systems integrators to get critical mass implementation across the Global 2000. When it came to architecture, SAP left no doubt of its ambitions to own the application tier, while leaving the data tier to the kindness of strangers (or in Oracle’s case, the estranged).

Times change in more ways than one – and one of those ways is in the data tier. The headlines of SAP acquiring Sybase (for its mobile assets, primarily) and subsequent emergence of HANA, its new in-memory data platform, placed SAP in the database market. And so it was that at an analyst meeting last December, SAP made the audacious declaration that it wanted to become the #2 database player by 2015.

Times change in more ways than one – and one of those ways is in the data tier.



Of course, none of this occurs in a vacuum. SAP’s declaration that it will become a front-line player in the database market threatens to destabilize existing relationships with Microsoft and IBM, as longtime SAP observer Dennis Howlett commented in a ZDNet post. OK, sure, SAP is sick of leaving money on the table to Oracle. But if the database is the thing, then to meet its stretch goals, says Howlett, SAP and Sybase would have to grow that part of the business by a cool 6x to 7x.

But SAP would be treading down a ridiculous path if it were just trying to become a big player in the database market for the heck of it. Fortuitously, during SAP’s press conference on announcements of their new mobile and database strategies, chief architect Vishal Sikka tamped down the #2 aspirations as that’s really not the point – it’s the apps that count, and increasingly, it’s the database that makes the apps. Once again.

Main point

Back to our main point: IT innovation goes in waves. During the emergence of client/server, innovation focused on the database, where the need was mastering SQL and relational table structures; during the latter stages of client/server and the subsequent waves of Web 1.0 and 2.0, activity shifted to the app tier, which grew more distributed.

With emergence of Big Data and Fast Data, energy shifted back to the data tier given the efficiencies of processing data big or fast inside the data store itself. Not surprisingly, when you hear SAP speak about HANA, they describe an ability to perform more complex analytic problems or compound operational transactions. It’s no coincidence that SAP now states that it’s in the database business.

So how will SAP execute its new database strategy? Given the hype over HANA, how does SAP convince Sybase ASE, IQ, and SQL Anywhere customers that they’re not headed down a dead end street?

That was the point of the SAP announcements, which, in the press release, stated the near-term roadmap but shed little light on how SAP would get there. Specifically, the announcements were:
  • SAP HANA is now going GA and, at the low (SMB) end, comes out with aggressive pricing: roughly $3,000 for SAP BusinessOne on HANA; $40,000 for HANA Edge.

    It’s no coincidence that SAP now states that it’s in the database business.


  • Ending a 15-year saga, SAP will finally port its ERP applications to Sybase ASE, with a tentative target date of year end. HANA will play a supporting role as the real-time reporting adjunct platform for ASE customers.
  • Sybase SQL Anywhere would be positioned as the mobile front end database atop HANA, supporting real-time mobile applications.
  • Sybase’s event stream (CEP) offerings would have optional integration with HANA, providing convergence between CEP and BI – where rules are used for stripping key event data for persistence in HANA. In so doing, analysis of event streams could be integrated or directly correlating with historical data.
  • Integrations are underway between HANA and IQ with Hadoop.
  • Sybase is extending its PowerDesigner data modeling tools to address each of its database engines.
Most of the announcements, like HANA going GA or Sybase ASE supporting SAP Business suite, were hardly surprises. Aside from go-to-market issues, which are many and significant, we’ll direct our focus on the technology roadmaps.

We’ve maintained that if SAP were serious about its database goals, that it had to do three basic things:
  1. Unify its database organization. The good news is that it has started down that path as of January 1 of this year. Of course, org charts are only the first step as ultimately it comes down to people.
  2. Branding. Although long eclipsed in the database market, Sybase still has an identifiable brand and would be the logical choice; for now SAP has punted.
  3. Cross-fertilize technology. Here, SAP can learn lessons from IBM which, despite (or because of) acquiring multiple products that fall under different brands, freely blends technologies. For instance, Cognos BI reporting capabilities are embedded into Rational and Tivoli reporting tools.
Heavy lifting

The third part is the heavy lift. For instance, given that data platforms are increasingly employing advanced caching, it would at first glance seem logical to blend some of HANA’s in-memory capabilities into the ASE platform; however, architecturally, that would be extremely difficult, as one of HANA’s strengths -- dynamic indexing -- would be difficult to implement in ASE.

On the other hand, given that HANA can index or restructure data on the fly (e.g., organize data into columnar structures on demand), the question is, does that make IQ obsolete? The short answer is that while memory keeps getting cheaper, it will never be as cheap as disk and that therefore, IQ could evolve as near-line storage for HANA.

Of course that begs the question as to whether Hadoop could eventually perform the same function. SAP maintains that Hadoop is too slow and therefore should be reserved for offline cases; that’s certainly true today, but given developments with HBase, it could easily become fast and cheap enough for SAP to revisit the IQ question a year or two down the road.

SAP maintains that Hadoop is too slow and therefore should be reserved for offline cases.



Not that SAP Sybase is sitting still with Hadoop integration. They are providing MapReduce and R capabilities to IQ (SAP Sybase is hardly alone here, as most Advanced SQL platforms are offering similar support). SAP Sybase is also providing capabilities to map IQ tables into Hadoop Hive, slotting IQ as alternative to HBase.

In effect, that’s akin to a number of strategies to put SQL layers inside Hadoop (in a way, similar to what the lesser-known Hadapt is doing). And of course, like most of the relational players, SAP Sybase also supports bulk ETL/ELT loads from HDFS to HANA or IQ.

On SAP’s side for now is the paucity of Hadoop talent, so pitching IQ as an alternative to HBase may help soften the blow for organizations seeking to get a handle. But in the long run, we believe that SAP Sybase will have to revisit this strategy. Because, if it’s serious about the database market, it will have to amplify its focus to add value atop the new realities on the ground.

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

You may also be interested in:

Tuesday, April 10, 2012

Top 10 ways HP is different and better when it comes to cloud computing

Today HP filled out its cloud computing strategy with a broad-based set of products and services, and set a date (finally!) for its public cloud debut (May 10).

See my separate earlier blog on all the news. Disclosure: HP is a long-time sponsor of my BriefingsDirect podcasts. I've come to know HP very well in the past 20 years of covering them as a journalist, analyst and content producer. And that's why I'm an unabashed booster of HP now, despite its well-documented cascade of knocks and foibles.

By waiting to tip its hand on how it will address the massive cloud opportunity, HP has clearly identified Cloud as a Business (CaaB) as the real, long-term opportunity. And HP appreciates that the more it can propel CaaB forward for as many businesses, organizations, and individuals as possible, the more successful it will be, too.

Top 10 reasons

Here's why the cloud, as we now know it, is the best thing that could have happened to HP, and why HP is poised to excel from its unique position to grow right along with the global cloud market for many years. These are the top 10 reasons HP is different and better when it comes to cloud computing:

  1. Opportune legacy. HP is not wed to a profits-sustaining operating system platform, integration middleware platform, database platform, business applications suite, hypervisor, productivity applications suite, development framework, or any other software infrastructure that would limit its ability to rapidly pursue cloud models without being hurt badly, or fatally, in financial terms.

  2. Ecumenical support. HP has supported all the major operating environments -- Unix, Linux, Windows -- as well as all major open-source and commercial IT stacks, middleware, virtual machines, and applications suites, across development and deployment, longer and more broadly than anyone, anywhere. This includes product, technology, and services. Nothing prevents HP from doing the same as other innovators arrive -- or adjusting as incumbents leave. HP's robust support of OpenStack and KVM now continues this winning score.

  3. Cloud-value software. The legacy computing software products that HP is deeply entrenched with -- application development lifecycle, testing and quality assurance, performance management, systems management, portfolio management, business services management, universal configuration management databases, enterprise service bus, SOA registry, IT financial management suite (to name a few) -- are all cloud-enablement value-adds. And HP has a long software-as-a-service (SaaS) heritage in test and development and other applications delivery. These are not millstones on the path to full cloud business model adoption, they are core competencies.

    These are not millstones on the path to full cloud business model adoption, they are core competencies.



  4. The right hardware. HP has big honking Unix and high-performance computing platforms, yes, but it bet big and rightly on energy- and space-efficient blades and x86 architecture racks, rooms, pods and advanced containers. HP saw the future rightly in virtualization for servers, storage and networking, and its various lines of converged infrastructure hardware and storage acquisitions are very-much designed of, by, and for super-efficient, fit-for-purpose uses like cloud.

  5. Non-sacred cash cows. HP has a general dependency on revenue from PCs and printers, for sure. But, unlike other cash-cow dependencies from other large IT vendors, these are not incompatible with large, robust and ruthless cloud capitalization. PCs and printers may not be growing like they used to, but high growth in cloud businesses won't be of a zero-sum nature with traditional hardware clients either. As with item number 1 above, the interdependencies do not prohibit rapid cloud model pursuit.

  6. Security, still the top inhibitor to cloud adoption, is not a product but a process born of experience, knowledge, and implementation at many technology points. HP wisely built, bought and partnered to architect security and protection into its products and services broadly. As the role of public cloud provider as first line of defense and protection to all its users grows, HP is in excellent shape to combine security services across hybrid cloud implementations and cloud ecosystem customers.

  7. Data and unstructured information. Because HP supports many databases, commercial and open source, it can act as a neutral partner in mixed environments. Its purchase of Autonomy gives it unique strength in making unstructured data as analyzed, controlled, and managed as structured, relational data -- even combining the analytics value between and among them. The business value of data is in using it at higher, combined abstractions across clouds, a role HP can do more in, but nothing should hold it back. It's already providing MySQL cloud data services.

    HP can foster the technology services and new kinds of integrator of services along a cloud business process continuum that please both enterprise customers and vertical cloud providers.



  8. Technology services and professional services. HP, again the partner more than interloper, developed technology services that support IT, help desks, and solutions-level build support requirements, but also did not become a global systems integrator like IBM, where channel conflict and "coopetition" work against ecosystem-level synergies. It is these process synergies now -- a harmonic supply chain among partners for IT services delivery (not systems-level integration) -- that cloud providers need in order to grow. HP can foster the technology services and new kinds of integrator of services along a cloud business process continuum that please both enterprise customers and vertical cloud providers.

  9. Management, automation, remote services. Those enterprises and small and medium businesses (SMBs) making the transition from virtualization and on-premises data centers to cloud and hybrid models want to keep their hands firmly on the knobs of their IT, but see less and less of the actual systems. Remote management, unified management, and business service management are keystones of hybrid computing, and HP is a world leader. Again, these are core competencies for advanced cloud use by both enterprises and cloud providers. And it is HP's performance management insight and continuous improvement across cloud and business services that become the key differentiator.

  10. Neutrality, trust, penetration and localization. While HP is in bed with everyone in global IT ecosystems, they are not really married. The relationship with Microsoft is a perfect example. HP is large enough not to be bullied (we'll see how Oracle does with that), but not too aggressive such that HP makes enemies and loses the ability to deliver solutions to the end users because of conflict in the channel or ecosystem. Cloud of clouds services and CaaB values will depend on trust and neutrality, because the partner to the cloud providers is the partner to the users. Both need to feel right about the relationship. HP may be far ahead of all but a few companies in all but a few markets in this role.
Full cloud continuum

The breadth and depth of HP's global ambitions, evident from today's news, show its intent to provide a full cloud kit continuum -- from code to countless hybrid cloud options. Most striking for me is HP's strategy of enabling cloud provisioning and benefits for any enterprise, large or small. There are as many on-ramps to cloud benefits realization as there are types of users, as there should be. Yet all the cloud providers need to be competitive in their offerings, and they themselves will look to outsource that which is not core.

As Ali Shadman, vice president and chief technologist in HP's Technology Consulting group, told me, HP's cloud provider customers need to deliver Apple "iCloud-like services," and they need help to get there fast. They need help knowing how to bill and invoice, to provide security, to find the right facilities. HP then is poised to become the trusted uncle with a host of cloud strengths for these myriad cloud providers worldwide as they grow and prosper.

This is different from selling piecemeal the means to virtualized private-cloud implementations, or putting a different meter on hosted IT services. This is not one-size-fits-all cloud APIs, or Heathkits for cloud hackers. This is not cloud in a box. HP is approaching cloud adoption as a general business benefit, as a necessary step in the evolution of business as an on-demand architectural advancement. It knows that clouds themselves are made up of lots of services, and HP wants a big piece of that supply chain role -- as partner, not carnivore.

HP then becomes the trusted uncle with a host of cloud strengths for these cloud providers as they grow and prosper.



Shadman said that HP is producing the type of public cloud that appeals to serious and technical providers and enterprises, not hobbyists and -- dare I say it -- Silicon Valley startups on a shoestring. This is business-to-business cloud services for established global, regional, and SMB enterprises.

HP is showing that it can have a public cloud, but still be a partner with those building their own targeted public clouds. By viewing clouds as a supply chain of services, HP seeks to empower ecosystems at most every turn, recognizing that 1,000 points of cloud variability is the new norm.

We should expect managed service providers, independent software vendors, legacy enterprise operators to all want a type of cloud that suits their needs, their heritage and best serves their end customers. They will be very careful who they align with, who they outsource their futures to.

In other words, HP is becoming a booster of cloud models for any type of organization or business or government.



Of course, HP is seeking to differentiate itself from Amazon, Google, Microsoft, IBM, Oracle, VMware, Citrix, and Rackspace. It seems to be doing so through a maturity-model approach to cloud, not positioning itself as the only choice, or one size fits all. HP wants to grow the entire cloud pie. HP seems confident that if cloud adoption grows, it will grow well, too, perhaps even better.

In other words, HP is becoming a booster of cloud models for any type of organization or business or government, helping them to not only build or acquire cloud capabilities, but seeding the business and productivity rationale for cloud as a strategy … indefinitely.

This is very much like Google's original strategy over the past 10 years: anything that propels the Web forward for as many businesses, organizations, and individuals as possible makes Google more successful with its search and advertising model. This has worked, and continues to work, well for Google. It places a higher abstraction on the mission -- more than just selling more ads -- to grow the whole pie.

I believe we're early enough in the cloud game that the emphasis should be on safely enabling the entire cloud enterprise, on growing the pie for everyone. It's too soon to try to carve off a platform or application lock and expect to charge a toll for those caught in some sort of trap. Those days may actually be coming to an end.

You may also be interested in:

HP takes its cloud support approach public while converging its offerings around OpenStack, KVM

HP today announced major components and details of its HP Converged Cloud strategy, one that targets enterprises, service providers, and small- and medium-sized businesses (SMBs) with a comprehensive yet flexible approach designed to "confidently" grow the cloud computing market fast around the globe.

HP has clearly exploited the opportunity to step back and examine how the cloud market has actually evolved, and taken pains to provide those who themselves provide cloud-enabled services with an architected path to business-ready cloud services. From its new Cloud Maps to HP Service Virtualization 2.0, HP is seeking to hasten and automate how other people's cloud services come to market.

And finally, HP has put a hard date on its own public cloud offering, called HP Cloud Services, which enters public beta on May 10. See my separate analysis blog on the news. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

HP's public cloud is geared more as a cloud provider support service set, rather than a componentized infrastructure-as-a-service (IaaS) depot for end-user engineers, start-ups and shadow IT groups. HP Cloud Services seems designed more to appeal to mainstream IT and developers in ISVs, service providers and enterprises, a different group than early public cloud entrant and leader Amazon Web Services has attracted.

There's also an urgency in HP's arrangement of products and services enabling a hastened path to hybrid computing values, built on standards and resisting lock-in, with an apparent recognition that very little in cloud's future will be either fully public or private. HP has built a lot of its cloud architecture around "hardened open source technology from OpenStack," and chose the GNU-licensed KVM hypervisor for its implementation.

The OpenStack allegiance puts HP front and center in an open-source ecosystem of cloud providers, including Rackspace, Red Hat, and Cisco. IBM is said to be on board with OpenStack, too. And former OpenStack charter contributor Citrix recently threw its efforts behind CloudStack under the Apache Foundation.

HP's OpenStack and KVM choices also keep it at odds with its otherwise-partners Microsoft and VMware.

Data drives on-premises infrastructure spend now, but increased cloud spend in the future.



There were more details too today on HP's data services in the cloud strategy, apparently built on MySQL. We should expect more data services from HP, including information services from its recent Autonomy acquisition. The data and "information-as-a-service" support trend could be big wind in HP's cloud sails, and undercut its competitors on premises cash flow.

Data drives on-premises infrastructure spend now, but increased cloud spend in the future. As for the latter, what the data engine/platform under the cloud hood is matters less than whether the skills in the market -- like SQL -- can use it readily, as is the case with the xSQL crowd.

Furthermore, the economics of data services hosting may favor HP. If HP can help cloud providers to store, manage and analyze MySQL and related databases and information as a service with the most efficiency, then those providers using HP cloud support not only beat out on-premises data services on cost, they beat out other non-HP cloud providers, too. Could data analytics services then become a commodity? Yes, and HP could make it happen, while making good revenue on the infrastructure and security beneath the data services.

The announcement

But back to today's news:
  • Beginning May 10, HP Cloud Services will deliver its initial offering, HP public IaaS, as a public beta. These offerings include elastic compute instances or virtual machines, online storage capacity, and accelerated delivery of cached content. Still in private beta will be a relational database service for MySQL and a block storage service that supports movement of data from one compute instance to another.

  • HP’s new Cloud Maps extends HP CloudSystem by adding prepackaged templates that create a catalogue of application services, cutting the time to create new cloud services for enterprise applications from months to minutes, says HP.

  • HP Service Virtualization 2.0 tests the quality and performance of cloud or mobile applications without disrupting production systems, and includes access to restricted services in a simulated, virtualized environment.

  • HP Virtual Application Networks helps speed application deployment, automate management, and ensure network service-level agreements (SLAs) when delivering cloud and virtualized applications across the HP FlexNetwork architecture.


  • HP Virtual Network Protection Service adds security at the network virtualization management layer to help mitigate common threats.

  • HP Network Cloud Optimization Service helps clients enhance their networks to improve cloud-based service delivery by up to 93 percent compared to traditional download techniques, says HP.

  • HP Enterprise Cloud Services provides outsourced cloud management for private clouds, business continuity and disaster recovery services, and unified communications.

  • In targeting vertical industries, Engineering Cloud Transformation Services are designed to help product development and engineering design teams move to the cloud.
As part of the announcement, HP also delivered new Cloud Security Alliance training courses.
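Because HP's public IaaS is built on OpenStack, requests to its compute service should follow OpenStack Compute API conventions. As a rough, hedged illustration (the server name, image, and flavor identifiers below are placeholders, not actual HP Cloud values), here is what the JSON body for an OpenStack-style "create server" call looks like:

```python
import json

def make_server_request(name, image_ref, flavor_ref):
    """Build the JSON body for an OpenStack-style POST /servers call.

    The keys ("server", "name", "imageRef", "flavorRef") follow the
    OpenStack Compute API; the values passed in are illustrative only.
    """
    return json.dumps({
        "server": {
            "name": name,
            "imageRef": image_ref,
            "flavorRef": flavor_ref,
        }
    })

# Placeholder identifiers -- a real request would use IDs returned by
# the provider's image and flavor listings.
body = make_server_request("demo-vm", "ubuntu-12.04", "standard.xsmall")
print(body)
```

Any OpenStack-compatible endpoint would accept a body of this shape, which is part of the lock-in-resistance argument: the same request works against Rackspace, HP, or a private OpenStack deployment.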

More information about HP’s new cloud solutions and services is available at http://www.hp.com/go/convergedcloud2012.
