Tuesday, August 21, 2012

New levels of automation and precision needed to optimize backup and recovery in virtualized environments

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: Quest Software.

The benefits of server virtualization are clear for many more companies now as they reach higher percentages of workloads supported by virtual machines (VMs). But the complexity impacts on other IT functions can also ramp up quickly, in some cases jeopardizing the overall benefits.

When it comes to the relationship between increasingly higher levels of virtualization and the need for new data backup and recovery strategies, for example, the impact can be a multiplier of improvement when both are done properly and in context with one another.

The next BriefingsDirect enterprise IT discussion then focuses on how virtualization provides an excellent on-ramp to improved data lifecycle benefits and efficiencies. What's more, the elevation of data to the lifecycle efficiency level also forces a rethinking of the culture of data, of who owns data, and when, and who is responsible for managing it in a total lifecycle.

This is different from the previous and current system of data management as a fragmented approach, with different oversight for data across far-flung instances and uses.

Here to share insights on where the data availability market is going -- and how new techniques are being adopted to make the value of data ever greater -- we're joined by John Maxwell, Vice President of Product Management for Data Protection, at Quest Software. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Quest Software is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:

Gardner: Why has server virtualization become a catalyst to data modernization?

Maxwell: I think it’s a natural evolution, and I don’t think it was even intended on the part of the two major hypervisor vendors, VMware and Microsoft with their Hyper-V. As we know, five or 10 years ago, virtualization was touted as a means to control IT costs and make better use of servers.

Utilization was in single digits, and with virtualization you could get it much higher. But the rampant success of virtualization impacted storage and the I/O where you store the data.

Upped the ante

If you look at the announcements that VMware made around vSphere 5 and storage, and the recent launch of Windows Server 2012 Hyper-V, where Microsoft even upped the ante and added support for Fibre Channel with its hypervisor, storage is at the center of the virtualization topic right now.

It brings a lot of opportunities to IT. Now, you can separate some of the choices you make, whether it has to do with the vendors that you choose or the types of storage, network-attached storage (NAS), shared storage and so forth. You can also make the storage a lot more economical with thin disk provisioning, for example.

There are a lot of opportunities out there that are going to allow companies to make better utilization of their storage, just as they've done with their servers. It’s going to allow them to implement new technologies without necessarily having to go out and buy expensive proprietary hardware.

From our perspective, the richness of what the hypervisor vendors are providing in the form of APIs, new utilities, and things that we can call on and utilize, means there are a lot of really neat things we can do to protect data. Those didn't exist in a physical environment.

It’s really good news overall. Again, the hypervisor vendors are focusing on storage, and so are companies like Quest when it comes to protecting that data.

Gardner: What is it about data that people need to think differently about?

Maxwell: First of all, people shouldn’t get too complacent. We've seen people load up virtual disks, and one of the areas of focus at Quest, separate from data protection, is in the area of performance monitoring. That's why we have tools that allow you to drill down and optimize your virtual environment from the virtual disks and how they're laid out on the physical disks.

And even hypervisor vendors -- I'm going to point back to Microsoft with Windows Server 2012 -- are doing things to alleviate some of the performance problems people are going to have. At face value, your virtual disk environment looks very simple, but sometimes you don’t set it up or it’s not allocated for optimal performance or even recoverability.

There's a lot of education going on. The hypervisor vendors, and certainly vendors like Quest, are stepping up to help IT understand how these logical virtual disks are laid out and how to best utilize them.

See it both ways

At face value, virtualization makes it really easy to go out and allocate as many disks as you want. Vendors like Quest have put in place solutions that make it so that within a couple of mouse clicks, you can expose your environment, all your virtual machines (VMs) that are out there, and protect them pretty much instantaneously.

From that aspect, I don't think there needs to be a lot of thought, as there was back in the physical days, of how you had to allocate storage for availability. A lot of it can be taken care of automatically, if you have the right software in place.

That said, a lot of people may have set themselves up, if they haven’t thought of disaster recovery (DR), for example. When I say DR, I also mean failover of VMs and the like, as far as how they could set up an environment where they could ensure availability of mission-critical applications.

For example, you wouldn't want to put everything, all of your logical volumes, all your virtual volumes, on the same physical disk array. You might want to spread them out, or you might want to have the capability of replicating between different hypervisors, physical servers, or arrays.

Gardner: I understand that you've conducted a survey to try to find out more about where the market is going and what the perceptions are in the market. Perhaps you could tell us a bit about the survey and some of the major findings.

Maxwell: One of the findings that I find most striking, since I have been following this for the past decade, is that our survey showed that 70 percent of organizations now consider at least 50 percent of their data mission critical.

That may sound ambiguous at first, because what is mission critical? From a recoverability standpoint, it generally means data that has to be restored in less than an hour (the recovery-time objective) and/or data for which no more than an hour of changes can be lost (the recovery-point objective).

This means that if I have a database, I can't go back 24 hours. The furthest back I can afford to go is within an hour of losing the data, and in some cases you can't afford to lose even a second. It really comes down to that window.
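To make the two objectives concrete, here is a minimal sketch (purely illustrative, not any Quest tooling) of the classification Maxwell is describing, using the survey's one-hour thresholds:

```python
from datetime import timedelta

ONE_HOUR = timedelta(hours=1)

def is_mission_critical(required_rto: timedelta, required_rpo: timedelta) -> bool:
    """Classify data by the survey's working definition of mission critical.

    required_rto: how quickly the business needs the data restored
    required_rpo: how much recent data the business can afford to lose
    """
    return required_rto <= ONE_HOUR or required_rpo <= ONE_HOUR

# A database that must be back in 30 minutes with at most 15 minutes of
# data loss is mission critical by this definition.
print(is_mission_critical(timedelta(minutes=30), timedelta(minutes=15)))  # True
```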

I remember in the days of the mainframe, you'd say, "Well, it will take all day to restore this data, because you have tens or hundreds of tapes to do it." Today, people expect everything to be back in minutes or seconds.

The other thing that was interesting from the survey is that one-third of IT departments were approached by their management in the past 12 months to increase the speed of the recovery time. That really dovetails with the 50 percent of data being mission critical. So there's pressure on the IT staff now to deliver better service-level agreements (SLAs) within their company with respect to recovering data.

Terms are synonymous

The other thing that's interesting is that data protection and the term backup are synonymous. It's funny. We always talk about backup, but we don't necessarily talk about recovery. Something that really stands out now from the survey is that recovery or recoverability has become a concern.

Case in point: 73 percent of respondents, or roughly three quarters, now consider recovering lost or corrupted data and restoring mission-critical applications their top data-protection concern. Ten years ago, all we talked about was backup windows and the speed of backup. Now, only 4 percent consider the backup window itself their top concern.

So 73 percent are concerned about the recovery window, only 4 percent about the backup window, and only 23 percent consider the ability to recover data independent of the application their top concerns.

Those trends really show that there is a need. The beauty is that, in my opinion, we can get those service levels tighter in virtualized environments easier than we can in physical environments.

Gardner: What's the relationship between moving toward higher levels of virtualization and cutting costs?

Maxwell: You have to look at a concept that we call tiered recovery. That's driven by the importance now of replication in addition to traditional backup, and new technology such as continuous data protection and snapshots.

That gets to what I was mentioning earlier. Data protection and backup are synonymous, but it's a generic term. A company has to look at which policies or which solutions to put in place to address the criticality of data, but then there is a cost associated with it.

For example, it's really easy to say, "I'm going to mirror 100 percent of my data," or "I'm going to do synchronous replication of my data," but that would be very expensive from a cost perspective. In fact, it would probably be just about unattainable for most IT organizations.

Categorize your data

What you have to do is understand and categorize your data, and that's one of the focuses of Quest. We're introducing something this year called NetVault Extended Architecture (NetVault XA), which will allow you to protect your data based on policies, based on the importance of that data, and apply the correct solution, whether it's replication, continuous data protection, traditional backup, snapshots, or a combination.

You can't just do this blindly. You have got to understand what your data is. IT has to understand the business, and what's critical, and choose the right solution for it.
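As a purely illustrative sketch of what policy-driven, tiered protection can look like (the tier names, numbers, and methods below are invented for this example, not NetVault XA's actual configuration), a policy simply maps an application's criticality to the protection techniques and recovery objectives the business has agreed to:

```python
from dataclasses import dataclass

@dataclass
class ProtectionPolicy:
    tier: str        # business criticality of the data
    methods: list    # which protection techniques to apply
    rpo_minutes: int # acceptable data loss, in minutes
    rto_minutes: int # acceptable downtime, in minutes

# Illustrative tiers only; the real categories come from the business.
POLICIES = {
    "mission-critical": ProtectionPolicy(
        "mission-critical",
        ["synchronous replication", "continuous data protection"],
        rpo_minutes=0, rto_minutes=60),
    "important": ProtectionPolicy(
        "important",
        ["snapshots", "asynchronous replication"],
        rpo_minutes=60, rto_minutes=240),
    "standard": ProtectionPolicy(
        "standard",
        ["nightly backup"],
        rpo_minutes=1440, rto_minutes=2880),
}

# Applications are protected by policy, not by server or disk array.
APPLICATION_POLICY = {
    "Oracle Financials": "mission-critical",
    "SAP": "mission-critical",
    "HR application": "standard",
}

def protection_for(app_name: str) -> ProtectionPolicy:
    """Look up the protection policy an application's data falls under."""
    return POLICIES[APPLICATION_POLICY[app_name]]
```

The point of the policy indirection is the one Maxwell makes below: if the rules for, say, Oracle Financials change, you adjust the policy in one place rather than tracking down every server and disk array behind the application.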
What we see now are the traditional people who were responsible for physical storage taking over responsibility for virtual storage. Because of the mission criticality of data, they're going from looking at data as just a bunch of volumes, arrays, or logical unit numbers (LUNs) to thinking, "These are the applications, and this is the service level associated with them."

When they go to set up policies, they're not just thinking, "I'm backing up a server" or "I'm backing up disk arrays," but rather, "I'm backing up Oracle Financials," "I'm backing up SAP," or "I'm backing up some in-house human resources application."

Adjust the policy

And the beauty of where Quest is going is, what if those rules change? Instead of having to remember all the different disk arrays and servers that are associated with that, say the Oracle Financials, I can go in and adjust the policy that's associated with all of that data that makes up Oracle Financials. I can fine-tune how I am going to protect that and the recoverability of the data.

Gardner: How do we look at this shift and think about extending that policy-driven and dynamic environment at the practical level of use?

Maxwell: With the increased amount of virtual data out there, which just adds to the whole pot of heterogeneous environments, whether you have Windows and Linux, MySQL, Oracle, or Exchange, it's impossible for these people who are responsible for the protection and the recoverability of data to have the skills needed to know each one of those apps.

We want to make it as easy to back up and recover a database as it is a flat file. The fine line that we walk is that we don't want to dumb the product down. We want to provide intuitive GUIs, a user experience that is a couple of clicks away to say, "Here is a database associated with the application. What point do I want to recover to?" and recover it.

If there needs to be some more hands-on or more complicated things that need to be done, we can expose features to maybe the database administrator (DBA), who can then use the product to do more complex recovery or something to that effect.

We've got to make it easy for this generalist, no matter what hypervisor -- Hyper-V or VMware, a combination of both, or even KVM or Xen -- which database, which operating system, or which platform.

Again, they're responsible for everything. They're setting the policies, and they shouldn't have to be qualified. They shouldn't have to be an Exchange administrator, an Oracle DBA, or a Linux systems administrator to be able to recover this data.

We're going to do that in a nice, pretty package. Today, many people here at Quest walk around with a tablet PC as much as they do with their laptop. So our next-generation user interface (UI) around NetVault XA is being designed around a tablet computing scenario, where you can swipe data and your toolbar is on the left and right, as if you're holding the tablet and using your thumbs -- that type of thing.

Gardner: Are there any other technology approaches that Quest is involved with that further explain how some of these challenges can be met?

Maxwell: There are two things I want to mention. Today, Quest protects VMware and Microsoft Hyper-V environments, and we'll be expanding the hypervisors we support over the next 12 months. Certainly, there are going to be a lot of changes around Windows Server 2012 and Hyper-V, where Microsoft has made it a lot more robust.

There are a lot more things for us to exploit, because we're envisioning customer environments where they're going to have multiple hypervisors, just as today people have multiple operating systems and databases.

We want to take care of that, mask some of the complexity, and allow people to have cross-hypervisor recoverability. In other words, we want to enable safe failover of a VMware ESXi system to Microsoft Hyper-V, or vice versa.

There's another thing that's interesting, and it's something that has challenged the engineers here at Quest. It gets into the concept of how you back up or protect data differently in virtual environments. Our vRanger product is the market leader with more than 40,000 customers, and it's completely agentless.

As we've evolved vRanger over the past seven years, we've gone through three generations of the product and exploited various APIs. We've now moved to what is called a virtual appliance architecture: a vRanger service that performs backup and replication for one or hundreds of VMs, whether they exist on that one physical server or in a virtual cluster. So this virtual appliance can even protect VMs that exist on other hardware.

Scalability

The beauty of this is, first, the scalability. I have one software appliance running, and it's highly controllable: you can control the resources it uses while it replicates, protects, and recovers all of my VMs. That's easy to manage, versus having to have an agent installed in every one of those VMs.

Two, there's no overhead. The VMs don't even know, in most cases, that a backup is occurring. In the case of VMware, we use ESXi services that allow us to snapshot the virtual disks, called VMDKs, and back up or replicate the data.

There's a service in Windows called Volume Shadow Copy Service, or VSS for short, and one of the unique things Quest does with our backup software is synchronize the hypervisor snapshot of the virtual disks with the applications, via VSS, so we have a consistent point-in-time backup.

To communicate with VSS, we dynamically inject binaries into the VM that do the work and then remove themselves. So, for a very short time, there's something running in that VM, but then it's gone, and that allows us to have a consistent backup.
One of the beauties of virtualization is that I can move data without the application being conscious of it happening.


That way, from that one image backup that we've done, I can restore an entire VM, individual files, or in the case of Microsoft Exchange or Microsoft SharePoint, I can recover a mailbox, an item, or a document out of SharePoint.
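Putting those pieces together, a rough sketch of the agentless, application-consistent flow Maxwell describes might look like the following. All of the object and method names here are hypothetical stand-ins for illustration; they are not vRanger's or VMware's actual APIs.

```python
def backup_vm_consistently(hypervisor, vm, repository):
    """Agentless, application-consistent image backup of one VM.

    A sketch only: hypervisor, vm, and repository stand in for whatever
    interfaces the backup service really uses.
    """
    # Briefly inject a helper into the guest to trigger a VSS quiesce,
    # so applications (Exchange, SQL Server, ...) flush in-flight writes.
    helper = vm.inject_guest_helper()
    helper.vss_quiesce()

    # Snapshot the VM's virtual disks (VMDKs) at that consistent point.
    snapshot = hypervisor.snapshot_disks(vm)

    # The helper removes itself; nothing stays resident in the guest.
    helper.remove()

    # Copy or replicate the image out of band, then release the snapshot.
    image = repository.store(snapshot)
    hypervisor.release_snapshot(vm, snapshot)

    # From this one image, a restore can be a whole VM, a single file,
    # or an individual item such as an Exchange mailbox.
    return image
```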

Replicate data


We replicate data amongst various Quest facilities. Then, we can bring up an application that was running in location A at location B, on unlike hardware. It can be completely different storage, completely different servers, but since they're VMs, it doesn't matter.

That kind of flexibility is going to give every IT organization in the world the type of failover capabilities that used to exist only for the Global 1000, which had to set up a hot site or a second data center and use very expensive, proprietary, hardware-based replication. You had to have like arrays, like servers, and all of that, just to have availability.

Now, with virtualization, it doesn’t matter, and of course, we have plenty of bandwidth, especially here in the United States. So it’s very economical, and this gets back to our survey that showed that for IT organizations, 73 percent were concerned about recovering data, and that’s not just recovering a file or a database.

Two years ago, when people talked about cloud and data protection, it was just about considering the cloud as a target: I would back up to the cloud or replicate to the cloud. Now, we're talking about actually putting data protection products in the cloud, so you can back up the data locally within the cloud and then maybe even replicate it or back it up back to on-prem, which is kind of a novel concept if you think about it.

If you host something in the cloud, you can back it up locally there and then actually keep a copy on-prem. We're also looking at having generic support for failover into the cloud, working with various service providers where you can pre-provision VMs, for example.

You're replicating data. You sense that you have had a failure, and all you have to do is, via software, bring up those VMs, pointing them at the disk replicas you put up there.
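As a rough sketch of that software-driven failover (every object and method name here is a hypothetical stand-in, not any particular provider's SDK), the orchestration reduces to detecting the failure and powering on the pre-provisioned VMs against the replicated disks:

```python
import time

def monitor_and_failover(primary_site, cloud, replica_disks, check_interval_s=30):
    """Toy failover loop: when the primary site stops responding, bring up
    pre-provisioned cloud VMs and point them at the replicated disks."""
    while primary_site.is_healthy():                # hypothetical health check
        time.sleep(check_interval_s)

    for vm_name, replica in replica_disks.items():
        vm = cloud.get_preprovisioned_vm(vm_name)   # provisioned ahead of time
        vm.attach_disk(replica)                     # point the VM at its disk replica
        vm.power_on()

    cloud.redirect_traffic()                        # e.g., a DNS or load-balancer update
```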

Different cloud providers

Then, there's the question of what you do if a certain percentage of your IT apps are hosted in the cloud by different cloud providers. Do you want to be able to replicate the data between cloud vendors? Maybe you have data hosted at Amazon Web Services. You might want to replicate it to Microsoft Azure, or vice versa, or you might want to replicate it on-premises (on-prem).

So there's going to be a lot of neat hybrid options. The hybrid cloud is going to be a topic that we're going to talk about a lot now, where you have that mixture of on-prem, off-prem, hosted applications, etc., and we are preparing for that.

Gardner: Are there some best practices you've seen in the market about how to go about this, or at least to get going?

Maxwell: The number one thing is to find a partner. At Quest, we have hundreds of technology partners that can help companies architect a strategy utilizing the Quest data protection solutions.

Again, choose a solution that hits all the key points. In the case of VMware, you can go to VMware's site and look for VMware Ready-certified solutions. The same goes for Microsoft, whether it's Windows Server 2008 or 2012 certification. Make sure you're getting a solution that's truly certified. A lot of products say they support virtual environments, but they don't have that real certification, and as a result, they can't do a lot of the innovative things that I've been talking about.

So find a partner who can help, or, we at Quest can certainly help you find someone who can help you architect your environment and even implement the software for you, if you so choose. Then, choose a solution that is blessed by the appropriate vendor and has passed their certification process.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: Quest Software.

New Embarcadero AppWave for ISVs provides end-to-end mobile app experience for desktop PCs

Embarcadero Technologies today announced the availability of a new version of the AppWave business platform and PC app store designed to help independent software vendors (ISVs) improve their customer experience and drive new revenue growth.

AppWave for ISVs expands on the Embarcadero experience with AppWave for enterprise customers and their end-users. AppWave simplifies and expedites how desktop PC software is marketed, sold, delivered, tracked, and maintained, allowing existing client-server apps to be delivered more as a service. [Disclosure: Embarcadero Technologies is a sponsor of BriefingsDirect podcasts.]

Available via a free download, the AppWave platform gives users access to more than 250 free PC productivity apps for general business, marketing, design, data management, and development including OpenOffice, Adobe Acrobat Reader, 7Zip, FileZilla, and more, said Embarcadero, based in San Francisco.

AppWave users also can add internally developed and commercial software titles, such as Adobe Creative Suite products and Microsoft Visio, for on-demand access, control, and visibility into software titles they already own. Customer apps can also be converted for use on AppWave, via AppWave Studio tools.

Easily acquired apps

The AppWave platform converts valued but often cumbersome business software into easily acquired and consumed "apps," so business users don't have to wait in line for IT to order, install, and approve the work tools they really need.

With AppWave, companies have a consumer-like app experience with the software they commonly use. With rapid, self-service access to apps and real-time tracking and reporting of software utilization, the end result is a boost in productivity and a lowering of software costs. Pricing to enable commercial and custom software applications to run as AppWave apps ranges from $10 to $400 per app.

There's a dynamic shift under way in the PC software market. Software as a service (SaaS) and the consumerization of IT have changed the way software is acquired and used. Vendors such as Salesforce.com and Google originated in the cloud, and Microsoft is moving its Office franchise to cloud-based streaming. Even enterprise IT is transitioning to a services-delivery model. End-users now expect the flexibility of mobile apps on their desktops, and ISVs must look for ways to meet these evolving demands.

To me, ISVs need to play both offense and defense in this new climate. They need to provide their customers ongoing value in their existing apps and delivery models, while also providing a path to the future, especially for mobile-tier delivery. AppWave for ISVs helps provide more runway for existing code, while setting up an on-ramp to the pure SaaS and cloud models. This is a revenue path, too, allowing license revenue from existing apps to continue even as services revenues develop.

What's more, using AppWave, ISVs can show their customers how the total cost of supporting apps goes down, via more precise payments based on actual use. So the AppWave path also works for enterprises as they transition to a services and pay-as-you-go model.

I can also easily see Embarcadero creating its own network (or a partner network) to deliver AppWave apps from a cloud. This would be a "push" strategy for ISVs, and a "pull" strategy for enterprises, as end-users could discover and procure apps without the need for much IT involvement. Think of it as a vending machine for software, either from the corporate network or the cloud. For now, however, AppWave is for on-premises apps in enterprises.

“As a software vendor, we developed AppWave’s unique application delivery and consumption experience for our own enterprise customers,” said Wayne Williams, CEO of Embarcadero Technologies. “We’ve seen the benefits in revenue growth and customer satisfaction, and we believe other software vendors can emulate that success using AppWave for ISVs.”

AppWave for ISVs

Among the benefits in the new version of AppWave for ISVs:
  • ISVs can deliver new capabilities to market much sooner using AppWave’s frictionless “push” delivery model, keeping customers always up to date. They can instantly offer a full portfolio of on-demand applications, including beta and trial versions as well as updates and upgrades, without recoding or modifying existing products.

  • Through AppWave, ISVs can provide entire product portfolios on site at an enterprise for end-user review, trial, and use.

  • AppWave assists ISVs through new and expanded licensing opportunities. New users are acquired through peer referrals and on-demand trials. ISVs can also take advantage of integrated promotional capability including streaming banners and automated electronic direct mail to cross-sell additional applications, up-sell instant upgrades to fuller featured versions of products already being used, and offer other services designed to drive higher levels of customer retention, including extended service contracts.
User benefits

Customers and end users also benefit from the new version's features, including:
  • App discovery – Smart AppLinks and powerful search capabilities lead users to the right apps, delivering increased productivity and satisfaction.

  • App broadcast and socialization – Apps are published and streamed on-demand from AppWave, eliminating lagtime and the burden on enterprise IT.

  • App streaming – Desktop PC software is transformed into zero-install, zero-footprint apps that stream from public or private clouds.

Thursday, August 16, 2012

Columbia Sportswear extends deep server virtualization to improved ERP operations, disaster recovery efficiencies

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.

The latest BriefingsDirect end-user case study uncovers how outerwear and sportswear maker and distributor Columbia Sportswear has used virtualization to significantly improve its business operations.

We'll see how Columbia Sportswear's use of deep virtualization assisted in rationalizing its platforms and data center, as well as led to benefits in its enterprise resource planning (ERP) implementation. We'll also learn how virtualizing mission-critical applications formed a foundation for improved disaster recovery (DR) best practices.

Stay with us now to learn more about how better systems make for better applications that deliver better business results with Michael Leeper, Senior Manager of IT Engineering at Columbia Sportswear, and Suzan Frye, Manager of Systems Engineering at Columbia Sportswear, in Portland, Oregon. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Tell me a little bit about how you got into virtualization. What were some of the requirements that you needed to fulfill at the data center level?

Leeper: Pre-2009, we'd experimented with virtualization. It was one of those things I had my teams working on, mostly so we could tell my boss we were doing it, but there wasn't a significant focus on it. It was a nice toy to play with in the corner, and it helped us in some small areas, but there were no big wins there.

Columbia Sportswear is the worldwide leader in apparel and accessories. We sell primarily outerwear and sportswear products, and a little bit of footwear, globally. We have about 4,000 employees and some 50 physical locations around the world, not counting retail. The products are primarily manufactured in Asia, with sales and distribution happening in both Europe and the United States.

My teams out of the U.S. manage our global footprint, and we are the sole source of IT support globally from here.

In mid-2009, the board of directors at Columbia decided that we, as a company, needed a much stronger DR plan. That included the construction of a new data center for us to house our production environments offsite.

As we were working through the requirements of that project with my teams, it became pretty clear for us that virtualization was the way we were going to make that happen. For various reasons, we set off on this path of virtualization for our primary data center, as we were working through issues surrounding multiple data centers and DR processes.

Our technologies weren't based on the physical world anymore. We were finding more issues in physical environments than we were in virtual ones. So we started down the path of virtualizing our entire production world. By that point, mid-2010 had come around, and we were ready to go. We had built our DR stack and virtualized our primary data center, taking us to an 80 to 90 percent virtual machine (VM) rate.

Extremely successful


We were extremely successful in that process. We were able to move our primary data center over a couple of weekends with very little downtime to the end users, and that was all built on VMware technology.

About a week after we had finished that project, I got a call from our CIO, who said he had purchased a new ERP system, and Columbia was going to start down the path of a fully new ERP implementation.

I was being asked at that time what platform we should run it on, and we had a clean slate to look everywhere we could to find what we felt was the safest, most stable platform to run the crown jewels of the company, which is ERP. For us, that was going to be the SAP stack.

So it wasn't a hard decision to virtualize ERP for us. We were 90 percent virtual anyway. That’s what we were good at, and that’s where teams were staffed and skilled at. What we did was design the platform that we felt was going to meet our corporate standards and really meet our goals. For us that was running ERP on VMware.

Gardner: It sounds as if you had a good rationale for moving into a highly virtualized environment, but that then it made it easier for you to do other things.

Leeper: There are a couple of things there. Specifically in the migration to virtualization, we knew we were going to have to go through the effort of moving operating systems from one site to another. We determined that we could do that once on the physical side, relatively easily, and probably the same amount of effort as doing it once by converting physical to virtual.

The problem was that the next time we wanted to move services back from one facility to another in the physical world, we're going to have to do that work again. In the virtual space, we never had to do it again.

If we make the teams go through the effort of virtualizing a server and then moving it to another data center, all we need to do is the work once. For my engineers, any time we can get them to do the mundane stuff only once, it's better than doing it multiple times. So we got that effort taken care of in the early phase of the project to virtualize our environments.

For the ERP platform specifically, this was a net new implementation. We were converting from a JD Edwards environment running on IBM big iron to a brand-new SAP stack. We didn’t have anything to migrate. This was really built from scratch.

So we didn’t have to worry about a lot of the legacy configurations or legacy environments that may have been there for us. We got to build it new. And by that point in our journey, virtualized was the only way for us to do it. That’s what we do, it’s how we do it, and that's what we’re good at.

Across the board


Gardner: I saw some statistics that you went from 25 percent to 75 percent virtualization in about eight months, which is really impressive. How did you get the pace and what was important in keeping that pace going?

Frye: The only way we could do it was with virtualization, and by using the efficiencies we gained. We centrally manage all of IT and engineering globally out of our headquarters in Portland. When we were given the initial project to not only move our data center but provide DR services as well, it was a really easy sell to the business.

We could go to the business and explain to them the benefits of virtualization and what it would mean for their application. They wouldn’t have to rebuild and they wouldn’t have to bring in the vendor or any consultants. We can just take their systems, virtualize them, move them to our new data center, and then provide that automatic DR with Site Recovery Manager (SRM).

We had nine months to move our data center, and it was basically all hands on deck: everybody on the server engineering team, as well as the storage and networking teams. And we had executive support and sponsorship. It was very easy for us to market virtualization to the business and start down that path of socializing the idea. A lot of people, of course, were dragging their feet a little bit. We all know that story.

But once they realized that we could move their application, bring it back up, and then move it between data centers almost seamlessly, it was an instant win for us. We went from that 20 percent to 30 percent virtualization. We had about 75 percent when we were in the middle of our DR project, and today we’re actually at around 93 percent.

I think it surprises people that we have a "virtualize first" strategy today. Now it's assumed that your system will be virtual, along with all the benefits that come with it: the flexibility, the portability, the optimization, and the efficiencies.

But like most companies, we had to start with some of our lower tier or lower service-level agreement (SLA) systems, our development systems, and start working with the business on getting them to understand some of the benefits that they could gain by working with virtual systems.

Performance is there

Again people are always surprised. Will you have SQL virtualized? Do you have SAP virtualized? And the answer is yes, today we do, and the performance is there, the optimization is there, and that flexibility is there.

If you're just starting out today, my advice would be to go ahead and start small. Give the business what they want, do it right, and give it the resources it needs. Under-promise, over-deliver, and let the business start seeing the efficiencies they can realize, including some of the hidden ones.

We can support DR testing. We can support almost instant data refreshes, cloning, and snapping, so their upgrades are more seamless, and they have an easier back-out plan.

From an engineering and development perspective, we're giving them technologies that they could only dream of four or five years ago. And it’s really benefited the business in that we’re auto-provisioning. We’re provisioning in minutes versus days. We’re granting resources when needed.

It’s a more dynamic process for the business, and we’re really seeing that people are saying, "You’re not just a cost center anymore. You’re enabling us, you’re helping us to do what we need to do and basically doing it on-demand." So our team has really started shining these last few years, especially because of our high virtualization percentage.

Leeper: For a company that's looking to move to this virtualization space, they’ve got to get some wins. You’ve got to tackle some environments or some projects that you can be successful at, and hopefully by partnering with some business users and business owners who are willing to take a little bit of a chance.

If you set off trying to truly attack an entire data center virtualization project, you're probably not going to be really successful at it. There are a lot of ways that the business, application vendors, and other factors can throw roadblocks in your way.

Once you start chipping away at a couple of them and get beyond the easy stuff, go find one that on paper is maybe a little difficult, and get that one done. Then you can very quickly point back to success on that piece and start working your way through the rest of them.

Frye: As we were rolling out some of our Tier 1, mission-critical applications, the business decided that it wanted to test DR. They were going down the path of doing that the old-fashioned way, by backing up and restoring databases, which takes days and weeks.

We said, "We think we have a better way with SRM and our replication technologies. We have that data here. Why don't you let us clone that data and stand it up for you?" Literally, within 10 seconds, they had a replica of their data.

So we were enabling them to do their DR testing with SRM, on demand, when they wanted to do that, as well as giving them the benefit of doing the faster cloning and data refreshes. That was just a day-to-day, operational activity that they had no idea we could do for them.

It goes back to working with the business and letting them know what you can do. From a day-to-day, practical perspective, that was one of our biggest wins: going to specific business units and application owners and saying, "We think we have a better way. What do you think about this?" Once they got their hands on it, just looking at their faces was really a good moment for us.

Gardner: Where do you go next with your virtualization payoff?

Private cloud

Leeper: We consider ourselves to be running a private cloud on-site. My team will probably start laughing at me for using that term, but we do believe we have a very flexible and dynamic on-premises environment that we can deploy to based on business requests, and we're pretty proud of that. It works pretty well for us.

Where we go next is all over the place. One of the things we're pretty happy about is the fact that we can think about things a little differently now than probably a lot of our peers, because of how migratory our workloads can be, given the virtualization.

We started looking into things like hybrid cloud approaches and the idea of maybe moving some of our workloads out of our premises, our own data facilities, to a cloud provider somewhere else.

For us, that's not necessarily the classic public cloud discussion about scalability and those kinds of things. For us, it's about temporary space at times. If we are, say, moving an office where we have physical equipment on-premises, we want to be able to provide zero downtime.

It would be nice to be able to shut down their physical equipment, move their data and workloads up to a temporary spot for four or five weeks, and then bring them back at some point, so users never see an outage while they're working from home or on the road.

There are some interesting scenarios around significant DR for us in locations where we don't have real-time DR set up. For instance, we were looking into some issues in Japan a year or so ago, when Japan was unfortunately dealing with the earthquake and the tsunami's impact on power.

We were looking at how we could possibly move our data out of the country for a period of time, while the infrastructure, specifically power, was stabilizing, and then maybe bring it back when things settled down again.

Unfortunately, we weren't quite virtual at the edge there yet, but today we think that's something we could do. Thinking about how and where we move data so it's in the right place at the right time is where we think the next big win is for us.

Then, we get into the application profiles that users are asking for and their ability to spin up environments very quickly just to test something. It gets us away from IT being the roadblock to innovation. A lot of times, the business or our innovation teams come up with an idea for a concept, an application, or whatever it is. They don't have to wait for IT to fulfill their needs. The environments are right there for them.

So I challenge the teams routinely to think a little bit differently about how we've done things in the past, because our architecture is dramatically different than it was even two years ago.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.

Wednesday, August 15, 2012

ServiceMesh Agility Platform 8.0 aims to help enterprises rein in 'shadow IT' and enforce governance over rogue cloud usage

Cloud management and services orchestration platform provider ServiceMesh recently delivered Agility Platform 8.0, a major upgrade with features to help better govern and manage private, public, and hybrid cloud usage.

The platform gives Global 2000 enterprises a consolidated way to consistently manage, govern, orchestrate, and deliver cloud applications, platforms, and services. The need to control application services -- without squelching the innovation that self-provisioning enables -- has become acute for many organizations. Managing services separately for each cloud, SaaS provider, or on-premises platform is complex, expensive, and unwieldy.

And so ServiceMesh has identified the governance and policy-enabled orchestration of ecosystem-wide services as a crucial, burgeoning requirement for agile businesses, said Chairman and CEO Eric Pulier. "This is a policy-centric approach ... You need to gain a holistic view of applications," he said.

Agility Platform 8.0, which is delivered as an on-premises virtual appliance, allows companies to leverage services in an on-demand, self-service IT service management (ITSM) operating model. The platform remains independent of the cloud or enterprise applications and services. APIs are available for developers so that new services can leverage Agility right away, even as it supports legacy and existing hybrid-delivered services, said Pulier.

The result is to compress IT service delivery times, lower IT operating costs, and increase investments in IT innovation, said ServiceMesh, a venture-backed start-up in Santa Monica, CA. Commonwealth Bank of Australia is using ServiceMesh to improve its services management.

ServiceMesh has a bold vision of enterprise agility via holistic services orchestration capabilities that manage both on-premises and cloud-based services, with automation of service lifecycles through policy-based definitions and enforcement.

Enterprise customers today are clearly seeking solutions to the dual challenges of making their current IT organizations more responsive to business change, while also ensuring that business users will not get around internal IT resource constraints and delays by selecting an unauthorized external cloud provider’s self-service, pay-as-you-go IT resources. So-called shadow IT deployment of services muddies the water, especially around control and security. BYOD is another complicating factor for more and more organizations.

What's more, governance, risk and compliance (GRC) requirements are also demanding the types of centrally managed solutions from Agility Platform 8.0, said Pulier. Services management policies can vary from department to department, region to region, even as an enterprise wants to standardize on cloud or SaaS applications. Automated orchestration and events processing logic allows for such complexity of services delivery, while banking on the efficiency and cost-savings of consolidated services origins.

Accelerate adoption


The ServiceMesh platform allows organizations to accelerate the adoption of cloud services across the enterprise and move business applications into the cloud with complete governance and control, said Pulier. The Agility Platform automates the deployment and management of cloud applications and platforms and ensures the portability of these services throughout their lifecycle, independent of the underlying private, public or hybrid cloud environment.

I have certainly seen many ways emerge in the market to try and solve the services management complexity equation, and they vary from VDI, to app stores, to SOA registries, to SOA ESBs, to PPM and extended configuration management databases (CMDBs).

Pulier says the ServiceMesh architected platform provides "a better source of truth" than these other approaches about services across their full lifecycle, and across vast IT infrastructure heterogeneity. "It's more than a catalog, and federates back to the CMDB and other management capabilities," he said.

"You need a holistic view of the problem, and to provide a platform for the business, not just the IT department," he said. This approach "creates infrastructure- and cloud-independent applications management," said Pulier.

ServiceMesh is targeting its platform at both enterprises and cloud services providers. Expect more news on the channel at VMworld later this month. While the ServiceMesh platform is on-premises now, it may also be deployed at the cloud provider layer, and many of its capabilities can also be delivered as a service.

More specifically, Agility Platform 8.0 leverages an extensible policy engine that enables the creation and enforcement of an unlimited range of custom policies. Among the features ServiceMesh offers are:
  • Wizard-based capabilities to discover and automatically import existing virtual machines (VMs) deployed from other third-party provisioning tools in either private or public cloud environments. Upon VM import, the platform enforces user-specified policies on those VMs to ensure the desired governance, security, and control. VMs can then be published through a service catalog.
  • Capabilities to monitor cloud-provider performance and adherence to SLAs, and to compare different cloud services, measuring a range of different cloud-provider operational parameters, such as average VM provisioning time, number of failed or degraded instances, maximum number of concurrent provisioning requests executed and others.

    More enterprises are realizing that they must evolve toward a self-service, on-demand IT operating model to increase their ability to innovate and address new market opportunities quickly.


  • Support for hybrid cloud strategies by enabling workload portability across a broad range of heterogeneous private and public cloud technologies. The latest release extends these capabilities with support for Microsoft System Center Virtual Machine Manager 2012 and Microsoft Hyper-V.
  • Improved extensible policy-based governance controls with new policy types to govern the sharing of pay-as-you-go IT resources within large corporate settings, including new options to control IT resource scheduling, sharing, leasing and chargeback.
  • A cloud-native architecture that dynamically scales to meet system demand, using only the amount of resources needed to rapidly execute provisioning requests, orchestrate auto-scaling operations, and perform other management functions.

Monday, August 13, 2012

Ocean Observatories Initiative: Cloud and Big Data come together to give scientists unprecedented access to essential climate insights

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.

A fascinating global ocean studies initiative helps best define some of the IT superlatives around big data, cloud computing, and middleware integration capabilities.

The Ocean Observatories Initiative (OOI) and its accompanying Cyberinfrastructure Program aims to provide an unprecedented ability to study the Earth's oceans and climate using myriad distributed data centers and literally oceans' worth of data.

The scale and impact of the science's importance is closely followed by the magnitude of the computer science needed to make that data accessible and actionable by scientists. In a sense, the OOI and its infrastructure program, a major undertaking by the National Science Foundation, are constructing a big data-scale programmable and integratable cloud fabric for oceanography.

We’ve gathered three leaders to explain the OOI and how the Cyberinfrastructure Program may not only solve this set of data and compute problems, but perhaps establish a path to how future massive data and analysis problems are solved.

Here to share their story on OOI are:
  • Matthew Arrott, Project Manager at the OOI Cyberinfrastructure. Matthew's career spans more than 20 years in design leadership and engineering management for software and network systems. He’s held leadership positions at Currenex, DreamWorks SKG, Autodesk, and the National Center for Supercomputing Applications. His most recent work has been with the University of California as e-Science Program Manager while focusing on delivering the OOI Cyberinfrastructure capabilities.
  • Michael Meisinger, Managing Systems Architect for the Ocean Observatories Initiative Cyberinfrastructure. Since 2007, Michael has been employed by the University of California, San Diego. He leads a team of systems architects on the OOI Project. Prior to UC San Diego, Michael was a lead developer in an Internet startup, developing a platform for automated customer interactions and data analysis. Michael holds a master's degree in computer science from the Technical University of Munich and will soon complete a PhD in formal services-oriented computing and distributed systems architecture.
The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Meisinger: The Ocean Observatories Initiative is a large, US National Science Foundation project intended to build a platform for ocean sciences with an operational life span of 30 years.

It comprises a construction period of five years and will integrate a large number of resources and assets. These range from typical oceanographic assets, like instruments that are mounted on buoys deployed in the ocean, to networking infrastructure on the cyberinfrastructure side. It also includes a large number of sophisticated software systems.

I'm the managing architect for the cyberinfrastructure, so I'm primarily concerned with the interfaces to the oceanographic infrastructure, including data interfaces and networking interfaces, and, above all, with the design of the networked hardware and software system that comprises the cyberinfrastructure.

OOI’s goals include serving the science and education communities with their needs for receiving, analyzing, and manipulating ocean sciences and environmental data. This will have a large impact on the science community and the overall public, as a whole, because ocean sciences data is very important in understanding the changes and processes of the earth, the environment, and the climate as a whole.

Ocean sciences, as a discipline, hasn't yet received as much infrastructure and central attention as other communities. So the OOI is very important in bringing this to the community. It has an almost $400 million construction budget and an annual operations budget of $70 million for a planned lifetime of 25 to 30 years.

Gardner: What are the big hurdles here in terms of a compute requirements? What makes this so challenging?

Arrott: It has a number of key aspects that we had to address. It's best to start at the top of the functional requirements, which is to provide interactive mission planning and control of the overall instrumentation on the 65 independent platforms that are deployed throughout the ocean.

The issue there is how to provide a standard command-and-control infrastructure over a core set of 800 instruments, about 50 different classes of instrumentation, as well as be able to deploy -- over the 30-year lifecycle -- new instrumentation brought to us by different scientific communities for experimentation.

The next is that the mission planning and control is meant to be interactive and respond to emergent changes. So we needed an event-response infrastructure that allowed us to operate on scales from microseconds to hours in being able to detect and respond to the changes. We needed an ability to move computing throughout the network to deal with the different latency requirements that were needed for the event-response analysis.

Finally, we have computational nodes all the way down in the ocean, as well as at the shore stations, that are accepting or acquiring the data coming off the network. And we're distributing that data in real time to anyone who wants to listen to the signals and develop their own sense-and-response mechanisms, whether they're in the cloud, in their local institutions, or on their laptops.

Domain of control

The fundamental challenge was the ability to create a domain of control over instrumentation deployed by operators, and to make processing and data distribution agile enough to be deployed anywhere in the global network.

Gardner: Why is this a good time to try to solve this from a software distribution and data distribution perspective?

Richardson: It's the scale that's changed the architecture and deployment patterns that people have been using for these applications.

We can see that the OOI project is essentially enabling collaboration between vast numbers of sensors and signals and a comparatively smaller number of scientists, research institutions, and scientific applications doing analytics, in a similar way to how Facebook combines what people say, what pictures they post, and what music they listen to with everybody's friends, and then allows applications to be attached to that.

So it’s a huge technology challenge that would have been simply infeasible 12 years ago in the year 2000, when we thought things were big, but they were not. Now, when we talk about big data being masses of terabytes and petabytes that need to be analyzed all the time, then we’re starting to glimpse what's possible with the technology that’s been created in the last 10 years.

If we had been talking about this 12 years ago, in the year 2000, we would have been talking about companies like Google and Yahoo, which seemed enormous then but which we would consider to be of only moderate scale today.

Since then, many companies have appeared. For example, Facebook, which has many hundreds of millions of users connecting throughout the world, shares vast amounts of data all the time.

In addition to that, many of these companies have brought out essentially a platform capability, whereby others, such as Zynga, in the case of Facebook, can create applications that run inside these networks -- social networks in the case of Facebook.

Arrott: The challenge goes beyond just the big data challenge. It also now introduces, as Alexis said, the concept of the instrument as an equal partner with the human in the participation in the network.

So you now have to think about what it means to have a device that’s acting like a human in the network, and the notion that the instrument is, in fact, owned by someone and must be governed by someone, which is not the case with the human, because the human governs themselves. So it represents the notion of an autonomous agent in the network, as well as that agent having a notion of control that has to stay on the network.

Gardner: I’d like to try to explain for our audience a bit more about what is going on here. We understand that we have a tremendous diversity of sensors gathering in real-time a tremendous scale of data. But we’re also talking about automating the gathering and distribution of that data to a variety of applications.

Numerical framework

We’re talking about having applications within this fabric, so that the output is not necessarily data, but is a computational numerical framework that’s then distributed. So there's a lot of data, a lot of logic, and a lot of scale. Can one of you help step me through it all a bit more to understand the architecture of what’s being conducted here?

Meisinger: The challenge, as you mentioned, is very heterogeneous. We deal with various classes of sensors, classes of data, classes of users, or even communities of users, and with classes of technological problems and solution spaces.

So the architecture is based on a tiered, or layered, model, with the most invariant things at the bottom: things that shouldn’t change over the 30-year lifetime and that receive the highest level of attention.

Then, we go into our more specialized layered architecture where we try to find optimal solutions using today’s technologies for high-speed messaging, big data, and so on. Then, we go into specialized solutions for specific groups of users and specific sensors that are there as last-mile technologies to integrate them into the system.

So you basically see an onion-layer model of the architecture, with the externalization on the outside. Then, as you go toward the core, you approach the invariants of the system.

This architecture is based on defining a common interaction format. It’s based on defining a common data format. Our architecture is strongly communication-oriented, service-oriented, message-oriented, and federated.

As Matthew mentioned, it’s important that the individual resources, the agents, provide their own policies, rather than having a central bottleneck or a central governing entity in the system that defines policies.

Strongly federated


Arrott: Think of it as four core layers. The first is the underlying network resource-management layer. We talk about agents: they supply that capability to any process in the system, and we treat devices as processes in that layer.

The next layer up is the data layer, and the data layer consists of two core parts. One is the distribution system that allows data to be moved in real time from the source to the interested parties. It’s fundamentally a publish-subscribe (pub-sub) model. We're currently using point-to-point as well as topic-based subscriptions, but we're quickly moving toward content-based routing, which routes based on a selector provided by the consumer to direct traffic toward them.

The other part of the data layer is the traditional harvesting or retrieval of data from historical repositories.

The next layer up is the analytic layer. It looks a lot like the device layer, but it's focused on managing processes that use the big data and respond to the arrival of new data, or changes to data, in the network. Finally, there is the fourth layer, the mission planning and control layer, which we’ll talk about later.
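To make the distribution part of that data layer concrete, here is a minimal sketch of a topic-based subscription with a consumer-side selector, assuming a RabbitMQ broker on localhost and the Python pika client; the exchange name, routing keys, and temperature filter are invented for illustration and are not the OOI project's actual conventions.

```python
# Illustrative only: the exchange name, routing keys, and filter threshold
# are invented and are not the OOI project's actual conventions.
import json
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()

# Topic-based subscription: the broker routes on a routing-key pattern.
ch.exchange_declare(exchange="ooi.data", exchange_type="topic", durable=True)
queue_name = ch.queue_declare(queue="", exclusive=True).method.queue
ch.queue_bind(exchange="ooi.data", queue=queue_name, routing_key="sensor.ctd.*")

def on_message(channel, method, properties, body):
    reading = json.loads(body)
    # Content-based selection, approximated here by a consumer-side predicate,
    # since the selector lives with the consumer rather than in the broker.
    if reading.get("temperature_c", 0.0) > 20.0:
        print("matched reading:", reading)

ch.basic_consume(queue=queue_name, on_message_callback=on_message, auto_ack=True)
ch.start_consuming()
```

Broker-side content-based routing would push that selector into the messaging fabric itself, which is the direction Arrott describes.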

Gardner: Alexis, when you saw the problem that needed to be solved here, you had a lot of experience with the Advanced Message Queuing Protocol (AMQP). Why did this problem seem to be the right fit for that particular technology, for RabbitMQ, and for a messaging infrastructure in general?

Richardson: What Matthew and Michael have described can be broken down into three fundamental pieces of technology.

Lot of chatter

Number one, you have a lot of chatter coming from these devices -- machines, people, and other kinds of processes -- and that needs to get to the right place. It's being chattered, or twittered, away, possibly at high rates and high frequencies, and it needs to get to just the set of receivers following that stream, very similar to how we understand distribution to our computers. So you need what’s called pub-sub, which is a fundamental technology.

In addition, that data needs to be stored somewhere. People need to go back and audit it, to pull it out of the archive and replay it, or view it again. So you need some form of storage and reliability built into your messaging network.

Finally, you need the ability to attach applications that will be written by autonomous groups, scientists, and other people who don’t necessarily talk to one another, may choose different programming languages, and may be deploying their applications, as Matthew said, on their own servers or on multiple different clouds of their choosing, through what you would like to be a common platform. So you need this to be done in a standard way.

AMQP is unique in bringing together pub-sub and reliable messaging with standards, so that this can happen. That is precisely why AMQP is important. It's like HTTP and email's SMTP, but it’s aimed at messaging: publish-subscribe, reliable message delivery, done in a standard way. And RabbitMQ is one of the first implementations, and that’s how we ended up working with the OOI team -- because RabbitMQ provides these capabilities and does them well.
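As a rough illustration of the "storage and reliability" point, the following publisher sketch (again assuming a local RabbitMQ broker and the pika client, with invented exchange and queue names) declares durable resources and marks messages persistent so an archiving consumer can replay them later.

```python
# Publisher sketch: a durable exchange and queue plus persistent delivery give
# the stored, replayable messaging described above. Names are assumptions.
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()

ch.exchange_declare(exchange="ooi.data", exchange_type="topic", durable=True)
ch.queue_declare(queue="ooi.archive", durable=True)
ch.queue_bind(exchange="ooi.data", queue="ooi.archive", routing_key="sensor.#")

# delivery_mode=2 asks the broker to write the message to disk so that it
# survives a restart and can later be pulled from the archive queue and replayed.
ch.basic_publish(
    exchange="ooi.data",
    routing_key="sensor.ctd.buoy42",
    body=b'{"temperature_c": 18.4}',
    properties=pika.BasicProperties(delivery_mode=2),
)
conn.close()
```

Durability on the queue and persistence on the message are what let an auditing consumer come back later and replay the stream, as Richardson notes.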

Gardner: I’d also like to go back to the project itself, and give our listeners a sense of what this can accomplish. I’ve heard it described as "the Hubble Telescope of the oceans": the notion that we're providing capabilities that do not currently exist for oceanographers.

Let’s go back to the oceanography and the climate science. What can we accomplish when this data is delivered in the fashion we’ve been discussing, where the programmability is there, where scientists can interact with these sensors and data, ask them to do things, and then get information back in a format that’s not raw but is, in fact, actionable intelligence?

Matthew, what could possibly happen in terms of the change in our understanding of the oceans from this type of undertaking?

Meisinger: The primary mission of our project is to provide this platform, the space telescope in the ocean. And it’s not a single telescope. In our case, it's a set of 65 buoys and locations in the ocean, and even a cable that runs 1,000 miles along the seafloor of the Pacific Northwest, providing 10-gigabit Ethernet connectivity and high power to the instruments.

It’s a model where scientists have to compete. They have to compete for a slot on that infrastructure. They'll have to apply for grants and they'll have to reserve the spot, so that they can accomplish the best scientific discoveries out of that system.

It’s kind of the analogy of the space telescope that will bring ocean scientists to the next level. This is our large platform, our large infrastructure, that lets the best scientists develop their research and produce the best results. That’s the fascination that I see in this project.

Arrott: The way to think about this can be summed up as continual presence in the oceans at multiple scales through multiple perspectives.

The scope of the OOI is such that it is considered to be observing the ocean at multiple scales -- coastal, regional, and global. It is an expandable model. One of the largest classes of applications that we’ll attach to the network is modeling, in particular nowcast and forecast modeling.

Happening at scale

Once you have that ability to actually model the oceans and predict where it’s going, you can use that to refocus the instrumentation on emergent events. It's this ability to have long-term presence in the ocean, and the ability to refocus the instrumentation on emergent events, that really represents the revolutionary change in the formation of this infrastructure.

Gardner: Is this, in some ways, like taking the weather of the oceans?

Arrott: There's a movement to instrument the Earth, so that we can understand from observation, as opposed to speculation, what the Earth is actually doing, and from a notion of climate and climate change, what we might be doing to the Earth as participants on it.

The weather community, because of the demand for commercial need for that weather data, has been well in advance of the other environmental sciences in this regard. What you'll find is that OOI is just one of several ongoing initiatives to do exactly what weather has done.

Science more mature


Gardner: How is it that cloud computing is being brought to bear, making this productive, and perhaps even putting it ahead of where weather observation and prediction have been?

Richardson: Happily, that’s an easy one. Imagine if a person or scientist wanted to process very quickly a large amount of data that’s come from the oceans to build a picture of the climate, the ocean, or anything to do with the coastal properties of the North American coast. They might need to borrow 10,000 or 20,000 machines for an hour, and they might need to have a vast amount of data readily accessible to those machines.

In the cloud, you can do that, and with big data technologies today, that is a realistic proposition. It was not five to 10 years ago. It’s that simple.

Obviously, you need to have the technologies, like the messaging that we talked about, to get that data to those machines so it can be processed. But the cloud is really there to bring it all together and to make it seem, to the application owner, like something that’s just ready for them to acquire; and when they don’t need it anymore, they can put it back and someone else can use it.

Gardner: How are cloud models enabling this at an unprecedented scale, but also at an efficient cost?

Meisinger: It does enable computing at unprecedented scale. A lot of the earth's environment is changing. Assume that you’re interested in tracking the effect of a hurricane somewhere in the ocean and you’re interested in computing a very complex numerical model that provides certain predictions about currents and other variables of the ocean. You want to do that when the hurricane occurs and you want to do it quickly. Part of the strategy is to enable quick computation on demand.

The OOI architecture, in particular its common execution infrastructure subsystem, is built to enable this access to computation and big data very quickly. You want to be able to make use of an execution provider’s infrastructure as a service very quickly, to run your own models with the infrastructure that the OOI provides.

Then, there are other users who want to do things more regularly, and they might have their own hardware. They might run their own clusters, but in order to be interoperable, and in order to have overflow capacity, it’s very important to have cloud infrastructure as a means of making the system more homogeneous.

So the cloud is a way of abstracting the compute resources of the various participants in the system, be they commercial or academic cloud-computing providers or institutions that offer their own clusters as cloud systems. They all form a large compute network, a compute fabric, so that computation can run in a predictable way, but also in a very episodic way.
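As a hedged illustration of that episodic, on-demand model, the sketch below uses boto3 to burst a batch of worker instances for an event-driven forecast run and release them afterward; the AMI ID, instance type, and region are placeholders, and this stands in for, rather than reproduces, the OOI common execution infrastructure.

```python
# Sketch of episodic, on-demand compute: burst worker instances for an
# event-driven model run, then release them. The AMI ID, instance type, and
# region are placeholders; this is not the OOI common execution infrastructure.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

def burst_model_run(worker_count: int) -> list:
    """Launch short-lived workers when an event (say, a hurricane) occurs."""
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder image with the model preinstalled
        InstanceType="c5.4xlarge",
        MinCount=worker_count,
        MaxCount=worker_count,
    )
    return [inst["InstanceId"] for inst in resp["Instances"]]

def release(instance_ids: list) -> None:
    """Return the capacity once the episodic run has finished."""
    ec2.terminate_instances(InstanceIds=instance_ids)

# Example: twenty workers for the duration of the event, then give them back.
# ids = burst_model_run(20)
# ... run the forecast model ...
# release(ids)
```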

Cloud as enabler


I really see that the cloud paradigm is one of the enablers of doing this very efficiently, and it enables us as a software infrastructure project to develop the systems, the architecture, to actually manage this computation from a system’s point of view in a central way.

Gardner: Alexis, because of AMQP and the VMware cloud application platform, it seems to me that you’ve been able to shop around for cloud resources, using the marketplace, because you’ve allowed for interoperability among and between platforms, applications, tools, and frameworks.

Is it the case that leveraging AMQP has given you the opportunity to go to where the compute resources are available at the lowest cost when that’s in your best interest?

Richardson: The dividend of interoperability for the end-user and the end-customer in this platform environment is ultimately portability -- portability through being able to choose where your application will run.

Michael described it very well. A hurricane is coming. Do you want to use the machines provided by the cloud provider here for this price? Do you want to use your own servers? Maybe your neighboring data center has servers available to you, provided those are visible and provided there is this fundamental interoperability through cloud platforms of the type that we are investing in. Then, you will be able to have that choice. And that lets you make these decisions in a way that you could not do before.

Gardner: It’s been mentioned by Alexis and others that this has some features in common with Twitter or Facebook.

We think of the social environment because of the scale, the complexity, and the use of cloud models. But we’re doing far more advanced computational activities here. This is not simply a display of 140 characters based on a rudimentary search, for example. These are high-performance computing (HPC) requests and analyses at the supercomputer level.

So are we combining the best of a social fabric approach and the architecture behind that to what we’ve been traditionally exposed to in high-performance computing and supercomputing, and what does that mean for the future?

Meisinger: This is the direction in which the future will evolve: the combination of proven patterns of interaction, emerging out of how humans interact, applied to high-performance computing. Providing a strong platform, a strong technological footprint that’s not specific to any technology, is a great benefit to the community out there.

Providing a reference architecture and a reference implementation that can solve these problems means that this social network for sensor networks and device computation becomes a pattern that can be leveraged by other interested participants, either by participating in the system directly or indirectly, or by simply taking that pattern and the technologies that come with it and bringing it to the next level in the future. Developing it as one large project, as a coherent whole, really yields a technology stack and an architecture that will carry us far into the future.

Arrott: The incremental change that we're introducing takes the concepts of Facebook and Twitter and the notion of Dropbox, which is the ability to move a file to a shared place so someone else can pick it up later, something that not long ago was really not possible. You had to run an FTP server or put up an HTTP server to accomplish that.

Sharing processes

What we are now adding to the mix is not sharing just artifacts, but we’re actually sharing processes with one another, and then specifically sharing instrumentation. I can say to you, "Here, have a look through my telescope." You can move it around and focus it.

Basically, we introduced the concept of artifacts, or information resources, as well as the concept of a taskable resource, and the new thing that can be shared is the taskable resource.
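A toy sketch of that idea, with entirely invented names and none of the richness of the real OOI agent framework: a taskable resource differs from a shared artifact in that it accepts commands, and every command is checked against the owner's sharing policy.

```python
# Toy model of a taskable resource: unlike a shared file, it accepts commands,
# and access is governed by the owner's policy. All names are illustrative and
# far simpler than the real OOI agent framework.
class TaskableInstrument:
    def __init__(self, name: str, owner: str):
        self.name = name
        self.owner = owner
        self.delegates = set()  # users the owner has shared control with

    def share_with(self, requester: str, user: str) -> None:
        """Only the owner may delegate control of the instrument."""
        if requester != self.owner:
            raise PermissionError("only the owner can delegate control")
        self.delegates.add(user)

    def task(self, user: str, command: str, **params) -> None:
        """Every command is authorized against the owner's sharing policy."""
        if user != self.owner and user not in self.delegates:
            raise PermissionError(f"{user} may not task {self.name}")
        print(f"{self.name} executing {command} for {user} with {params}")

# "Here, have a look through my telescope": the owner delegates, the guest tasks.
scope = TaskableInstrument("hydrophone-07", owner="alice")
scope.share_with("alice", "bob")
scope.task("bob", "set_gain", level=12)
```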

Meisinger: This pattern is very applicable, and it’s not often that a research and construction project of this size has the ability to provide an end-to-end technology solution to this challenge of big data combined with real-time analysis and real-time command and control of the infrastructure.

What I see this evolving into is, first of all, that you can take the solutions built in this project and apply them to other communities that are in need of such a solution. But then it could go further. Why not combine these communities into a larger system? Why not federate or connect all these communities into a larger infrastructure that is based on common ideas and common standards, and that still enables open participation?

It’s a platform where you can plug in your own system or subsystem that you can then make available to whoever is connected to that platform, whoever you trust. So it can evolve into a large ecosystem, and that does not have to happen under the umbrella of one organization such as OOI.

Larger ecosystem

It can happen in a larger ecosystem of connected computing based on your own policies, your own technologies, and your own standards, but where everyone shares a common piece of the same idea and can take whatever they want and not consume what they’re not interested in.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.

You may also be interested in:

Wednesday, August 8, 2012

Infosys unveils Cloud Ecosystem Hub as unified enterprise gateway to hybrid cloud environments

Infosys today launched the Infosys Cloud Ecosystem Hub so enterprises can better create, adopt and govern cloud services across a business ecosystem.

The move shows the demand for managing "cloud of clouds" services, and the rapidly growing need for gaining better control over hybrid services delivery -- for both businesses and cloud services providers. I think Infosys's move also shows that one-size-fits-all public clouds will become behind-the-scenes utilities, and that managing services in a business-ecosystem context is where the real value in cloud adoption will be.

Infosys says that businesses can accelerate time to market of cloud services by up to 40 percent, improve productivity by up to 20 percent, and achieve cost savings of up to 30 percent by using its Cloud Ecosystem Hub.

A unified self-service catalog feature allows users to quickly subscribe to relevant IT and business cloud services across multiple environments. It also helps dynamically provision IT infrastructure and platforms across a hybrid cloud environment in minutes.

The smart brokerage feature of the hub provides an enterprise-wide decision support mechanism to select, compare, and deploy cloud services from across providers. Decisions can be based on evaluation of over 20 parameters, such as quality of service, technology compatibility, regulatory compliance needs, and total cost of ownership (TCO) of application workloads.
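For illustration only, a weighted multi-criteria score of the kind such decision support typically computes might look like the sketch below; the parameters, weights, and provider scores are invented and do not reflect Infosys's actual evaluation model.

```python
# Invented weights, parameters, and scores, purely to illustrate weighted
# multi-criteria comparison; this is not Infosys's actual evaluation model.
WEIGHTS = {
    "quality_of_service": 0.30,
    "technology_compatibility": 0.25,
    "regulatory_compliance": 0.25,
    "total_cost_of_ownership": 0.20,  # scored so that higher means cheaper
}

providers = {
    "provider_a": {"quality_of_service": 8, "technology_compatibility": 7,
                   "regulatory_compliance": 9, "total_cost_of_ownership": 6},
    "provider_b": {"quality_of_service": 7, "technology_compatibility": 9,
                   "regulatory_compliance": 6, "total_cost_of_ownership": 8},
}

def score(metrics: dict) -> float:
    """Weighted sum over 0-10 scores; higher is better."""
    return sum(WEIGHTS[name] * value for name, value in metrics.items())

best = max(providers, key=lambda p: score(providers[p]))
print(best, round(score(providers[best]), 2))
```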

The hub provides a single-window view of the enterprise cloud ecosystem and brings cohesion to what could otherwise be a fragmented IT environment across private and public clouds. It also enables easy monitoring of cloud resource usage, optimizes utilization, and provides consolidated metering and billing, enabling service chargebacks.

According to Vishnu Bhat, Vice-President and Global Head - Cloud, Infosys, "Our clients are dealing with complexities of a fragmented cloud environment. The Infosys Cloud Ecosystem Hub provides organizations a unified gateway to build, manage, and govern their hybrid cloud ecosystem. This solution allows clients to fully realize the benefits from the long-standing promise of the cloud.”

You may also be interested in: