Thursday, August 23, 2012

Legal services leader Foley & Lardner makes strong case for virtual desktops

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.

The latest BriefingsDirect enterprise user IT adoption story centers on how global legal services leader Foley & Lardner LLP has adopted virtual desktops and bring-your-own-device (BYOD) to enhance end-user productivity across its far-flung operations.

We'll see how Foley has delivered applications, data, and services better and with improved control -- even as employees have gained more choices and flexibility over the client devices, user experiences, and applications usage.

Learn more here about adapting to the new realities of client computing and user expectations with Linda Sanders, the CIO, and Rick Varju, Director of Engineering & Operations, both at Foley & Lardner LLP. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:

Gardner: What was "the elephant in the room," when it came to the old way of doing client-side computing? Was there something major that you needed to overcome?

Sanders: Yes, we had to have a reduction in our technology staffing, and because of that, we just didn't have the same number of technicians in the local offices to deal with PCs, laptops, re-imaging, and lease returns -- the standard things that we had done in the past. We needed to look at new ways of doing things, where we could reduce the tech touches, as we call it, and find a different way to provide a desktop to people in a fast, new way.

Varju: From a technical perspective, we were looking for ways to manage the desktop side of our business better, more efficiently, and more effectively. Being able to do that out of our centralized data center made a lot of sense for us.

Other benefits have come along with the centralized data center that weren't necessarily on our radar initially, and that has really helped to improve efficiencies and productivity in several ways.

Gardner: Tell us about your organization at Foley. Linda, how big are you, where do you do business?

Virtualized desktops

Sanders: Foley has approximately 900 attorneys and another 1,200 support personnel. We're in 18 U.S. offices, where we support virtualized desktops. We have another three international offices. At this time, we're not doing virtualized desktops there, but it is in our future.

Gardner: Rick, how has virtual desktop infrastructure (VDI) been an enabler?

Varju: The real underlying benefit is being able to securely deliver the desktop as a service (DaaS). We are no longer tied to a physical desktop and that means you can now connect to that same desktop experience, wherever you are, anytime, from any device, not just to have that easy access, but to make it secure by delivering the desktop from within the secure confines of our data center.

That's what's behind deploying VDI and embracing BYOD at the same time. You get that additional security that wouldn't otherwise be there, if you had to have all your applications and all data reside on that endpoint device that you no longer have control over.

With VMware View and delivering the DaaS from the data center, very little information has to go back to the endpoint device now, and that's a great model for our BYOD initiatives.

Mobile devices

In terms of raw numbers, every attorney in the firm has a mobile device. The firm provides a BlackBerry as part of our standard practice and then we have users who now are bringing in their own equipment. So at least 900 attorneys are taking advantage of mobile connectivity, and most of those attorneys have laptops, whether they are firm issued or BYOD.

Easily 1,500 personnel are taking advantage of some sort of connectivity to the firm through their mobile devices.

Gardner: So as IT and business management, you get better control and a sense of security, and the users get choice and flexibility?

Sanders: That's correct. Before, we were selecting the equipment, providing that equipment to people, and over and over again, we started to hear that that's not what they wanted. They wanted to select the machine, whether it be a PC, a Mac, an iPad, or smartphone. And even if we were providing standard equipment, we knew that people were bringing in their own. So formulating a formal BYOD program worked out well for us.

In our first year, we had 300 people take advantage of that formal program. This year, to date, we have another 200 who have joined, and we are expecting to add another 100 to that.

As Rick mentioned, we did also open this up to some of our senior level administrative management this year and we now have some of those individuals on the program. So that too is helping us, because we don't have to provision and lease that equipment and have our local technology folks get that out to people and be swapping machines.

Now, when we're taking away a laptop, for example, we can put a hosted desktop in and have people using VMware View. They're seeing that same desktop, whether they're sitting in the office or using their BYOD device.

Gardner: Do you have any metrics in terms of how much this all saved you?

Sanders: Over three years, we'll probably be able to reduce our spend by about 22 percent.

Realistic number

We had our business manager within technology calculate what we were spending year after year on equipment, factor in how much tech time was involved, and come up with a realistic number that people could use to go out and purchase equipment over a three-year time frame.

That was the start of it, looking at that breakdown of the internal time, selecting a dollar amount, and then putting together a policy, so that individuals who decided to participate in it would know what the guidelines were.

Our regional technology managers met one on one or in small groups with attorneys who wanted to go on the program, went through the program with them, and answered any questions upfront, which I think really served us well. It wasn’t that we just put something out on paper, and people didn’t understand what they were signing up for.

Those meetings covered all the high points, let them know that this was personal equipment and that, in the end, they're responsible for it should something happen. That was how we put the program together and how we decided to communicate the information to our attorneys.

Gardner: Has something about the DaaS allowed you to extend these benefits beyond just your employees? Is there some aspect of this that helps on that client services equation?

Varju: The ease of mobility and some of the productivity gains make a big difference. Being able to get our attorneys quick access to people and information, no matter where they are and no matter what device they're using, is really important today. That does provide some additional benefit for our attorneys, when it comes to delivering the best possible service we can to our clients.

One of the things that we're looking at now is unified communications, and trying to pull everything to the desktop, all the experiences together, and one of those important components is collaboration.

If we can deliver a tool that will allow attorneys and clients to collaborate on the same document, from within the same desktop view, that would provide tremendous value. There are certainly products out there that will allow you to federate with other organizations. That’s the line of thinking we're looking at now and we'll look to deploy something like that in the near future.

The biggest plus

Sanders: The biggest plus, as Rick mentioned, is that people who are mobile have the same desktop, no matter where they are. As I talked about before, whether they're in the office or out of the office, they have the same experience.

If we have a building shut down, we're not trapped into being unable to deliver a desktop just because people can't get into the building to work inside. They're working from outside and it's just like they are sitting here. That's one of the biggest pluses that we've seen and that we hear from people -- just that availability of the desktop.

Varju: Before deploying VDI and VMware View, we delivered a more generic desktop for remote access. So to Linda’s point, being able to have your actual desktop follow you around on whatever device you are using is big. Then it's the mobility, even from within the office.

When an attorney signs up for the Technology Allowance Program, we provide them a thin client on their desk, which they use when they're sitting in their office. Then, as part of the Technology Allowance Program and Freedom of Choice, they purchase whatever mobility technology suits them and they can use that technology when working out of conference rooms with clients, etc.

So remote access and having their own personal desktop follow them around, the ability to move and work within the office, whether in a conference room, in a lobby, you name it, those are powerful features for the attorneys.

We're definitely ahead of the curve within the legal vertical. Other verticals have ventured into this. Two in particular have avoided it longer than most, the healthcare and financial industries. But without a doubt, we're ahead of the curve amongst our peers, and there are some real benefits that go along with being early adopters.

Gardner: Explain for me, Rick, how you went about architecting this solution, and perhaps a little bit about the journey, and both good and bad experiences there?

Process and strategy

Varju: We've been virtualizing servers for quite some time now. Our server environment is just over 75 percent virtualized. Because of the success we have had there, and the great support from VMware, we felt that it was a natural fit for us to take a close look at VMware View as a virtual desktop solution.

We started our deployment in October of 2009. So we started pretty early, and as is often the case with being an early adopter, you're going to go through some pain being among the first to do what you are doing.

In working with our vendor partners, VMware, as well as our storage integrators, what we learned early on is that there wasn't a lot of real-world experience for us to draw from when laying out the design for the underlying infrastructure. So we did a lot of crawling before we walked, walking before we ran, and a lot of learning as we went.

But to VMware’s credit, they have been with us every step of the way and have really taken joint ownership and joint responsibility of this project with Foley. Whenever we have had issues, they have been very quick to address those issues and to work with us. I can't say enough about how important that business relationship is in a project of this magnitude.

While there was certainly some pain in the early stages of this project and trying to identify what infrastructure components and capacities needed to be there, VMware as a partner truly did help us get through those, and quite effectively.

The PCoIP (PC-over-IP) protocol is critical to the overall VDI solution and to delivering DaaS, whether it's inside the Foley organization over the WAN links that we have between our offices, or for an attorney who is working from home, at a Starbucks, or you name it. PCoIP as a protocol is optimized to work over even the lowest-bandwidth connections.

The fact that you're just sending changes to screens really does optimize that communication. So the end result is that you get a better user experience with less bandwidth consumption.
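
To picture what "just sending changes to screens" means in practice, here is a minimal, hypothetical Python sketch of change-only updates: the screen is treated as a grid of tiles and only tiles that differ from the previous frame are transmitted. This is a conceptual illustration, not the actual PCoIP protocol; the tile model, transport object, and helper names are assumptions.

```python
# Conceptual sketch of change-only screen updates (not the real PCoIP protocol).
# The framebuffer is modeled as a 2D grid of tile values; names are hypothetical.

def changed_tiles(prev_frame, curr_frame):
    """Yield (row, col, tile) for every tile that differs from the previous frame."""
    for r, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for c, (prev_tile, curr_tile) in enumerate(zip(prev_row, curr_row)):
            if prev_tile != curr_tile:
                yield r, c, curr_tile

def send_update(conn, prev_frame, curr_frame):
    """Send only the tiles that changed; an unchanged screen costs almost nothing."""
    updates = list(changed_tiles(prev_frame, curr_frame))
    conn.send({"updates": updates})
    return len(updates)

class PrintConn:
    """Stand-in transport that just reports how much would go over the wire."""
    def send(self, payload):
        print(f"sent {len(payload['updates'])} changed tiles")

prev = [[0, 0], [0, 0]]                  # tiny 2x2 "framebuffer" of tile hashes
curr = [[0, 1], [0, 0]]                  # only one tile changed between frames
send_update(PrintConn(), prev, curr)     # transmits a single tile, not the whole screen
```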

Freedom of choice

Sanders: The success that we've had, as we have spoken about throughout this call, has been the ability to deliver that desktop and to have attorneys speak to their peers and let them know. Many times, we have attorneys stop us in the hallway to find out how they too can get on a hosted desktop.

Leveraging the BYOD program helped us, giving people that freedom of choice, and then providing them with a work desktop that they can access from wherever.

We're really looking at unified communications. One of the things that I'm very interested in is video at the desktop. It's something that I am going to be looking at, because we use video conferencing extensively here, and people really like that video connection.

They want to be able to do video conferencing from wherever they are, whether it's in a conference room, outside the office, on their laptop, on a smartphone. Bringing in that unified communication is going to be one of the next things we're going to focus on.

Varju: Cloud computing is certainly an interesting topic and one that you can spend a day on, in and of itself. At Foley, any time we look at a change in technology, especially the underlying infrastructure, we always take a look at what cloud services are available and have to offer, because it's important for us to keep our eye on that.

There is another area where Foley is doing things differently than a lot of our peers, and that's in the area of document management. We're using a cloud-based service for document management now. Where VMware View and VMware, as an organization, will benefit Foley as we move forward is probably more along the lines of the Horizon product, where we can pull our SaaS-based applications or on-premises applications all together in a single portal.

It all looks the same to our users, it all opens and functions just as easily, while also being able to deliver single sign-on and two-factor authentication. Just pulling the whole desktop together that way is going to be really beneficial. Virtualizing the desktop, virtualizing our servers, those are key points in getting us to that destination.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.


Wednesday, August 22, 2012

VMware CTO Steve Herrod on how the software-defined datacenter benefits enterprises

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.

Get the latest announcements about VMware's cloud strategy and solutions by tuning into VMware NOW, the new online destination for breaking news, product announcements, videos, and demos at: http://vmware.com/go/now.
In advance of next week's VMworld conference in San Francisco, I recently sat down with Steve Herrod, Chief Technology Officer and Senior Vice President of Research & Development at VMware.

Our discussion hinges on the intriguing concept of the software-defined datacenter. We look at how some of the most important attributes of datacenter capabilities and performance are now squarely under the domain of software enablement.

A top technology leader at VMware, Herrod has championed this vision of the software-defined datacenter and how the next generation of foundational IT innovation is largely being implemented above the hardware.

For example, those who are now building and managing datacenters are gaining heightened productivity, delivering far better performance, and enjoying greater ease in operations and management -- all thanks to innovations at the software-infrastructure level.

Join the discussion here and further explore how advances in datacenter technologies and architecture are -- to an unprecedented extent -- being driven primarily through software. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: We've heard a lot over the decades about improving IT capabilities and infrastructure management, but it seems that many times we peel back a layer of complexity and we get some benefits, and we find ourselves like the proverbial onion, back at yet another layer of complexity.

Complexity seems to be a recurring inhibitor. I wonder if this time we're actually at a point where something is significantly different. Are we really gaining ground against complexity at this point?

Herrod: It's a great question, because complexity has long been associated with IT, and the question is why we'll do it differently this time. I see two things happening right now that give us a great shot at this.

One is purely on expectations. All of the opportunities we have as consumers to work with cloud computing models have opened up our imagination as to what we should expect out of IT and computing datacenters, where we can sign up for things immediately, get things when we want them, and pay for what we use. All those great concepts have set our expectations differently.

A good shot

Simultaneously, a lot of changes on the technology side give us a good shot at implementing it. When you combine technology that we'll talk about with the loosened-up imagination on what can be, we're in a great spot to deliver the software-defined datacenter.

Gardner: You mentioned cloud and this notion that it’s a liberating influence. Is this coming from the technologists or from the business side? Is there a commingling on that concept quite yet?

Herrod: It’s funny. I see it coming from the business side, which is the expectation of an individual business unit launching a product. They now have alternatives to their own IT department. They could go sign up for some sort of compute service or software-as-a-service (SaaS) application. They have choices and alternatives to circumvent IT. That's an option they didn't have in the past.

Fundamentally, it comes down to each of us as individuals and our expectations. People are listening to this podcast when they want to, quickly downloading it. This also applies to signing up for email, watching movies, and buying an app on an app store. It's just expected now that you can do things far more agilely, far more quickly than you could in the past, and that's really the big difference.

Gardner: Tech users are getting higher expectations based on what they encounter on their consumer side of technology consumption. We see what the datacenters are capable of from the likes of Google and Facebook. Is it possible for enterprises to also project that sort of productivity and performance onto what they're doing, and maybe now that we've gone through an iteration of these vast datacenters, to do it even better?

Herrod: I have a lot of friends at Facebook, Zynga, and Google, running the datacenters there, and what’s exciting for me is that they have built a fully software-defined datacenter. They're doing a lot of the things we are talking about here. But there are two unique things about their datacenters.

One is that they have hundreds or even thousands of PhDs who are running this infrastructure. Second, they're running it for a very specific type of application. To run on the Google datacenter, you write your applications a very specific way, which is great for them. But when you go into the business world, they don't have legions of people to run the infrastructure, and they also have a broad set of applications that they can’t possibly consider rewriting.

So in many ways, I see what we're doing is taking the lesson learned in those software-defined datacenters, but bringing it to the masses, and bringing it to companies to run all of their applications and without all of the people cost that they might need otherwise.

Gardner: Let’s step back for some context. How did we get here? It seems that hardware has been sort of the cutting edge of productivity, when we think of Moore’s Law and we look at the way that storage, networks, and server architecture have come together to give us the speeds and feeds that have led to a lot of what we take for granted now. Let’s go through that a little bit and think about why we're at a point where that might not be the case anymore.

Herrod: I like to look at how we got to where we are. I think that's the key to understanding where we're likely to go from here.

History of IT decisions

We started VMware out of a university, where we could take the time to study history and look at what had happened. I liked looking at existing datacenters. You can look through the datacenter and see the history of IT decisions of the past.

It's traditionally been the case that a particular new need led the IT department to go out and buy the right infrastructure for that new need, whether it’s batch processing, client/server applications, or big web farms. But these individually made decisions ended up creating the silos that we all know about that exist all over datacenters.

They now have the group that manages the mainframe, the UNIX administration group, and the client PC group, and none of them is using common people or common tools as much as they certainly would like to. How we got to where we are was through isolated decisions made for the right thing at the right time, without recognizing the opportunity to optimize across a broader set of the datacenter.

The whole concept of software-defined datacenters is looking holistically at all of the different resources you have and making them equally accessible to a lot of different application types.

Gardner: Earlier, I used the metaphor of an onion. You peel back complexity and you get more. But when it comes to the architecture of datacenters, it seems that the right comparison might be a snowball, which is layered on another layer, or it has been rolling and gathering as it goes, but not rationalized, not looked at holistically.

Are there some sorts of imperatives now that are driving people to do that? We talked about the cloud vision, but maybe it’s security, maybe it’s the economics, maybe it’s the energy issues, or maybe it's all those things together.

Herrod: It’s a little of each. First of all, I like the onion analogy, because it makes you cry, and I think that’s also key. But it’s a combination of requirements coming in at the same time that's really causing people to look at it.

Going back to the original discussion, it starts with the fact that there are choices now. Every single day you hear about a new case where a business unit or an employee is able to circumvent IT to scratch the itch they have for some particular type of technology, whether it's using Dropbox instead of the file servers that the company has, buying their own device and bringing it in, or just signing up for Amazon EC2, instead of using their local datacenter. These are all examples of them being able to go around IT.

But what often happens subsequently is that, when a security problem happens, when you realize that you are not in compliance, IT is left holding the bag. So we get an environment here where the user demand can be handled other ways, but IT has to be able to compete with those.

We have to let IT be a service provider and be able to be as responsive with those, so that they can avoid people going around them. But they still need to be responsible to the business when it comes time to show that Sarbanes-Oxley (SOX) compliance is appropriate or to make sure that your customer records aren’t leaked out to everyone else on the Internet.

That unique balance between user choice and IT control is something we've all seen over the last several decades, and it's showing up again at an even larger scale.

New competition


Gardner: As you pointed out, Steve, IT isn't just competing against itself. That is to say, maybe a 5 percent or 10 percent improvement over how well it did last year will be viewed as very progressive. But they're competing now against other datacenter architects. Maybe it's a SaaS provider, maybe it's a cloud provider, maybe it's a managed service provider (MSP) or a telco that's now offering additional services.

We're really up against this notion that if you don’t architect your datacenter with that holistic software-defined mentality, and someone else does that, you're in trouble.

Herrod: It’s a great point. There are rate cards now for what you can use something else for. You might pay 7 cents per hour for this, or "this much" per transaction. IT departments in general have not traditionally had a good way of, first, even knowing how much they are costing, but second, optimizing to be competitive. So there's this awareness now of how much I'm spending and how long it takes. These metrics are causing this.

Gardner: Let’s revisit the context and the history here, looking at virtualization in particular. We've seen it extend beyond servers to data, storage, and also networking. Is this part of what you've got in your vision of software defined? Is it strictly virtualization, or does it encompass more? Help me understand how you've progressed in your thinking along these lines, particularly in regard to virtualization?

Herrod: We'll step back a little bit. VMware, over the last 13 years or so, has done a very good job of completely optimizing how servers are used in the datacenter. You can provision a new virtual machine (VM) in seconds. The cost has gone down in orders of magnitude. We've really done a good job on the compute and memory aspect of a datacenter.

But as you said, a couple of things have to happen from there. It's absolutely crucial to look at the breadth of things that are involved in the datacenter. We talk to customers now, and often they say, "Great, you've just lowered the cost and time taken to provision a new server. But when I put this in production, by the way, I care what LUN it ends up on, I have to look at what VLAN is there, and if it's in the right section of my firewall setup."

It might take seconds to provision a VM, but then it takes five days to get the rest of the solutions around it. So we see, first of all, the need to get the entire datacenter to be as flexible and fast moving as the pure server components are right now.

Again, if you look at the last couple of years, I would rate the industry -- ourselves and others -- as moving forward quite well on the storage side of things. There are still some things to do for sure, but storage, for the most part, has gotten a good head start on being fully virtualized and automated.

The big buzz around the industry right now has been the recognition that the network is the huge remaining barrier to doing what you want in your datacenter. Plenty of startups and all kinds of folks are working on software-defined networking. In fact, that's the model for the term software-defined datacenter, because once networking, this big inhibitor, follows suit, you'll be opened up to having a truly planned datacenter solution in place.

Now, we can break that down a little bit. It's important to talk about the technology piece of this. But when I say software-defined, I really look at three phases of how software comes in and morphs this existing hardware that you have.

The first step

The first step is to abstract away what people are trying to use from how it is being implemented. That's the core of what virtual even means, separating the logical from the physical. It gives you hardware independence. It enables basic mobility and all sorts of other good things.

The second phase is when you then pool all of these abstracted resources into what we call resource pools. Anyone who uses VMware software knows that we create these great clusters of computing horsepower and we allow vMotion and mobility within it.

But you need to think about that same notion of aggregation of resources at the storage and networking levels, so they become this great pool of horsepower that you can then dole out quite effectively. So after you've abstracted and pooled, the final phase is how you now automate the handling of this. This is where the real savings and speed come from.

Once you have pools of resources, when a new request comes in, you should be able to allocate storage, security, networking, and CPU very quickly. Likewise, when it goes away, you should be able to remove it and put it back into the pool.

That's a bit of a mouthful, but that's how I see the expansion. It first goes from just compute into storage, networking, security, and the other parts of the datacenter. Then simultaneously, you're abstracting each of these resources, pooling them, and then automating them.
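
As a rough way to picture those three phases, here is a minimal Python sketch, assuming hypothetical class and method names rather than any actual VMware API: hosts are abstracted into generic capacity, the capacity is pooled, and requests are then granted and returned automatically.

```python
# Illustrative abstract -> pool -> automate sketch; the names are hypothetical, not a VMware API.

class ResourcePool:
    """Aggregates abstracted capacity (CPU, memory, storage, network) across many hosts."""
    def __init__(self):
        self.capacity = {"cpu_ghz": 0, "mem_gb": 0, "storage_tb": 0, "net_gbps": 0}

    def add_host(self, **host):
        # Phases 1-2: each physical host is abstracted into generic units and pooled.
        for key, amount in host.items():
            self.capacity[key] += amount

    def allocate(self, request):
        # Phase 3: automation -- grant the request only if every resource is available.
        if any(self.capacity[k] < v for k, v in request.items()):
            raise RuntimeError("insufficient pooled capacity")
        for k, v in request.items():
            self.capacity[k] -= v
        return dict(request)

    def release(self, grant):
        # When the workload goes away, its resources go back into the pool.
        for k, v in grant.items():
            self.capacity[k] += v

pool = ResourcePool()
pool.add_host(cpu_ghz=64, mem_gb=512, storage_tb=20, net_gbps=20)
lease = pool.allocate({"cpu_ghz": 8, "mem_gb": 32, "storage_tb": 1, "net_gbps": 1})
pool.release(lease)   # capacity is restored the moment the request goes away
```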

Gardner: What's really fascinating to me are the benefits you get by abstracting to a virtualization and software-defined level -- the ability to implement with greater ease -- but that comes with underlying benefits around operations and management.

It seems to me that you can start to dial up and down, demonstrate elasticity at a far greater level, almost at that data-center level, looking at the service-level agreements (SLAs) and the key performance indicators (KPIs) that you need to adhere to and defining your datacenter success through a business metric, like an SLA.

Does it ring true with you that we're talking about some real management and operational efficiencies, as well as implementation efficiencies?

Herrod: It is, Dana, and we talk about it a few different ways. The transformation of datacenters, as we got started, was all about cost savings and capital expenses in financial terms. Let's buy fewer servers. "Let's not build another datacenter."
But the second phase, and where most customers are today, is all about operational efficiency. Not only am I buying less hardware, but I can do things where I'm actually able to satisfy, as you said, the KPIs or the SLAs.

Doing even more


I can make sure that applications are up and running with the level of availability they expect, with less effort, with fewer people, and with easier tools. And when you go from capital expense savings to operational improvements, you impact the ability for IT to do even more.

To take that one level further, whenever I hear people talk about cloud computing -- and everyone talks about this with all sorts of different impressions in mind -- I think of cloud as simply being about more speed. You can do something more quickly. You can expand something more quickly. And that's what this third phase after capital and operational savings is about, that agility to move faster.

As businesses’ success ties so closely to how IT does, the ability to move faster becomes your strategic weapon against someone else. Very core to all this is how can we operate more efficiently, while satisfying the specific needs of applications in this new datacenter.

Gardner: Another area that I hear about benefiting from this software-defined datacenter is the ability to better reduce and manage risk, particularly around security issues. You're no longer dealing with multiple parties, like the group overseeing UNIX, the group overseeing PCs, and the group doing the x86 architectures. Process cracks and security issues seem more likely to develop under those circumstances.

But when you have got a more organized overview of management operations and architecting at a similar level, you can instantiate the best practices around security. Please address this issue of security as another fruit to be harvested from a software-defined datacenter.

Herrod: Security means a lot of different things, and it has been affected by a number of different aspects.

First of all, I agree that the more you can have a homogenous platform or a homogenous team working on something, the less variation and process you end up with, exactly as you said, Dana. That can allow you to be more efficient.

This is a replacement for the traditional world of ITIL, where they had to try to create some standard across very different back ends. That's a natural progression for getting rid of some of the human errors that come into problems.

A more foundational thing that I am excited about with the software-defined datacenter is how, rather than security being these physical concepts that are deployed across the datacenter today, you can really think of security logically as wrapping up your application. You can do some pretty interesting new things.

A quick segue on that -- the way most security works in datacenters today is through statically placed appliances, whether they're firewalls, intrusion detection, or something else. Then the onus is on you to fit your application in the right part of the datacenter to get the right level of protection that you have, and hopefully it doesn’t move out of that protection zone.

Follows the application

What we're able to deliver with the software-defined datacenter is a way that security is a trait associated with the application, and it essentially wraps and follows the application around. You've virtualized your firewall and you've built it into the fabric of how you're automating deployments. I see that as a way to change the game on how tight the security can be around an application, as well as making sure it's always around there when you deploy it.
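
A rough way to picture security wrapping the application: the application definition carries its own firewall rules, so the same policy is applied wherever the workload is deployed or moved. The sketch below is a hypothetical illustration in Python; the Host class and its methods are stand-ins, not a specific VMware or firewall product API.

```python
# Hypothetical sketch: the security policy travels with the application definition.

class Host:
    """Minimal stand-in for a virtualized host with a software firewall."""
    def __init__(self, name):
        self.name, self.rules = name, {}

    def create_vm(self, image):
        return f"{image}@{self.name}"

    def apply_firewall(self, vm, rules):
        self.rules[vm] = list(rules)          # rules attach to the workload, not to a rack location

app = {
    "name": "claims-portal",
    "image": "claims-portal-v3",
    "firewall": [                             # the policy is a trait of the app itself
        {"allow": "tcp/443", "from": "any"},
        {"allow": "tcp/1521", "from": "app-tier-only"},
    ],
}

def deploy(app, host):
    vm = host.create_vm(app["image"])
    host.apply_firewall(vm, app["firewall"])  # the policy wraps the app wherever it lands
    return vm

deploy(app, Host("esx-01"))
deploy(app, Host("esx-07"))                   # redeployed elsewhere: the same rules follow it
```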

Gardner: For end users the proof is in how they actually consume, relate to, and interact with the applications. Is there something about the applications specifically that the software-defined datacenter brings, a higher level of user productivity benefits? What's really going to be noticeable for the application level to end users?

Herrod: That's a great question. I'm an infrastructure guy, as are probably many people listening here, and it’s easy to forget that infrastructure is simply a means to an end. It's the way that you run applications that ultimately matters. So you have to look at what an application is and what its ideal state looks like. The idea of the software-defined datacenter is to optimize that application experience.

That very quickly translates into how quickly can I get my application from the time I want it until it's running. It dictates how often this application is up, what kind of scale it can handle as more people come in, and how secure it is. Ultimately, it's about the application. I believe the software-defined datacenter is the way to optimize that application experience for all the users.

Gardner: Steve, how about not just repaving cow paths in terms of how we deploy existing types of applications? Is there something inherent in a software-defined datacenter benefit that will work to our advantage on innovative new types of applications?

They could be for high performance computing, big data and analytics, or even when we go to mobile and we have location services folded into some of the way that applications are served up, and there is sort of a latency sensitive portion to this. Are there new types of apps that will benefit from this software-defined architecture?

Herrod: This is one of the most profound parts, if we get it right. I've been talking about can we collapse the silos that were created. Can we get all of our existing apps onto this common platform? We're doing quite well on that. We are at a point where, depending on who you listen to, about 60 percent of all server applications are running virtual, which is pretty amazing. But that also means there is 40 percent that aren’t. So I spend a lot of time understanding why they might not be today.

Part of it is that just as businesses get more comfortable and get there, their business critical apps will get onto the system, and that's working well. But there are applications that are emerging, as you talked about, where if we're not careful, they'll create the next generation of silos that we'll be talking about 10 years from now.

I see this all the time. I'll visit a company that has a purely virtualized pool, but they have also created their grid for doing some sort of Monte Carlo simulations or high-performance computing. Or they have virtualized everything except for their unified communication environment, which has a special team and hardware allocated to it.

We spend quite a bit of time right now looking at the impediments to having those run on top of virtualization, which might be performance related or something else. Then going beyond impediments to how can we make them even better when they are run on top of the virtualized platform.

Great applications


Some of the really interesting things we're able to show now with our partners are things I would have never dreamed of as great candidates when we started the company. But we're able to satisfy very strict real-time requirements, which means we can run some great applications used in various sorts of stock trading, but also used in things like voice over IP (VoIP) or video conferencing.

Another big area that's liable to create the next round of silos, if we're not careful, is the big data and Hadoop world. Lots of customers are kicking the tires and creating special clusters and teams to work on that. But just recently, we've shown that the performance of Hadoop on top of vSphere, our virtualization platform, can be great.

We can even show that we can make it far easier to set up. We can make Hadoop more available, meaning it won’t crash as often. And we can even do things where we make it more elastic than it already is. It can suck up as many resources in the software-defined datacenter as it wants, when it needs them, but it can also give them all back when it's not using them.

It's really exciting to look across all these apps. At this point, I don't see a reason why we can't get almost any type of app that we're looking at today to fit into the software-defined datacenter model.

Gardner: That’s exciting, when we don’t have any of the stragglers or large portions of business functions that are cast off. It seems to me that we've reached the capability of mirroring the entire datacenter, whether it’s for purposes of business continuity or disaster recovery (DR), or backup and recovery. It gives us the choice of where to locate these resources, not at the individual server, virtual machine level, or application level, but really to move the whole darn datacenter, if that’s important, without a penalty.

For our last blue-sky direction with this conversation, are we at the point where we have fungibility, if you will, of datacenters, or are we getting to that point in the near future, where we can decide at a moment’s notice where we're going to actually put our datacenter, almost location independent?

Herrod: It’s a ways out, before we're just casually moving datacenters around, for sure. But I have seen some use cases today that are showing what's possible, and maybe I'll just give you a couple of examples.

DR has long been one of the real pains for IT to deal with. They have to replicate things across the country and keep two datacenters completely in sync, literally the same hardware, the same firmware layer, and all of that that goes into it.

Very rapidly, this notion of DR has been a driving reason for people to virtualize their datacenter. We have seen many cases now, where you're able to failover your entire datacenter, effectively copying the whole datacenter over to another one, keeping the logical constructs in place, but hosting in a completely different area.

To get that right, your storage needs to be moved, your network identities need to be updated, and those are things that you can script and do in an automated way, once you've virtualized the whole datacenter.
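
A minimal sketch of the kind of scripted failover Herrod describes, assuming three hypothetical steps: promote the replicated storage at the target site, power on the replica VMs, and re-point the network identities. Every class and function here is a placeholder for whatever storage, DNS, and hypervisor tooling a site actually uses.

```python
# Hypothetical DR failover sketch; the classes stand in for real storage, DNS, and hypervisor tooling.
from dataclasses import dataclass

@dataclass
class VM:
    hostname: str

class Site:
    def __init__(self, name):
        self.name = name

    def promote_replica_storage(self):
        # Make the replicated volumes writable at the recovery site.
        print(f"{self.name}: replica storage promoted")

    def power_on_replica(self, vm):
        # Bring up the copied virtual machine and return its new address.
        print(f"{self.name}: {vm.hostname} powered on")
        return f"10.1.0.{abs(hash(vm.hostname)) % 250 + 1}"

def update_dns(hostname, ip):
    # Re-point the network identity so clients find the recovered VM.
    print(f"DNS: {hostname} -> {ip}")

def failover(vms, target_site):
    target_site.promote_replica_storage()
    for vm in vms:
        update_dns(vm.hostname, target_site.power_on_replica(vm))

failover([VM("erp-db"), VM("erp-app")], Site("dr-site"))
```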

Fun example


Another really fun example I see more and more now is with mergers and acquisitions. We've seen several cases where one company buys another, both had fully virtualized their datacenters, and they could put the datacenter from one company on a giant storage drive and begin to bring it up on the other side, once they copied it over there.

So the entire datacenter isn't moved yet, but I think there are clear indications of once you separate out where something runs and how it runs from what you are really after, it opens up the door for a lot of different optimizations.

Gardner: We're coming up on the end of our time, but we also have the big annual VMworld show in San Francisco coming up toward the end of August. I know you can't pre-announce anything, but perhaps you can give us some themes. We've talked about a lot of things here today, but are there any particular themes that we have hit on that you think are going to be more impactful or more important in terms of what we should expect at VMworld?

Herrod: It will be exciting as always. We have more than 20,000 people expected. What I'm doing here is talking about a vision and generalities of what's happening, but you can certainly imagine that what we will be showing there will be the realities -- the products that prove this, the partnerships that are in place that can help bring it forward, and even some use cases and some success stories.

So expect it to be certainly giving more detail around this vision and making it very real with announcements and demonstrations.

Gardner: Last question, if I'm a listener here today, I'm intrigued, and I want to start thinking about the datacenter at the software-defined level in order to generate some of the benefits that we have been discussing and some of the vision that we have been painting, what’s a good way to start? How do you begin this process? What are a few foundational directives or directions that you recommend?

Herrod: I think it can sound very, very disruptive to create a new software-defined datacenter, but one of the biggest things that I have been excited about in this technology versus others is that there are a set of steps that you go through, where you're able to get some value along the way, but they are also marching you toward where you ultimately end up.

So to customers who are doing this, presumably most of you have done some basic virtualization, but really you need to get to the point where you are leveraging the full automation and mobility that exists today.

Once you start doing that, you'll find that it obviously is showing you where things can head. But it also changes some of the processes you use at the company, some of the organizational structures that you have there, and you can start to pave the way for the overall datacenter to be virtualized, as you take some of these initial steps.

It's actually very easy to get started. You gain benefits along the way. Your existing applications and hardware work. So that would be my real entreaty -- use what exists today and get your feet wet, as we deliver the next round heading forward.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.
Get the latest announcements about VMware's cloud strategy and solutions by tuning into VMware NOW, the new online destination for breaking news, product announcements, videos, and demos at: http://vmware.com/go/now.

Tuesday, August 21, 2012

New levels of automation and precision needed to optimize backup and recovery in virtualized environments

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: Quest Software.

The benefits of server virtualization are clear for many more companies now as they reach higher percentages of workloads supported by virtual machines (VMs). But the complexity impacts on other IT functions can also ramp up quickly, in some cases jeopardizing the overall benefits.

When it comes to the relationship between increasingly higher levels of virtualization and the need for new data backup and recovery strategies, for example, the impact can be a multiplier of improvement when both are done properly and in context with one another.

The next BriefingsDirect enterprise IT discussion then focuses on how virtualization provides an excellent on-ramp to improved data lifecycle benefits and efficiencies. What's more, the elevation of data to the lifecycle efficiency level also forces a rethinking of the culture of data, of who owns data, and when, and who is responsible for managing it in a total lifecycle.

This is different from the previous and current system of data management as a fragmented approach, with different oversight for data across far-flung instances and uses.

Here to share insights on where the data availability market is going -- and how new techniques are being adopted to make the value of data ever greater -- we're joined by John Maxwell, Vice President of Product Management for Data Protection, at Quest Software. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Quest Software is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:

Gardner: Why has server virtualization become a catalyst to data modernization?

Maxwell: I think it’s a natural evolution, and I don’t think it was even intended on the part of the two major hypervisor vendors, VMware and Microsoft with their Hyper-V. As we know, five or 10 years ago, virtualization was touted as a means to control IT costs and make better use of servers.

Utilization was in single digits, and with virtualization you could get it much higher. But the rampant success of virtualization impacted storage and the I/O where you store the data.

Upped the ante

If you look at the announcements that VMware made around vSphere 5 and storage, and at the recent launch of Windows Server 2012 Hyper-V, where Microsoft even upped the ante and added support for Fibre Channel with their hypervisor, storage is at the center of the virtualization topic right now.

It brings a lot of opportunities to IT. Now, you can separate some of the choices you make, whether it has to do with the vendors that you choose or the types of storage, network-attached storage (NAS), shared storage and so forth. You can also make the storage a lot more economical with thin disk provisioning, for example.

There are a lot of opportunities out there that are going to allow companies to make better utilization of their storage, just as they've done with their servers. It’s going to allow them to implement new technologies without necessarily having to go out and buy expensive proprietary hardware.

From our perspective, the richness of what the hypervisor vendors are providing in the form of APIs, new utilities, and things that we can call on and utilize, means there are a lot of really neat things we can do to protect data. Those didn't exist in a physical environment.

It’s really good news overall. Again, the hypervisor vendors are focusing on storage, and so are companies like Quest when it comes to protecting that data.

Gardner: What is it about data that people need to think differently about?

Maxwell: First of all, people shouldn’t get too complacent. We've seen people load up virtual disks, and one of the areas of focus at Quest, separate from data protection, is in the area of performance monitoring. That's why we have tools that allow you to drill down and optimize your virtual environment from the virtual disks and how they're laid out on the physical disks.

And even hypervisor vendors -- I'm going to point back to Microsoft with Windows Server 2012 -- are doing things to alleviate some of the performance problems people are going to have. At face value, your virtual disk environment looks very simple, but sometimes you don’t set it up or it’s not allocated for optimal performance or even recoverability.

There's a lot of education going on. The hypervisor vendors, and certainly vendors like Quest, are stepping up to help IT understand how these logical virtual disks are laid out and how to best utilize them.

See it both ways

At face value, virtualization makes it really easy to go out and allocate as many disks as you want. Vendors like Quest have put in place solutions that make it so that within a couple of mouse clicks, you can expose your environment, all your virtual machines (VMs) that are out there, and protect them pretty much instantaneously.

From that aspect, I don't think there needs to be a lot of thought, as there was back in the physical days, of how you had to allocate storage for availability. A lot of it can be taken care of automatically, if you have the right software in place.

That said, a lot of people may have set themselves up, if they haven’t thought of disaster recovery (DR), for example. When I say DR, I also mean failover of VMs and the like, as far as how they could set up an environment where they could ensure availability of mission-critical applications.

For example, you wouldn't want to put everything, all of your logical volumes, all your virtual volumes, on the same physical disk array. You might want to spread them out, or you might want to have the capability of replicating between different hypervisors, physical servers, or arrays.

Gardner: I understand that you've conducted a survey to try to find out more about where the market is going and what the perceptions are in the market. Perhaps you could tell us a bit about the survey and some of the major findings.

Maxwell: One of the findings that I find most striking, since I have been following this for the past decade, is that our survey showed that 70 percent of organizations now consider at least 50 percent of their data mission critical.

That may sound ambiguous at first, because what is mission critical? But from the context of recoverability, that generally means data that has to be recovered in less than an hour and/or has to be recovered within an hour from a recovery-point perspective.

This means that if I have a database, I can’t go back 24 hours. The least amount of time that I can go back is within an hour of losing data, and in some cases, you can’t go back even a second. But it really gets into that window.

I remember in the days of the mainframe, you'd say, "Well, it will take all day to restore this data, because you have tens or hundreds of tapes to do it." Today, people expect everything to be back in minutes or seconds.

The other thing that was interesting from the survey is that one-third of IT departments were approached by their management in the past 12 months to increase the speed of the recovery time. That really dovetails with the 50 percent of data being mission critical. So there's pressure on the IT staff now to deliver better service-level agreements (SLAs) within their company with respect to recovering data.

Terms are synonymous

The other thing that's interesting is that data protection and the term backup are synonymous. It's funny. We always talk about backup, but we don't necessarily talk about recovery. Something that really stands out now from the survey is that recovery or recoverability has become a concern.

Case in point: 73 percent of respondents, or roughly three quarters, now consider recovering lost or corrupted data and restoring those mission critical applications their top data-protection concern. Only 4 percent consider the backup window the top concern. Ten years ago, all we talked about was backup windows and speed of backup. Now, only 4 percent considered backup itself, or the backup window, their top concern.

So 73 percent are concerned about the recovery window, only 4 percent about the backup window, and only 23 percent consider the ability to recover data independent of the application their top concerns.

Those trends really show that there is a need. The beauty is that, in my opinion, we can get those service levels tighter in virtualized environments easier than we can in physical environments.

Gardner: What's the relationship between moving toward higher levels of virtualization and cutting costs?

Maxwell: You have to look at a concept that we call tiered recovery. That's driven by the importance now of replication in addition to traditional backup, and new technology such as continuous data protection and snapshots.

That gets to what I was mentioning earlier. Data protection and backup are synonymous, but it's a generic term. A company has to look at which policies or which solutions to put in place to address the criticality of data, but then there is a cost associated with it.

For example, it's really easy to say, "I'm going to mirror 100 percent of my data," or "I'm going to do synchronous replication of my data," but that would be very expensive from a cost perspective. In fact, it would probably be just about unattainable for most IT organizations.

Categorize your data

What you have to do is understand and categorize your data, and that's one of the focuses of Quest. We're introducing something this year called NetVault Extended Architecture (NetVault XA), which will allow you to protect your data based on policies, based on the importance of that data, and apply the correct solution, whether it's replication, continuous data protection, traditional backup, snapshots, or a combination.

You can't just do this blindly. You have got to understand what your data is. IT has to understand the business, and what's critical, and choose the right solution for it.
What we see now are the traditional people who were responsible for physical storage taking over the responsibility of virtual storage.


... Because of the mission criticality of data, they're going from being people who looked at data as just a bunch of volumes or arrays, logical unit numbers (LUNs), to "these are the applications and this is the service level associated with the applications."

When they go to set up policies, they are not just thinking of, "I'm backing up a server" or "I'm backing up disk arrays," but rather, "I'm backing up Oracle Financials," "I'm backing up SAP," or "I'm backing up some in-house human resources application."

Adjust the policy

And the beauty of where Quest is going is, what if those rules change? Instead of having to remember all the different disk arrays and servers that are associated with that, say the Oracle Financials, I can go in and adjust the policy that's associated with all of that data that makes up Oracle Financials. I can fine-tune how I am going to protect that and the recoverability of the data.
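
To make the tiered, policy-driven idea concrete, here is a hedged sketch that maps data categories to recovery objectives and protection methods and then looks up the plan for an application such as Oracle Financials. The tier names, objective values, and structure are illustrative assumptions, not actual NetVault XA configuration.

```python
# Illustrative tiered-recovery policy; values and structure are assumptions,
# not actual NetVault XA configuration.

POLICIES = {
    "mission_critical": {"rpo_minutes": 60,   "rto_minutes": 60,   "method": "continuous_data_protection"},
    "important":        {"rpo_minutes": 240,  "rto_minutes": 480,  "method": "replication_plus_snapshots"},
    "standard":         {"rpo_minutes": 1440, "rto_minutes": 2880, "method": "nightly_backup"},
}

APPLICATIONS = {
    "Oracle Financials": "mission_critical",   # protected as an application, not as a list of LUNs
    "HR self-service":   "important",
    "File archives":     "standard",
}

def protection_plan(app_name):
    """Return the tier and recovery objectives the policy applies to this application."""
    tier = APPLICATIONS[app_name]
    return {"application": app_name, "tier": tier, **POLICIES[tier]}

print(protection_plan("Oracle Financials"))
```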

Gardner: How do we look at this shift and think about extending that policy-driven and dynamic environment at the practical level of use?

Maxwell: With the increased amount of virtual data out there, which just adds to the whole pot of heterogeneous environments, whether you have Windows and Linux, MySQL, Oracle, or Exchange, it's impossible for these people who are responsible for the protection and the recoverability of data to have the skills needed to know each one of those apps.

We want to make it as easy to back up and recover a database as it is a flat file. The fine line that we walk is that we don't want to dumb the product down. We want to provide intuitive GUIs, a user experience that is a couple of clicks away to say, "Here is a database associated with the application. What point do I want to recover to?" and recover it.

If there needs to be some more hands-on or more complicated things that need to be done, we can expose features to maybe the database administrator (DBA), who can then use the product to do more complex recovery or something to that effect.

We've got to make it easy for this generalist, no matter what hypervisor -- Hyper-V or VMware, a combination of both, or even KVM or Xen -- which database, which operating system, or which platform.

Again, they're responsible for everything. They're setting the policies, and they shouldn't have to be specialists. They shouldn't have to be an Exchange administrator, an Oracle DBA, or a Linux systems administrator to be able to recover this data.

We're going to do that in a nice, pretty package. Today, many people here at Quest walk around with a tablet PC as much as they do with their laptop. So our next-generation user interface (UI) for NetVault XA is being designed with tablet computing in mind, where you can swipe data and your toolbar is on the left and right, as if you're holding the device with your thumbs -- that type of thing.

Gardner: Are there any other technology approaches that Quest is involved with that further explain how some of these challenges can be met?

Maxwell: There are two things I want to mention. Today, Quest protects VMware and Microsoft Hyper-V environments, and we'll be expanding the hypervisors we support over the next 12 months. There are certainly going to be a lot of changes around Windows Server 2012 and Hyper-V, which Microsoft has made a lot more robust.

There are a lot more things for us to exploit, because we're envisioning customer environments where they're going to have multiple hypervisors, just as today people have multiple operating systems and databases.

We want to take care of that, mask some of the complexity, and allow people to have cross-hypervisor recoverability. In other words, we want to enable safe failover of a VMware ESXi system to Microsoft Hyper-V, or vice versa.

There's another interesting thing that has been a challenge for the engineers here at Quest. It gets into how you back up or protect data differently in virtual environments. Our vRanger product is the market leader, with more than 40,000 customers, and it's completely agentless.

As we've evolved the product over the past seven years, we've gone through three generations and exploited various APIs. With vRanger, we've now moved to what is called a virtual appliance architecture. We have a vRanger service that performs backup and replication for one VM or for hundreds of VMs, whether they exist on that one physical server or in a virtual cluster. So this virtual appliance can even protect VMs that exist on other hardware.

Scalability

The beauty of this is, first, the scalability. There's one software application running, and it's highly controllable -- you can control the resources it uses as it replicates, protects, and recovers all of my VMs. That's easy to manage, versus having to have an agent installed in every one of those VMs.

Two, there's no overhead. The VMs don't even know, in most cases, that a backup is occurring. In the case of VMware, we use ESXi services that allow us to go out there, snapshot the virtual disks, called VMDKs, and back up or replicate the data.

There's a service in Windows called Volume Shadow Copy Service, or VSS for short, and one of the unique things that Quest does with our backup software is synchronize the snapshot of the virtual disks with the application, via VSS, so we have a consistent point-in-time backup.

To communicate with the guest, we dynamically inject binaries into the VM that handle that process and then remove themselves. So, for a very short time, there's something running in that VM, but then it's gone, and that allows us to have a consistent backup.
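As a generic illustration of the quiescing idea -- not vRanger's own injected-binary mechanism -- here is a short pyVmomi sketch that asks vSphere for an application-consistent snapshot by setting quiesce=True, which has VMware Tools coordinate with VSS inside the guest. The vCenter host, credentials, and VM name below are placeholders.

```python
# Illustrative only: a generic way to request an application-consistent
# (VSS-quiesced) snapshot through the VMware vSphere API with pyVmomi.
# This is not vRanger's mechanism; host, credentials, and VM name are placeholders.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

context = ssl._create_unverified_context()          # lab use only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "exchange-01")

    # quiesce=True asks VMware Tools in the guest to coordinate with VSS,
    # so the snapshot of the VMDKs is application-consistent.
    task = vm.CreateSnapshot_Task(name="backup-point",
                                  description="consistent point-in-time copy",
                                  memory=False, quiesce=True)
    WaitForTask(task)
    # A backup application would now read the frozen VMDKs from the snapshot,
    # then delete the snapshot once the copy is complete.
finally:
    Disconnect(si)
```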
One of the beauties of virtualization is that I can move data without the application being conscious of it happening.


That way, from that one image backup that we've done, I can restore an entire VM, individual files, or in the case of Microsoft Exchange or Microsoft SharePoint, I can recover a mailbox, an item, or a document out of SharePoint.

Replicate data


We replicate data among various Quest facilities. Then, we can bring up an application that was running in location A in location B, on unlike hardware. It can be completely different storage and completely different servers, but since they're VMs, it doesn't matter.

That kind of flexibility that virtualization brings is going to give every IT organization in the world the type of failover capabilities that used to exist only for the Global 1000, which had to set up a hot site or a second data center. They would use very expensive, proprietary, hardware-based replication, so you had to have like arrays, like servers, and all of that, just to have availability.

Now, with virtualization, it doesn't matter, and of course we have plenty of bandwidth, especially here in the United States. So it's very economical. This gets back to our survey, which showed that 73 percent of IT organizations were concerned about recovering data -- and that's not just recovering a file or a database.

Two years ago, when people talked about cloud and data protection, they were just considering the cloud as a target: I would back up to the cloud or replicate to the cloud. Now, we're talking about actually putting data protection products in the cloud, so you can back up the data locally within the cloud and then maybe even replicate it or back it up back to on-prem, which is kind of a novel concept if you think about it.

If you host something in the cloud, you can back it up locally up there and then actually keep a copy on-prem. The cloud is also where we're looking at having generic support for failover into the cloud, working with various service providers so that you can pre-provision VMs out there, for example.

You're replicating data. You sense that you have had a failure, and all you have to do is, via software, bring up those VMs, pointing them at the disk replicas you put up there.
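As a hedged sketch of that pre-provisioning pattern, here is what the software-driven failover step might look like against a generic cloud API (boto3 for AWS in this example). The instance and volume IDs are placeholders, the mapping of recovery VMs to replica disks is assumed to have been set up in advance, and nothing here is tied to a particular Quest product.

```python
# Hypothetical failover sketch: bring up pre-provisioned cloud VMs and point
# them at replicated disks. Instance and volume IDs are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Mapping of pre-provisioned (stopped) recovery instances to the volumes
# that have been receiving replicated data.
recovery_plan = {
    "i-0123456789abcdef0": "vol-0aaaa1111bbbb2222",   # app server
    "i-0fedcba9876543210": "vol-0cccc3333dddd4444",   # database server
}

def fail_over(plan):
    for instance_id, volume_id in plan.items():
        # Attach the replica volume as the data disk, then power on the VM.
        ec2.attach_volume(InstanceId=instance_id, VolumeId=volume_id, Device="/dev/sdf")
        ec2.start_instances(InstanceIds=[instance_id])
        print(f"started {instance_id} using replica {volume_id}")

# fail_over(recovery_plan)   # run once a failure has been detected
```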

Different cloud providers

Then, there's the concept of what you do if a certain percentage of your IT apps are hosted in the cloud by different cloud providers. Do you want to be able to replicate the data between cloud vendors? Maybe you have data that's hosted at Amazon Web Services. You might want to replicate it to Microsoft Azure, or vice versa, or you might want to replicate it on-premises (on-prem).

So there's going to be a lot of neat hybrid options. The hybrid cloud is going to be a topic that we're going to talk about a lot now, where you have that mixture of on-prem, off-prem, hosted applications, etc., and we are preparing for that.

Gardner: Are there some best practices you've seen in the market about how to go about this, or at least to get going?

Maxwell: The number one thing is to find a partner. At Quest, we have hundreds of technology partners that can help companies architect a strategy utilizing the Quest data protection solutions.

Again, choose a solution that hits all the key points. In the case of VMware, you can go to VMware's site and look for VMware Ready-Certified Solutions. Same thing with Microsoft, whether it's Windows Server 2008 or 2012 certification. Make sure that you're getting a solution that's truly certified. A lot of products say they support virtual environments, but they don't have that real certification, and as a result, they can't do a lot of the innovative things that I've been talking about.

So find a partner who can help, or we at Quest can certainly help you find someone to architect your environment and even implement the software for you, if you so choose. Then, choose a solution that is blessed by the appropriate vendor and has passed their certification process.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: Quest Software.

You may also be interested in:

New Embarcadero AppWave for ISVs provides end-to-end mobile app experience for desktop PCs

Embarcadero Technologies today announced the availability of a new version of the AppWave business platform and PC app store designed to help independent software vendors (ISVs) improve their customer experience and drive new revenue growth.

AppWave for ISVs expands on the Embarcadero experience with AppWave for enterprise customers and their end-users. AppWave simplifies and expedites how desktop PC software is marketed, sold, delivered, tracked, and maintained, allowing existing client-server apps to be delivered more as a service. [Disclosure: Embarcadero Technologies is a sponsor of BriefingsDirect podcasts.]

Available via a free download, the AppWave platform gives users access to more than 250 free PC productivity apps for general business, marketing, design, data management, and development, including OpenOffice, Adobe Acrobat Reader, 7Zip, FileZilla, and more, said Embarcadero, based in San Francisco.

AppWave users also can add internally developed and commercial software titles, such as Adobe Creative Suite products and Microsoft Visio, for on-demand access, control, and visibility into software titles they already own. Customer apps can also be converted for use on AppWave, via AppWave Studio tools.

Easily acquired apps

The AppWave platform converts valued, but often cumbersome, business software into easily acquired and consumed "apps," so business users don't have to wait in line for IT to order, install, and approve the work tools they really need.

With AppWave, companies have a consumer-like app experience with the software they commonly use. With rapid, self-service access to apps, and real-time tracking and reporting of software utilization, the end result is a boost in productivity and lowering of software costs. Pricing to enable commercial and custom software applications to run as AppWave apps starts at $10 to $400 per app.


There's a dynamic shift under way in the PC software market. Software as a service (SaaS) and the consumerization of IT have changed the way software is acquired and used. Vendors such as Salesforce.com and Google originated in the cloud, and Microsoft is moving its Office franchise to cloud-based streaming. Even enterprise IT is transitioning to a services-delivery model. End-users now expect the flexibility of mobile apps on their desktops, and ISVs must look for ways to meet these evolving demands.

To me, ISVs need to play both offense and defense in this new climate. They need to provide their customers ongoing value in existing apps and delivery models, while also providing a path to the future, especially for mobile-tier delivery. AppWave for ISVs helps provide more runway for existing code, while setting up an on-ramp to pure SaaS and cloud models. This is a revenue path, too, allowing license revenue from existing apps to continue even as services revenue develops.

What's more, using AppWave, ISVs can show their customers how the total cost of supporting apps goes down, via more precise payments based on actual use. So the AppWave path also works for enterprises as they transition to a services-based, pay-as-you-go model.

I can easily see, too, where Embarcadero may create its own network (or a partner network) to deliver AppWave apps from a cloud. This would be a "push" strategy for ISVs and a "pull" strategy for enterprises, as end-users could discover and procure apps without the need for much IT involvement. Think of it as a vending machine for software, either from the corporate network or the cloud. For now, however, AppWave is used for on-premises apps in enterprises.

“As a software vendor, we developed AppWave’s unique application delivery and consumption experience for our own enterprise customers,” said Wayne Williams, CEO of Embarcadero Technologies. “We’ve seen the benefits in revenue growth and customer satisfaction, and we believe other software vendors can emulate that success using AppWave for ISVs.”

AppWave for ISVs

Among the benefits in the new version of AppWave for ISVs:
  • ISVs can deliver new capabilities to market much sooner using AppWave’s frictionless “push” delivery model, keeping customers always up to date. They can instantly offer a full portfolio of on-demand applications, including beta and trial versions as well as updates and upgrades, without recoding or modifying existing products.

  • Through AppWave, ISVs can provide entire product portfolios on site at an enterprise for end-user review, trial, and use.

  • AppWave assists ISVs through new and expanded licensing opportunities. New users are acquired through peer referrals and on-demand trials. ISVs can also take advantage of integrated promotional capability including streaming banners and automated electronic direct mail to cross-sell additional applications, up-sell instant upgrades to fuller featured versions of products already being used, and offer other services designed to drive higher levels of customer retention, including extended service contracts.

User benefits

Customers and end users can also benefit from the new version's features, including:
  • App discovery – Smart AppLinks and powerful search capabilities lead users to the right apps, delivering increased productivity and satisfaction.

  • App broadcast and socialization – Apps are published and streamed on-demand from AppWave, eliminating lag time and the burden on enterprise IT.

  • App streaming – Desktop PC software is transformed into zero-install, zero-footprint apps that stream from public or private clouds.
You may also be interested in: