Thursday, June 10, 2010

HP BTO executive on how cloud service automation aids visibility and control over total management lifecycles

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: HP.

The latest BriefingsDirect executive interview centers on gaining visibility and control into the IT services management lifecycle while progressing toward cloud computing. We dig into the Cloud Service Automation (CSA) and lifecycle management market and offerings with Mark Shoemaker, Executive Program Manager, BTO Software for Cloud at HP.

As cloud computing in its many forms gains traction, higher levels of management complexity are inevitable for large enterprises, managed service providers (MSPs), and small-to-medium sized businesses (SMBs). Gaining and keeping control becomes even more critical for all these organizations, as applications are virtualized and as services and data sourcing options proliferate, both inside and outside of enterprise boundaries.

More than just retaining visibility, however, IT departments and business leaders need the means to fine-tune and govern services use, business processes, and the participants accessing them across the entire services ecosystem. The problem is how to move beyond traditional manual management methods, while being inclusive of legacy systems to automate, standardize, and control the way services are used.

We're here with HP's Shoemaker to examine an expanding set of CSA products, services, and methods designed to help enterprises exploit cloud and services values, while reducing risks and working toward total management of all systems and services. The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Shoemaker: When we talk about management, it starts with visibility and control. You have to be able to see everything. Whether it’s physical or virtual or in a cloud, you have to be able to see it and, at some point, you have to be able to control its behavior to really benefit.

Once you marry that with standards and automation, you start reaping the benefits of what cloud and virtualization promise us. To get to the new levels of management, we’ve got to do a better job.

Up until a few years ago, everything in the data center and infrastructure had a physical home, for the most part. Then, virtualization came along. While we still have all the physical elements, now we have virtual and cloud strata that require the same level of diligence in management and monitoring, but they move around.

Where we're used to having things connected to physical switches, servers, and storage, those things are actually virtualized and moved into the cloud or virtualization layer, which makes the services more critical to manage and monitor.

All the physical things

Cloud doesn’t get rid of all the physical things that still sit in data centers and are plugged in and run. It actually runs on top of that. It actually adds a layer, and companies want to be able to manage the public and private side of that, as well as the physical and virtual. It just improves productivity and gets better utilization out of the whole infrastructure footprint.

I don’t know many IT shops that have added people and resources to keep up with the amount of technology they have deployed over the last few years. Now, we're making that more complex.

They aren't going to get more heads. There has to be a system to manage it. The businesses are going to be more productive, the people are going to be happier, and the services are going to run better.

We're looking at a more holistic and integrated approach in the way we manage. A lot of the things we're bringing to bear -- CSA, for example -- are built on years of expertise around managing infrastructures, because it’s the same task and functions.

Ensuring the service level

We’ve expanded these [products and services] to take into account the public cloud ... . We've been able to point these same tools back into a public cloud to see what’s going on and make sure you're getting what you're paying for, and getting what the business expects.

CSA products and services are the product of several years of actually delivering cloud. Some of the largest cloud installations out there run on HP software right now. We listened to what our customers told us, took a hard look at the reference architecture we created over those years, which encompassed all the different elements you could bring to bear in a cloud, and started looking at how to bring that to market so that customers can gain benefit from it more quickly.

We want to be able to come in, understand the need, plug in the solution, and get the customer up and running and managing the cloud or virtualization inside that cloud as quickly as possible, so they can focus on the business value of the application.

The great thing is that we’ve got the experience. We’ve got the expertise. We’ve got the portfolio. And, we’ve got the ability to manage all kinds of clouds, whether, as I said, it’s infrastructure as a service (IaaS) or platform as a service (PaaS) that your software's developed on, or even a hybrid solution, where you are using a private cloud along with a public cloud that actually bursts up, if you don’t want to outlay capital to buy new hardware.

We have the ability, at this point, to tap into Amazon’s cloud and actually let you extend your data center to provide additional capacity and then pull it back in on a per-use basis, connected with the rest of your infrastructure that we manage today.
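The burst-and-pull-back pattern described here can be sketched as a simple capacity policy: add rented public-cloud capacity when local utilization runs hot, and release it when demand falls so you only pay per use. This is a minimal illustration, not HP's or Amazon's actual API; the thresholds and the function are hypothetical.

```python
# Sketch of a cloud-bursting policy. Thresholds are illustrative.
BURST_THRESHOLD = 0.80    # burst out when local utilization exceeds 80%
RELEASE_THRESHOLD = 0.50  # pull capacity back when it falls below 50%

def plan_capacity(local_utilization: float, cloud_instances: int) -> int:
    """Return the desired number of public-cloud instances."""
    if local_utilization > BURST_THRESHOLD:
        return cloud_instances + 1   # extend the data center into the cloud
    if local_utilization < RELEASE_THRESHOLD and cloud_instances > 0:
        return cloud_instances - 1   # pull it back in; stop paying for it
    return cloud_instances           # steady state: no change
```

A management loop would call something like `plan_capacity` on each monitoring interval and reconcile the result against the provider, which is where the per-use billing Shoemaker mentions comes in.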

A lot of customers we talk to today are already engaged in a virtualization play, bringing virtualization into their data centers and putting it on top of the physical.



We announced CSA on May 11, and we're really excited about what it brings to our customers ..., industry-leading products together with solutions that allow you to control, build, and manage a cloud.

We’ve taken the core elements. If you think about a cloud and all the different pieces, there is that engine in the middle, resource management, system management, and provisioning. All those things that make up the central pieces are what we're starting with in CSA.

Then, depending on what the customer needs, we bolt on everything around that. We can even use the customers’ investments in their own third-party applications, if necessary and if desired.

As the landscape changes, we're looking at how to change our applications as well. We have a very large footprint in the software-as-a-service (SaaS) arena right now where we actually provide a lot of our applications for management, monitoring, development, and test as SaaS. So, this becomes more prevalent as public cloud takes off.

Also, we're looking at what’s going to be important next. What are the technologies and services that our customers are going to need to be successful in this new paradigm?
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: HP.


HP service aims to lower cost and risk by tackling vulnerabilities early in 'devops' cycle

Security breaches and the cost of repairing and patching enterprise applications hang like a cloud over every company doing business today. HP is taking direct aim at that problem today with the release of a security service that aims to prevent vulnerabilities and to bake security and reliability in at the earliest stages of application design and architecture.

Part of HP's Secure Advantage, the Comprehensive Applications Threat Analysis (CATA) service provides architectural and design guidance alongside recommendations for security controls and best practices. By addressing and eliminating application vulnerabilities as early in the lifecycle as possible, companies stand to gain incredible returns on investment (ROI) and drastically lower total cost of ownership (TCO) across the "devOps" process, according to HP. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

"Customers are under increasing pressure from threats that exploit security weaknesses that were either missed or insufficiently addressed during the early lifecycle phases," said Chris Whitener, chief security strategist of Secure Advantage. Whitener added that he believes HP is the first company to come to market with such a service.

HP has been using this service internally for more than six years and, according to Whitener, has seen a return of 5 to 20 times the cost of implementation. And this, he says, is just on things that can be measured. The service has freed up a lot of schedule time formerly spent finding and fixing application vulnerabilities.

Two problems

Many other risk-analysis programs come later in the development process, meaning that developers often miss vulnerabilities at the earliest stages of design. That brings up two problems, according to John Diamant, HP's Secure Product Development strategist: the risks associated with the vulnerabilities and the cost of patching the software.

"By addressing these vulnerabilities early in the process," Diamant said, "we're able to reduce the risk and eliminate the cost of repair."

The new service offers two main thrusts for increased security:
  • A gap analysis to examine applications and identify often-missed technical security requirements imposed by laws, regulations, or best practices.
  • An architectural threat analysis, which identifies changes in application architecture to reduce the risk of latent security defects. This also eliminates or lowers costs from security scans, penetration tests, and other vulnerability investigations.
While lowering development costs, using a security service early in the lifecycle can also lower the threat of security breaches, which can cost in the millions of dollars in fines and penalties, as well as the fallout in a loss of customer confidence.

Security and proper applications development, of course, come into particular focus when cloud computing models and virtualization are employed, and where an application is expected to scale dramatically and dynamically.

Although HP plans to develop a training program sometime in the future, right now, this is offered as a service using HP personnel who have been schooled in the processes and who have been using it inside HP for years. For more information, go to http://h10134.www1.hp.com/services/applications-security-analysis/.


Wednesday, June 9, 2010

Adopting cloud-calibre security now pays dividends across all IT security concerns

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. View the video. Sponsor: Akamai Technologies.

To view a full video of the panel discussion on cloud-based security, please go to the registration page.

Today's headlines point to more sophisticated, large-scale, and malicious online activities. For some folks, therefore, the conclusion seems to be that the cloud computing model and vision are not up to the task when it comes to security.

But at the RSA Conference earlier this year, a panel came together to talk about security and cloud computing, to examine the intersection of cloud computing, security, Internet services, and Internet-based security practices to uncover differences between perceptions and reality.

The result is a special sponsored BriefingsDirect podcast and video presentation that takes stock of cloud-focused security -- not just as a risk, but also as an amelioration of risk across all aspects of IT.

Join panelists Chris Hoff, Director of Cloud and Virtualization Solutions at Cisco Systems; Jeremiah Grossman, the founder and Chief Technology Officer at WhiteHat Security, and Andy Ellis, the Chief Security Architect at Akamai Technologies. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Grossman: An interesting paradigm shift is happening. When you look at website attacks, things haven't changed much. An application that exists in the enterprise is the same application that exists in the cloud. For us, when we are attacking websites and assessing their security, it doesn't really matter what infrastructure it's actually on. We break into it just the same as everything else.

Our job, in the website vulnerability management business, is to find those vulnerabilities ahead of time and help our customers fix those issues before they become larger problems. And if you look at any security report on the Web right now, as far as security goes, it's a web security world.

What's different [with cloud] among our customer base is that they can't run to their comfort zone. They can't run to secure their enterprise with firewalls, intrusion detection systems, and encryption. They have to focus on the application. That's what's really different about cloud, when it comes to web security. You have to focus on the apps, because you have nothing else to go on.

Understand your business

Ellis: The first thing you have to do is to understand your own business. That's often the first mistake that security practitioners may make. They try to apply a common model of security thinking to very unique businesses. Even in one industry, everybody has a slightly different business model.

You have to understand what risks are acceptable to your business. Every business is in the practice of taking risk. That's how you make money. If you don't take any risk, you're not going to make money. So, understand that first. What are the risks that are acceptable to the business, and what are the ones that are unacceptable?

Security often lives in that gray area in between. How do we take risks that are neither fully acceptable nor fully unacceptable, and how do we manage them in a fashion to make them one or the other? If they're not acceptable, we don't take them, and if they are acceptable, we do. Hopefully we find a way to increase our revenue stream by taking those risks.
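The accept/decline/gray-area framing above can be sketched as a tiny triage rule: clearly acceptable risks are taken, clearly unacceptable ones are declined, and security work lives in the middle, managing gray-area risks toward one side or the other. The scoring scale and thresholds are invented for illustration, not drawn from the discussion.

```python
# Toy risk triage. Scores and cutoffs are hypothetical.
ACCEPT_BELOW = 3   # below this score, just take the risk
DECLINE_ABOVE = 7  # above this score, don't take the risk

def triage(risk_score: int) -> str:
    """Classify a risk as accept, decline, or mitigate (the gray area)."""
    if risk_score < ACCEPT_BELOW:
        return "accept"
    if risk_score > DECLINE_ABOVE:
        return "decline"
    return "mitigate"  # manage it until it becomes acceptable or is dropped
```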

... There's a huge gap between what people think is secure and what people are doing today in trusting the security in the cloud. When we look at our customer base, over 90 of the top 100 retailers on the Internet are using our cloud-based solutions to accelerate their applications -- and what's more mission-critical than accepting money from your customers?

At Akamai, we see that where people are saying, "The cloud is not secure, we can't trust the cloud." At the same time, business decision makers are evaluating the risk and moving forward in the cloud.

A lot of that is working with their vendors to understand their security practices and comparing that to what they would do themselves. Sometimes, there are shifts. Cloud gives you different capabilities that you might be able to take advantage of, once you're out in the cloud.

Hoff: I like to say that if your security stinks before you move to the cloud, you will be pleasantly unsurprised by change, because it’s not going to get any better -- or probably not even necessarily any worse -- when you move to cloud computing.

What we're learning today is that if we secure our information and applications properly and the infrastructure is able to deal with the dynamism, you will, by default, start to see derivative impacts and benefits on security, because our models will change. At least, our thinking about security models will change.

We in the security industry in some way try to hold the cloud providers to a higher standard. I'm not sure that the consumer, who actually uses these services, sees much of a difference in terms of what they expect, other than it should be up, it should be available, and it should be just as secure as any other Internet-based service they use.

Those cloud providers -- cloud service and cloud computing providers -- are in the business of making sure that they can offer you really robust delivery. At this time, they focus there. We have a challenge to take everything we have done previously, in all these other different models, still do that, and deal with the implementation and operational elements that cloud computing brings: elasticity, dynamism, and all this fantastic set of capabilities.

So we get wrapped around the axle many times in discussions about cloud, where a lot of what we are talking about still needs to be taken care of from an infrastructure and application standpoint.

Ellis: That’s the challenge for people who are moving out to the cloud. That area may be in the purview of the provider. While they may trust the provider, and the provider has done the best they can do in that arena, when they still see risks, they can no longer say, "I'll just put in a firewall. I'll just do this." Now, they have to tackle a really sticky wicket. Do you have a safe application wherever it lives?

That’s where people run into a challenge: "It’s cloud. Let me make the provider responsible." But, at the end of the day, the overall risk structure is still the responsibility of the business -- ultimately, of the data owner, the business that is actually using the compute cycles.

It's not yours

Grossman: To piggyback on what Andy said, something has been lost. When you host an application internally, you can build it, you can deploy it, and you can test it. Now, all of a sudden, you've brought in a cloud provider, on somebody else’s infrastructure, and you have to get permission to test it. It’s not yours anymore.

Actually, one of the big things [to attend to] out there is a right to test. You have no right to test these infrastructure systems. If you do so without permission, it's illegal. So, you have lost visibility. You've lost technical visibility and security of the application.

When the cloud provider changes the app, it changes the risk profile of the application, too, but you don’t know when that happens and you don’t know what the end result is. There's a disconnect between the consumer, the business, and the cloud computing provider or whatever the system is.

Hoff: Cloud computing has become a fantastic forcing function, because of what it has done to the business and to IT. We talked about paradigm shifts and how important this is in the overall advancement of computing.

The reality is that cloud causes people to say, "If the thing that’s most important to me is information and protecting that information, and applications are conduits to it, and the infrastructure allows it to flow, then maybe what I ought to do is take a big picture view of this. I ought to focus on protecting my information, content, and data, which is now even more interestingly a mixture of traditional data, but also voice and video and mixed media applications, social networks, and mashups."

Fantastic interconnectivity

The complexity comes about, because with collaboration, we have enabled all sorts of fantastic interconnectivity between what were previously disparate little mini-islands, with mini-perimeters that we could secure relatively well.

Application security and information security, tightly coupled with an awareness of the infrastructure that powers them (even though that infrastructure is supposed to be abstracted in cloud computing), is really where people have a difficult time grasping the concepts: where we are today, what cloud computing offers them or doesn’t, and what that means for the security models.


Ellis: There's a great initiative going on right now called CloudAudit, which is aimed at helping people think through the security of a process and how you share controls between two disparate entities, so we can make those decisions at a higher level.

If I am trusting my cloud provider to provide some level of security, I should get some insight into what they're doing, so that I can make my decisions as a business unit. I can see the changes there, the changes I am taking advantage of, and how that fits my entire software development lifecycle.

Cloud computing, depending on who you talk to, encompasses almost everything: your kitchen blender, any element that you happen to connect to your enterprise and your home life.



It’s still nascent. People are still changing their mindset to think through that whole architecture, but we're starting to see that more and more -- certainly within our customer base -- as people think, "I'm out in the cloud. How is that different? What can I take advantage of that’s there that wasn’t there in my enterprise? What are the things that aren’t there that I am used to that now I have to shift and adapt to that change?"

Hoff: What's interesting about cloud computing as a derivative set of activities that you might have focused on from a governance perspective, with outsourcing, or any sort of thing where you have essentially given over control of the operation and administration of your assets and applications, is that you can outsource responsibility, but not necessarily accountability. That's something we need to remember.

Think about the notion of risk and risk management. I was on a panel the other day and somebody said, "You can't say risk management, because everyone says risk management." But, that's actually the answer. If I understand what's different and what is the same about cloud computing or the cloud computing implementation I am looking at, then I can make decisions on whether or not that information, that application, that data, ought to be put in the hands of somebody else.

No one-size-fits-all

In some cases, it can't be, for lots of real, valid reasons. There's no one-size-fits-all for cloud. Those issues force people to think about what is the same and what is different in cloud computing.

Previously, you introduced the discussion about the CSA [the Cloud Security Alliance]. The things we really worked on initially were 15 areas of concern, and they're now consolidated to 13 areas of concern. What's different? What's the same? How do I need to focus on this? How can I map my compliance efforts? How can I assess, even if there are technical elements that are different in cloud computing? How can I assess the operational and cultural impacts?

Awareness of break-ins

Grossman: What I've seen in the last couple of years is that what drives security awareness is break-ins. Whether the bad guys are nation-state-sponsored actors or organized criminals after credit card numbers, breaches happen. They're happening in record numbers, and the attackers are stealing everything they can get their hands on.

Fortunately or unfortunately, from a cloud computing standpoint, all the attacks are largely the same, whether one application is here or in the cloud. You attack it directly, and all the methodologies to attack a website are the same. You have things like cross-site scripting, SQL injection, cross-site request forgery. They are all the same. That’s one way to access the data that you are after.

The other way is to get on the other half of web security. That’s the browser. You infect a website, the user runs into it, and they get infected. You email them a link. They click something. You infect them that way. Once you get on to the host machine, the client side of the connection, then you can leverage those credentials and then get into the cloud, the back-end way, the right way, and no one sees you.
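Grossman's point that the attack methods are the same wherever the application runs can be seen concretely with SQL injection, one of the techniques he names. This is a minimal sketch using Python's standard-library sqlite3; the table, column names, and data are made up for illustration.

```python
# The same injection flaw exists on-premise or in a cloud: building SQL by
# string interpolation lets attacker-controlled input rewrite the query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name: str):
    # VULNERABLE: user input is spliced directly into the SQL text.
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name: str):
    # SAFE: a parameterized query treats the input as data, never as SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"             # classic injection payload
assert lookup_unsafe(payload)        # injection dumps every row
assert lookup_safe(payload) == []    # parameterized query matches nothing
```

The fix is infrastructure-independent, which is exactly the point: moving the application to a cloud neither introduces nor removes this class of defect.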

Breaches make headlines. Headlines make people nervous, whether it's businesses or consumers. When a business outsources things to the cloud or a SaaS provider, they still have this nervous reaction about security, because their customers have this nervous reaction about security. So they start asking about security. "What are you doing to protect my data?"

All of a sudden, if that cloud provider, that vendor, takes security seriously and can prove it, demonstrate it, and get the market to accept it, security becomes a differentiating factor. It becomes an enabler of the top line, rather than a cost on the bottom line.

Ellis: I like to look at security as being a business-enabler in three areas. The obvious one, we all think of, is risk reduction. How can I reduce my risk with cloud-based security services? Are there ways in which I can get out there and do things more safely? I'm not necessarily going to change anything else about my business. That's great, and that's our normal model.


Security can also be a revenue-enabler, and it can be a protection of revenue. Web application firewalls and fraud-mitigation services are great examples. There are a lot of services available through the cloud that can be used to protect your brand and your revenue against loss, but also help you grow revenue. As you just said, it's all about trust. People go back to brands that they trust, and security can be a key component of that.

It doesn't always have to be visible to the end user, but as you noted with the car industry, people build the perception around incidents. If you can be incident-free compared to your competition, that's a huge differentiator, as you go down into more and deeper activities that require deep trust with your end users.

A lot of what we try to do is build a wrapper in a sandbox around each customer to give them the same, consistent level of security. A big challenge in the enterprise model is that for every application that you stand up, you have to build that security stack from the ground up.


One advantage cloud does give you is that, if you are working with somebody who has thought about this, you can take advantage of practices they have already instituted. So, you get some level of commonality. Then, if a customer sees something and says, "You should improve this," that improvement can benefit the entire customer base. Cloud has a benefit there to match some of the weaknesses it may have elsewhere.

Historically, in the enterprise model, we think about data in terms of being tied to a given application. That’s not really accurate. The data still moves around inside an enterprise. As Jeremiah noted, the weak point is often the browser. Compromise the client, and you get access to the data.

As people move to cloud, they start to change their risk thinking. Now, they think about the data and everywhere it lives and that gives them an opportunity to change their own risk model and think about how they're protecting the data and not just a specific application it used to live in.

As we noted earlier, a large fraction of the Internet retailers are using cloud for their most mission-critical things, their financial data, coming through every time somebody buys something.

If you are willing to trust that level of data to the cloud, but then have a knee-jerk reaction about an internal web conference among 12 people, with a presentation that frankly most people aren’t going to care about, and say, "That’s too sensitive to be in the cloud," while your revenue stream is already in the cloud, it shows that we sometimes think parochially about security.

Grossman: What's interesting about security spending versus infrastructure spending, or just general IT spending, is that security seems diametrically opposed to the business. We spend the most money on applications and our data, but the least of our security spend goes there. We spend the least on infrastructure relative to applications, but that's where most of our security dollars go. Security and the business seem diametrically opposed.

What cloud computing does, and the reason for this talk, is that it flattens the world. It abstracts the infrastructure below and forces us to realign with the business. That's what cloud will bring, in a good way. It's just that you have to do it commensurate with the business.


Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. View the video. Sponsor: Akamai Technologies.


Friday, June 4, 2010

Analysts probe future of client architectures as HTML 5 and client virtualization advances loom over desktops

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript, or read a full copy. Charter Sponsor: Active Endpoints.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

The latest BriefingsDirect Analyst Insights Edition, Vol. 52, focuses on client-side architectures and the prospect of heightened disruption in the PC and device software arenas.

Such trends as cloud computing, service-oriented architecture (SOA), social media, software as a service (SaaS), and virtualization are combining and overlapping to upset the client landscape. If more of what users are doing with their clients involves services, then shouldn't the client be more services-ready? Should we expect one client to do it all very well, or do we need to think more about specialized clients that might be configured on the fly?

Today's clients are more tied to the past than to the future; one size fits all. Most clients consist of a handful of entrenched PC platforms, a handful of established web browsers, and a handful of PC-like smartphones. But what has become popular on the server, virtualization, has yet to be taken to its full potential on these edge devices. New types of dynamic, task-specific clients might emerge. We'll take a look at what they might look like.

Also, just as Microsoft's Windows 7 is quickly entering the global PC market, cloud providers are in an increasingly strong position to potentially favor certain client types or data- and configuration-synchronization approaches. Will the client lead the cloud, or vice versa? We'll talk about that too.

Either way, the new emphasis seems to be on full-media, webby activities, where standards and technologies are vying anew for some sort of a de-facto dominance across both rich applications as well as media presentation capabilities.

We look at the future of the client with a panel of analysts and guests: Chad Jones, Vice President for Product Management at Neocleus; Michael Rowley, CTO of Active Endpoints; Jim Kobielus, Senior Analyst at Forrester Research; Michael Dortch, Director of Research at Focus; JP Morgenthal, Chief Architect, Merlin International, and Dave Linthicum, CTO, Bick Group. The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Jones: In the client market, it's time for disruption. Looking at the general PC architectures, we have seen that since pretty much the inception of the computer, you really still have one operating system (OS) that's bound to one machine, and that machine, according to a number of analysts, is less than 10 percent utilized.

Normally, that's because you can't share that resource and really take advantage of everything that modern hardware can offer you. Dual cores and all the gigabytes of RAM available on the client are all great things, but if you can't have an architecture that can take advantage of them in a big way, then you get more of the same.

On the client side, virtualization is moving into all forms of computing. We've seen that with applications, storage, networks, and certainly the revolution that happened with VMware and the hypervisors on the server side. But the benefit of server virtualization was not only the ability to run multiple OSs side-by-side and consolidate servers, which is great but not as relevant to the client side. It’s really the ability to manage the machine at the machine level and to take OSs and move them as individual blocks of functionality in those workloads.

The same thing for the client can become possible when you start virtualizing that endpoint and stop doing management of the OS as management of the PC, and be able to manage that PC at the root level.

Imagine that you have your own personal Windows OS, that maybe you have signed up for Microsoft’s new Intune service to manage that from the cloud standpoint. Then, you have another Google OS that comes down with applications that are specific from that Google service, and that desktop is running in parallel with Windows, because it’s fully controlled from a cloud provider like Google. Something like Chrome OS is truly a cloud-based OS, where everything is supposed to be stored up in the cloud.

Those kinds of services, in turn, can converge into the PC, and virtualization can take that to the next level on the endpoint, so that those two things don’t overlap with each other, and a level of service, which is important for the cloud, certainly for service level agreements (SLAs), can truly be attained. There will be a lot of flexibility there.

Virtualization is a key enabler into that, and is going to open up PC architectures to a whole brave new world of management and security. And, at a platform level, there will be things that we're not even seeing yet, things that developers can think of, because they have options to now run applications and agents and not be bound to just Windows itself. I think it’s going to be very interesting.

With virtualization, you have a whole new area where cloud providers can tie in at the PC level. They'll be able to bundle desktop services and deliver them in a number of unique ways.



Linthicum: Cloud providers will eventually get into desktop virtualization. It just seems to be the logical conclusion of where we're heading right now.

In other words, we're providing all these very heavy-duty IT services, such as database, OSs, and application servers on demand. It just makes sense that eventually we're going to provide complete desktop virtualization offerings that pop out of the cloud.

The beauty of that is that a small business, instead of having to maintain an IT staff, will just have to maintain a few clients. They log into a cloud account and the virtualized desktops come down.

It provides disaster recovery based on the architecture. It provides great scalability, because basically you're paying for each desktop instance and you're not paying for more or less than you need. So, you're not buying a data center or an inventory of computers and having to administer the users.

That said, it has a lot more cooking to occur, before we actually get the public clouds on that bandwagon. Over the next few years, it's primarily going to be an enterprise concept and it's going to be growing, but eventually it's going to reach the cloud.

There are going to be larger companies. Google and Microsoft are going to jump on this. Microsoft is a prime candidate for making this thing work, as long as they can provide something as a service, which is going to have the price point that the small-to-medium-sized businesses (SMBs) are going to accept, because they are the early adopters.

Browser-based client

Rowley: When we talk about the client, we're mostly thinking about the web-browser based client as opposed to the client as an entire virtualized OS. When you're using a business process management system (BPMS) and you involve people, at some point somebody is going to need to pull work off of a work list and work on it and then eventually complete it and go and get the next piece of work.

That’s done in a web-based environment, which isn’t particularly unusual. It's a fairly rich environment, which is something that a lot of applications are going to. Web-based applications are going to a rich Internet application (RIA) style.

We have tried to take it even a step further and have taken advantage of the fact that, by moving to some of these rich infrastructures, you can do not just some of the presentation tier of an application on the client; you can do the entire presentation tier in the web browser. Its communication to the server, instead of being traditional HTML, uses more of a web-service approach, going directly into the services tier on the server. That server can be in a private cloud or, potentially, a public cloud.




What's interesting is that by not having to install anything on the client, as with any of these discussions we are talking about, that's an advantage, but also on the server, not having to have a different presentation tier that's separate from your services tier.

You go directly from your browser client into the services tier on the server, and it just decreases the overall complexity of the entire system. That's possible, because we base it on Ajax, with JavaScript that uses a library that's becoming a de-facto standard called jQuery. jQuery has the power to communicate with the server and then do all of the presentation logic locally.
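The pattern Rowley describes, with the browser owning the whole presentation tier and the server sending only data, can be sketched in a few lines. This is a minimal illustration, not Active Endpoints' actual code; the task fields and markup are hypothetical, and the services-tier response is inlined where a real page would fetch it with an Ajax call such as jQuery's $.getJSON:

```javascript
// Sketch: client-side presentation logic that renders JSON from a
// services tier. The server sends plain data, never HTML; the browser
// builds all of the presentation locally.
function renderWorkList(tasks) {
  var items = tasks.map(function (t) {
    return '<li class="task">' + t.name + ' (due ' + t.due + ')</li>';
  });
  return '<ul id="work-list">' + items.join('') + '</ul>';
}

// Simulated services-tier response (hypothetical shape).
var response = [
  { name: 'Approve invoice', due: '2010-06-14' },
  { name: 'Review claim',    due: '2010-06-15' }
];

var html = renderWorkList(response);
console.log(html);
```

Because only data crosses the wire, the same services tier can back any number of presentation styles without a separate server-side presentation layer.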

... I believe that Apple, growing dominant in the client space with both the iPhone and now the iPad, and its lack of support for either Silverlight or Flash, will be a push toward the standard space, the HTML5 using JavaScript, as the way of doing client-based rich Internet apps. There will be more of a coalescing around these technologies, so that potentially all of your apps can come through the one browser-based client.

Dortch: ... There are going to continue to be proprietary approaches to solving these problems. As the Buddhists like to say, many paths, one mountain. That's always going to be true. But, we've got to keep our eyes on the ultimate goal here, and that is, how do you deliver the most compelling services to the largest number of users with the most efficient use of your development resources?

Until the debate shifts more in that direction and stops being so, I want to call it, religious about bits and bytes and speeds and feeds, progress is going to be hampered. But, there's good news in HTML5, Android, Chrome, and those things. At the end of the day, there are going to be a lot of choices to be made.

The real choices to be made right now are centered on what path developers should take, so that, as the technologies evolve, they have to do as little ripping and replacing as possible. This is especially a challenge for larger companies running critical proprietary applications.

Morgenthal: I like to watch patterns. Look at where more applications have been created in the past three years, on what platform, and in what delivery mechanism than in any other way. Have they been web apps or have they been iPhone/Android apps?

You've got to admit that the web is a great vehicle for pure dynamic content. But, at the end of the day, when there is a static portion of at least the framework and the way that the information is presented, nothing beats that client that’s already there going out and getting a small subset of information, bringing it back, and displaying it.

I see us moving back to that model. The web is great for a fully connected high-bandwidth environment.

I've been following a lot about economics, especially U.S. economics, how the economy is going, and how it impacts everything. I had a great conversation with somebody who is in finance and investing, and we joked about how people are claiming they are getting evicted out of their homes. Their houses and homes are being foreclosed on. They can barely afford to eat. But, everybody in the family has an iPhone with a data plan.

Look what necessity has become, at least in the U.S., and I know it's probably similar in Korea, Japan, and parts of Europe. Your medium for delivery of content and information is that device in the palm that's got about a 300x200 display.

On the desktop, you have Adobe doing the same thing with AIR and its cross-platform, and it's a lot more interactive than some of the web stuff. JavaScript is great, but at some point, you do get degradation in functionality. At some point, you have to deliver too much data to make that really effective. That all goes away, when you have a consistent user interface (UI) that is downloadable and updatable automatically.

I have got a Droid now. Every day I see that little icon in the corner: I have got updates for you. I have updated my Seesmic three times, and my USA Today. It tells me when to update. It automatically updates my client. It's a very neutral type of platform, and it works very, very well as the main source for me to deliver content.

Virtualization is on many fronts, but I think what we are seeing on the phone explosion is a very good point. I get most of my information through my phone.



Now, sometimes, is that medium too small to get something more? Yeah. So where do I go? I go to my secondary source, which is my laptop. I use my phone as my usual connectivity medium to get my Internet.

So, while we have tremendous broadband capability growing around the world, we're living in a wireless world and wireless is becoming the common denominator for a delivery vehicle. It's limiting and controlling what we can get down to the end user in the client format.

Getting deconstructed

Kobielus: In fact, it's the whole notion of a PC being the paradigm here that's getting deconstructed. It has been deconstructed up the yin yang. If you look at what a PC is, and we often think about a desktop, it's actually simply a decomposition of services, rendering services, interaction services, connection and access, notifications, app execution, data processing, identity and authentication. These are all services that can and should be virtualized and abstracted to the cloud, private or public, because the clients themselves, the edges, are a losing battle, guys.

Try to pick winners here. This year, iPads are hot. Next year, it's something else. The year beyond, it's something else. What's going to happen is -- and we already know it's happening -- is that everything is getting hybridized like crazy.

All these different client or edge approaches are just going to continue to blur into each other. The important thing is that the PC becomes your personal cloud. It's all of these services that are available to you. The common denominator here for you as a user is that somehow your identity is abstracted across all the disparate services that you have access to.

All of these services are aware that you are Dave Linthicum, coming in through your iPad, or you are Dave Linthicum coming in through a standard laptop web browser, and so forth. Your identity and your content are all there and all secure, in a sense bringing process into it.

You don't normally think of a process as being a service that's specific to a client, but your hook into a process, any process, is your ability to log in. Then, have your credentials accepted and all of your privileges, permissions, and entitlements automatically provisioned to you.
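Kobielus's point, that logging in once should provision your privileges across the whole personal cloud, can be sketched as a single identity record that every service derives its entitlements from. The directory, roles, and permission names below are invented purely for illustration:

```javascript
// Sketch: one identity, many services. Authenticating resolves a single
// identity record; each service derives the user's entitlements from it
// rather than keeping its own account store.
var directory = {
  dana: { displayName: 'Dana', roles: ['analyst'] }
};

var entitlementsByRole = {
  analyst: { storage: ['read'], bpm: ['claim-task', 'complete-task'] }
};

function provision(userId, service) {
  var user = directory[userId];
  if (!user) throw new Error('unknown identity: ' + userId);
  // Union the entitlements granted by each of the user's roles.
  var grants = [];
  user.roles.forEach(function (role) {
    var perms = (entitlementsByRole[role] || {})[service] || [];
    grants = grants.concat(perms);
  });
  return { user: user.displayName, service: service, grants: grants };
}

var session = provision('dana', 'bpm');
console.log(session.grants); // entitlements follow the identity, not the device
```

The entitlements travel with the identity, so the same login works whether the user arrives through an iPad, a laptop browser, or any future client.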

Identity, in many ways, is the hook into this vast, personal cloud PC. That’s what’s happening.

Rowley: A lot of applications will really mix up the presentation of the work to be done by the people who are using the application, with the underlying business process that they are enabling.

If you can somehow tease those apart and get it so that the business process itself is represented, using something like a business process model, then have the work done by the person or people divided into a specific task that they are intended to do, you can have the task, at different times, be hosted by different kinds of clients.

Different rendering

Or, depending on the person, whether they're using a smartphone or a full PC, they might get a different rendering of the task, without changing the application from the perspective of the business person who is trying to understand what's going on. Where are we in this process? What has happened? What has yet to happen? Etc.
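Separating the process model from the task UI, as Rowley suggests, might look like this sketch: the task is one piece of data owned by the process layer, and the client type merely picks the renderer. The task shape and the two renderers are hypothetical:

```javascript
// Sketch: one task from the business process model, two renderings.
// The process layer only knows the task; the client decides how to draw it.
var task = { id: 42, title: 'Approve purchase order', assignee: 'dana' };

var renderers = {
  // Full PC: a richer layout.
  desktop: function (t) {
    return '<div class="task"><h2>' + t.title + '</h2>' +
           '<p>Assigned to ' + t.assignee + '</p></div>';
  },
  // Smartphone: a compact one-liner.
  mobile: function (t) {
    return '<span>' + t.title + '</span>';
  }
};

function renderTask(t, clientType) {
  var render = renderers[clientType] || renderers.desktop;
  return render(t);
}

var forPhone = renderTask(task, 'mobile');
var forPC = renderTask(task, 'desktop');
```

Adding a new client type means adding a renderer, while the process definition, and the business person's view of it, stays untouched.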

Then, for the rendering itself, it's really useful to have that be as dynamic as possible and not have it be based on downloading an application, whether it's an iPhone app or a PC app that needs to be updated, and you get a little sign that says you need to update this app or the other.

When you're using something like HTML5, you can get a lot of the functionality of some of these apps that currently you have to download, including, as somebody brought up before, the question of what happens when you aren't connected or are only partially connected.

Up until now, web-based apps very much needed to be connected in order to do anything. HTML5 is going to include some capabilities around much more functionality that's available, even when you're disconnected. That will take the technology of a web-based client to even more circumstances, where you would currently need to download one.
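The offline behavior Rowley anticipates for HTML5 comes down to buffering work locally and syncing on reconnect. Here is a minimal sketch of that flow; a plain array stands in for the browser's local storage, and the send call is a stand-in for the real web-service request:

```javascript
// Sketch: buffer completed work while disconnected, flush on reconnect.
// A browser app would persist the queue in localStorage or another HTML5
// offline store; an in-memory array keeps this flow runnable anywhere.
var outbox = [];
var delivered = [];
var online = false;

function completeTask(taskId) {
  var record = { taskId: taskId, completedAt: Date.now() };
  if (online) {
    send(record);
  } else {
    outbox.push(record); // queue locally until connectivity returns
  }
}

function send(record) {
  // Stand-in for the real web-service call to the services tier.
  delivered.push(record);
}

function reconnect() {
  online = true;
  while (outbox.length) send(outbox.shift()); // drain the local queue
}

completeTask(1); // offline: queued
completeTask(2); // offline: queued
reconnect();     // both records reach the server
```

The user keeps working through the outage; the application decides when and how the queued records are reconciled with the server.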

It's a little bit of a change in thinking for some people to separate out those two concepts, the process from the UI for the individual task. But, once you do, you get a lot of value for it.

Jones: I can see that as part of it as well. When you're able to start taking abstraction of management and security from outside of those platforms and be able to treat that platform as a service, those things become much greater possibilities.

Percolate and cook

I believe one of the gentlemen earlier commented that a lot of it needs some time to percolate and cook, and that’s absolutely the case. But, I see that within the next 10 years, the platform itself becomes a service, in which you can possibly choose which one you want. It’s delivered down from the cloud to you at a basic level.

That’s what you operate on, and then all of those other services come layered in on top of that as well, whether that’s partially through a concoction of virtualization and different OS platforms, coupled with cloud-based profiles, data access, applications and those things. That’s really the future that we're going to see here in the next 15 years or so.

... For the near-term, as the client space begins to shake out over the next couple of years, the immediate benefits are first around being able to take our deployment of at least the Windows platform from its current state, where either an image is done at Dell or, more often the case, the OS is deployed whenever I do a hardware refresh every three to four years, to a point where you can actually get a PC and put it onto the network.

You take out all the complexity of what the deployment questions are and the installation that can cause so many different issues, combined with things like normalizing device driver models and those types of things, so that I can get that image and that computer out to the corporate standard very, very quickly, even if it's out in the middle of Timbuktu. That's one of the immediate benefits.

Plus, start looking at help desk and the whole concept of desktop visits. If Windows dies today, all of your agents and recovery and those types of things die with it. That means I've got to send back the PC or go through some lengthy process to try to talk the user through complicated procedures, and that's just an expensive proposition.

Still connect

You're able to take remote-control capabilities outside of Windows into something that's hardened at the PC level and say, okay, if Windows goes down, I can actually still connect to the PC as if I was local and remote connect to it and control it. It's like what the IP-based KVMs did for the data center. You don’t even have to walk into the data center now. Imagine that on a grand scale for client computing.

Couple a VPN with that. Someone is at a Starbucks, 20 minutes before a presentation, with a simple driver update that went awry, and they can't fix it. With one call to the help desk, they're able to remote to that PC through the firewalls and take care of the issue to get them up and working.

Those are the areas that are the lowest-hanging fruit, combined with amping up security in a completely new paradigm. Imagine an antivirus that works by looking inside of Windows, but doesn't operate in the same resource or collision domain, the execution environment where the virus is actually working, or trying to execute.

There is a whole level of security upgrades that you can do, where you catch the viruses on the space in between the network and actually getting to a compatible execution environment in Windows, where you quarantine it before it even gets to an OS instance. All those areas have huge potential.

You have got to keep that rich user experience of the PC, but change the architecture so that it becomes highly manageable, and flexible as well.

Imagine a world, just cutting very quickly to the utility sense, where I've got my call center of 5,000 seats doing interactive work, but I have got a second core dedicated to a headless virtual machine that’s doing mutual-fund arbitrage apps or something like that in a grid, and feeding that back. You have 5,000 PCs doing that for you now at a very low cost, as opposed to building out whole data-center capacity to take care of that. Those are the kinds of futures where this type of technology can take you as well.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript, or read a full copy. Charter Sponsor: Active Endpoints.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

You may also be interested in:

Thursday, June 3, 2010

Panda Security upgrades cloud-based anti-malware service to include auto updates

As more computing functions continue to exploit cloud delivery models, security issues remain a key concern. But the cloud also continues to be the solution to its own problem.

Extending its cloud-based PC security and anti-malware services, Panda Security today moved to help further alleviate malware fears by expanding its free offerings to include a paid version that automates the updates and upgrades to the service. [Disclosure: Panda Security is a sponsor of BriefingsDirect podcasts.]

Dubbed Panda Cloud Antivirus Pro, the new edition works to protect computer users online and offline by extending the protections in the free product launched last year. The Free Edition is still available and also offers enhanced functions, while the Pro Edition sells for $29.95 and offers automated updates as well as support benefits and other features.

Minimal performance impact

Besides being a popular free cloud security service for home users (about 10 million consumers have downloaded the free version to date), Panda Antivirus pushes another attention-getting message: minimal impact on computing performance. That has helped bring the service into use among SOHO, SMB and even some enterprise users.

Panda Antivirus relies on a proprietary technology for automatically collecting and processing millions of malware samples in the cloud, rather than locally on the consumer’s PC. The technology and method, called Collective Intelligence, can swiftly ID and thwart malware as it appears anywhere on the Internet and then update the clients with the fix.

Because the processing is largely done in cloud-based data centers, the client-side antivirus software uses a mere 15MB of RAM, compared with the 60MB of RAM traditional signature-based antivirus products typically use. It also puts a lot less workload on the processor(s).

Panda Security is pushing the speed superiority of its Collective Intelligence platform in protecting PCs against both known and unknown malware. The company points to recent tests by AV-Test.org that compared leading antivirus programs. In those tests, Panda Cloud Antivirus outperformed the average zero day detection score of competitors by 42.5 percent, said Panda.

New functions and features

The Free Edition of Panda Cloud Antivirus offers some advanced configurations that let users customize certain features, like behavioral blocking and analysis, to meet the requirements of their systems. The Free Edition now also includes a behavioral blocker that protects against new malware and targeted attacks, as well as self-protection of antivirus files and configurations that prevent targeted malware attacks from disabling the software.

The Pro Edition offers all that and more, including automatic upgrades and automatic vaccination of USB and hard drives to eliminate the possibility of transmitting infections while users are offline and/or physically mobile. The Pro Edition also offers dynamic behavioral analysis to add an additional layer of protection by analyzing running processes and blocking any malicious behavior.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.
You may also be interested in: