Wednesday, June 9, 2010

Adopting cloud-calibre security now pays dividends across all IT security concerns

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. View the video. Sponsor: Akamai Technologies.

To view a full video of the panel discussion on cloud-based security, please go to the registration page.

Today's headlines point to increasingly sophisticated, large-scale, and malicious online activity. For some, therefore, the conclusion seems to be that the cloud computing model and vision are not up to the task when it comes to security.

But at the RSA Conference earlier this year, a panel came together to talk about security and cloud computing, to examine the intersection of cloud computing, security, Internet services, and Internet-based security practices to uncover differences between perceptions and reality.

The result is a special sponsored BriefingsDirect podcast and video presentation that takes stock of cloud-focused security -- not just as a risk, but also as an amelioration of risk across all aspects of IT.

Join panelists Chris Hoff, Director of Cloud and Virtualization Solutions at Cisco Systems; Jeremiah Grossman, founder and Chief Technology Officer at WhiteHat Security; and Andy Ellis, Chief Security Architect at Akamai Technologies. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Grossman: An interesting paradigm shift is happening. When you look at website attacks, things haven't changed much. An application that exists in the enterprise is the same application that exists in the cloud. For us, when we are attacking websites and assessing their security, it doesn't really matter what infrastructure it's actually on. We break into it just the same as anything else.

Our job, in the website vulnerability management business, is to find those vulnerabilities ahead of time and help our customers fix those issues before they become larger problems. And if you look at any security report on the Web right now, as far as security goes, it's a web security world.

What's different [with cloud] among our customer base is that they can't run to their comfort zone. They can't run to secure their enterprise with firewalls, intrusion detection systems, and encryption. They have to focus on the application. That's what's really different about cloud, when it comes to web security. You have to focus on the apps, because you have nothing else to go on.

Understand your business

Ellis: The first thing you have to do is to understand your own business. That's often the first mistake that security practitioners may make. They try to apply a common model of security thinking to very unique businesses. Even in one industry, everybody has a slightly different business model.

You have to understand what risks are acceptable to your business. Every business is in the practice of taking risk. That's how you make money. If you don't take any risk, you're not going to make money. So, understand that first. What are the risks that are acceptable to the business, and what are the ones that are unacceptable?

Security often lives in that gray area in between. How do we take risks that are neither fully acceptable nor fully unacceptable, and how do we manage them in a fashion to make them one or the other? If they're not acceptable, we don't take them, and if they are acceptable, we do. Hopefully we find a way to increase our revenue stream by taking those risks.

... There's a huge gap between what people think is secure and what people are doing today in trusting the security in the cloud. When we look at our customer base, over 90 of the top 100 retailers on the Internet are using our cloud-based solutions to accelerate their applications -- and what's more mission-critical than accepting money from your customers?

At Akamai, we see people saying, "The cloud is not secure; we can't trust the cloud." At the same time, business decision makers are evaluating the risk and moving forward in the cloud.

A lot of that is working with their vendors to understand their security practices and comparing that to what they would do themselves. Sometimes, there are shifts. Cloud gives you different capabilities that you might be able to take advantage of, once you're out in the cloud.

Hoff: I like to say that if your security stinks before you move to the cloud, you will be pleasantly unsurprised by change, because it’s not going to get any better -- or probably not even necessarily any worse -- when you move to cloud computing.

What we're learning today is that if we secure our information and applications properly and the infrastructure is able to deal with the dynamism, you will, by default, start to see derivative impacts and benefits on security, because our models will change. At least, our thinking about security models will change.

We in the security industry in some way try to hold the cloud providers to a higher standard. I'm not sure that the consumer, who actually uses these services, sees much of a difference in terms of what they expect, other than it should be up, it should be available, and it should be just as secure as any other Internet-based service they use.

Those cloud providers -- cloud service and cloud computing providers -- are in the business of making sure that they can offer you really robust delivery, and at this time, that's where they focus. We have a challenge to take everything we have done previously, in all these other models, keep doing it, and also deal with the implementation and operational elements that cloud computing brings: elasticity, dynamism, and all these fantastic capabilities.

So we get wrapped around the axle many times in discussions about cloud, where a lot of what we are talking about still needs to be taken care of from an infrastructure and application standpoint.

Ellis: That’s the challenge for people who are moving out to the cloud. That area may be in the purview of the provider. While they may trust the provider, and the provider has done the best they can do in that arena, when they still see risks, they can no longer say, "I'll just put in a firewall. I'll just do this." Now, they have to tackle a really sticky wicket. Do you have a safe application wherever it lives?

That’s where people run into a challenge: "It’s cloud. Let me make the provider responsible." But, at the end of the day, the overall risk structure is still the responsibility of the business -- ultimately, of the data owner, the business that is actually consuming the compute cycles.

It's not yours

Grossman: To piggyback on what Andy said, something has been lost. When you host an application internally, you can build it, you can deploy it, and you can test it. Now, all of a sudden, you've brought in a cloud provider, on somebody else’s infrastructure, and you have to get permission to test it. It’s not yours anymore.

Actually, one of the big things [to attend to] out there is the right to test. You have no right to test these infrastructure systems, and if you do so without permission, it's illegal. So you have lost visibility -- technical visibility into the security of the application.

When the cloud provider changes the app, it changes the risk profile of the application, too, but you don’t know when that happens and you don’t know what the end result is. There's a disconnect between the consumer, the business, and the cloud computing provider or whatever the system is.

Hoff: Cloud computing has become a fantastic forcing function because of what it's done to the business and to IT. We've talked about paradigm shifts and how important this is in the overall advancement of computing.

The reality is that cloud causes people to say, "If the thing that’s most important to me is information and protecting that information, and applications are conduits to it, and the infrastructure allows it to flow, then maybe what I ought to do is take a big picture view of this. I ought to focus on protecting my information, content, and data, which is now even more interestingly a mixture of traditional data, but also voice and video and mixed media applications, social networks, and mashups."

Fantastic interconnectivity

The complexity comes about because, with collaboration, we have enabled all sorts of fantastic interconnectivity between what were previously disparate little mini-islands, with mini-perimeters that we could secure relatively well.

The application security and the information security, tightly coupled with an awareness of the infrastructure that powers them, even though it’s supposed to be abstracted in cloud computing, is really where people have a difficult time grasping the concepts: where we are today, what cloud computing offers them or doesn’t, and what that means for the security models.

Ellis: There's a great initiative going on right now called CloudAudit, which is aimed at helping people think through the security of a process and how controls are shared between two disparate entities, so we can make those decisions at a higher level.

If I am trusting my cloud provider to provide some level of security, I should get some insight into what they're doing, so that I can make my decisions as a business unit. I can see what changes there, which changes I am taking advantage of, and how that fits my entire software development life cycle.

Cloud computing, depending on who you talk to, encompasses almost everything: your kitchen blender, any element that you happen to connect to your enterprise and your home life.



It’s still nascent. People are still changing their mindset to think through that whole architecture, but we're starting to see that more and more -- certainly within our customer base -- as people think, "I'm out in the cloud. How is that different? What can I take advantage of that’s there that wasn’t there in my enterprise? What are the things that aren’t there that I am used to that now I have to shift and adapt to that change?"

Hoff: What's interesting about cloud computing as a derivative set of activities that you might have focused on from a governance perspective, with outsourcing, or any sort of thing where you have essentially given over control of the operation and administration of your assets and applications, is that you can outsource responsibility, but not necessarily accountability. That's something we need to remember.

Think about the notion of risk and risk management. I was on a panel the other day and somebody said, "You can't say risk management, because everyone says risk management." But, that's actually the answer. If I understand what's different and what is the same about cloud computing or the cloud computing implementation I am looking at, then I can make decisions on whether or not that information, that application, that data, ought to be put in the hands of somebody else.

No one-size-fits-all

In some cases, it can't be, for lots of real, valid reasons. There's no one-size-fits-all for cloud. Those issues force people to think about what is the same and what is different in cloud computing.

Previously, you introduced the discussion about the Cloud Security Alliance (CSA). The things we really worked on initially were 15 areas of concern, now consolidated to 13. What's different? What's the same? How do I need to focus on this? How can I map my compliance efforts? How can I assess, even if there are technical elements that are different in cloud computing? How can I assess the operational and cultural impacts?

Awareness of break-ins

Grossman: What I've seen in the last couple of years is that what drives security awareness is break-ins. Whether the bad guys are nation-state-sponsored actors or organized criminals after credit card numbers, breaches happen. They're happening in record numbers, and the attackers are stealing everything they can get their hands on.

Fortunately or unfortunately, from a cloud computing standpoint, all the attacks are largely the same, whether one application is here or in the cloud. You attack it directly, and all the methodologies to attack a website are the same. You have things like cross-site scripting, SQL injection, cross-site request forgery. They are all the same. That’s one way to access the data that you are after.
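Grossman's point, that the attack methodologies are identical wherever the application runs, can be made concrete with the SQL injection class he mentions. Below is a minimal, illustrative Python sketch, not from the panel; the table, data, and payload are hypothetical. The same concatenation flaw is exploitable whether the database lives in the enterprise or in the cloud, and the parameterized-query fix is the same in both places.

```python
import sqlite3

# Hypothetical user table for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_vulnerable(name):
    # Concatenating untrusted input into SQL: the classic injection flaw.
    # A payload like "' OR '1'='1" makes the WHERE clause always true.
    query = "SELECT name FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the input as data, not SQL,
    # so the same payload matches nothing.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # leaks every row: [('alice',)]
print(find_user_safe(payload))        # returns []
```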

The other way is to get on the other half of web security. That’s the browser. You infect a website, the user runs into it, and they get infected. You email them a link. They click something. You infect them that way. Once you get on to the host machine, the client side of the connection, then you can leverage those credentials and then get into the cloud, the back-end way, the right way, and no one sees you.

Breaches make headlines. Headlines make people nervous, whether it's businesses or consumers. When a business outsources things to the cloud or a SaaS provider, they still have this nervous reaction about security, because their customers have this nervous reaction about security. So they start asking about security. "What are you doing to protect my data?"

All of a sudden, if that cloud provider, that vendor, takes security seriously and can prove it, demonstrate it, and get the market to accept it, security becomes a differentiating factor. It becomes an enabler of the top line, rather than a cost on the bottom line.

Ellis: I like to look at security as being a business-enabler in three areas. The obvious one, we all think, is risk reduction. How can I reduce my risk with cloud-based security services? Are there ways which I can get out there and do things safer? I'm not necessarily going to change anything else about my business. That's great and that's our normal model.

Security can also be a revenue-enabler, and it can also be a protection of revenue. Web application firewalls are a great example of fraud-mitigation services. There are a lot of services available through the cloud that can be used to protect your brand and your revenue against loss, but also help you grow revenue. As you just said, it's all about trust. People go back to brands that they trust, and security can be a key component of that.
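As a rough illustration of the cloud-delivered protections Ellis describes, here is a deliberately simplified, hypothetical sketch of the kind of signature check a web application firewall performs on incoming requests. Real WAFs use far larger rule sets, normalization, and anomaly scoring; these three patterns are only stand-ins.

```python
import re

# Hypothetical, simplified attack signatures for illustration only.
SIGNATURES = [
    re.compile(r"(?i)\bunion\b.*\bselect\b"),    # SQL injection probe
    re.compile(r"(?i)<script\b"),                # reflected XSS attempt
    re.compile(r"(?i)\bor\b\s+'1'\s*=\s*'1"),    # tautology-based injection
]

def inspect_request(query_string: str) -> str:
    """Return 'block' if any signature matches the request, else 'allow'."""
    for sig in SIGNATURES:
        if sig.search(query_string):
            return "block"
    return "allow"

print(inspect_request("q=cloud+security"))                # allow
print(inspect_request("id=1' OR '1'='1"))                 # block
print(inspect_request("name=<script>alert(1)</script>"))  # block
```

Because the filter runs in the provider's cloud, in front of every customer, a rule added for one customer protects the entire customer base, which is the commonality benefit Ellis returns to below.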

It doesn't always have to be visible to the end user, but as you noted with the car industry, people build the perception around incidents. If you can be incident-free compared to your competition, that's a huge differentiator, as you go down into more and deeper activities that require deep trust with your end users.

A lot of what we try to do is build a wrapper in a sandbox around each customer to give them the same, consistent level of security. A big challenge in the enterprise model is that for every application that you stand up, you have to build that security stack from the ground up.

One advantage cloud does give you is that, if you are working with somebody who has thought about this, you can take advantage of practices that they have already instituted. So, you get some level of commonality. Then, if a customer sees something and says, "You should improve this," that improvement can benefit the entire customer base. Cloud has a benefit there to match some of the weaknesses it may have elsewhere.

Historically, in the enterprise model, we think about data in terms of being tied to a given application. That’s not really accurate. The data still moves around inside an enterprise. As Jeremiah noted, the weak point is often the browser. Compromise the client, and you get access to the data.

As people move to cloud, they start to change their risk thinking. Now, they think about the data and everywhere it lives and that gives them an opportunity to change their own risk model and think about how they're protecting the data and not just a specific application it used to live in.

As we noted earlier, a large fraction of the Internet retailers are using cloud for their most mission-critical things, their financial data, coming through every time somebody buys something.

If you are willing to trust that level of data to the cloud, yet you have a knee-jerk reaction about an internal web conference among 12 people -- a presentation about something that frankly most people aren’t going to care about -- and you say, "That’s too sensitive to be in the cloud," while your revenue stream is already in the cloud, it shows that we sometimes think parochially about security.

Grossman: What's interesting about security spending versus infrastructure spending, or just general IT spending, is that security seems diametrically opposed to the business. We spend the most money on applications and our data, but the least of our security spend goes there. We spend the least on infrastructure relative to applications, but that's where most of our security dollars go. So the two seem diametrically opposed.

What cloud computing does, and the reason for this talk, is flatten the world. It abstracts away the infrastructure below and forces us to realign with the business. That's what cloud will bring, in a good way. It's just that you have to do it commensurate with the business.

To view a full video of the panel discussion on cloud-based security, please go to the registration page.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. View the video. Sponsor: Akamai Technologies.

Friday, June 4, 2010

Analysts probe future of client architectures as HTML 5 and client virtualization advances loom over desktops

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript, or read a full copy. Charter Sponsor: Active Endpoints.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

The latest BriefingsDirect Analyst Insights Edition, Vol. 52, focuses on client-side architectures and the prospect of heightened disruption in the PC and device software arenas.

Such trends as cloud computing, service oriented architecture (SOA), social media, software as a service (SaaS), and virtualization are combining and overlapping to upset the client landscape. If more of what users are doing with their clients involves services, then shouldn't the client be more services-ready? Should we expect one client to do it all very well, or do we need to think more about specialized clients that might be configured on the fly?

Today's clients are more tied to the past, where one size fits all, than to the future. Most clients consist of a handful of entrenched PC platforms, a handful of established web browsers, and a handful of PC-like smartphones. But what has become popular on the server, virtualization, has yet to be taken to its full potential on these edge devices. New types of dynamic, task-specific clients might emerge. We'll take a look at what they might look like.

Also, just as Windows 7 from Microsoft is quickly entering the global PC market, cloud providers are in an increasingly strong position to potentially favor certain client types or data- and configuration-synchronization approaches. Will the client lead the cloud, or vice versa? We'll talk about that too.

Either way, the new emphasis seems to be on full-media, webby activities, where standards and technologies are vying anew for some sort of a de-facto dominance across both rich applications as well as media presentation capabilities.

We look at the future of the client with a panel of analysts and guests: Chad Jones, Vice President for Product Management at Neocleus; Michael Rowley, CTO of Active Endpoints; Jim Kobielus, Senior Analyst at Forrester Research; Michael Dortch, Director of Research at Focus; JP Morgenthal, Chief Architect, Merlin International, and Dave Linthicum, CTO, Bick Group. The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Jones: In the client market, it's time for disruption. Looking at the general PC architectures, we have seen that since pretty much the inception of the computer, you really still have one operating system (OS) that's bound to one machine, and that machine, according to a number of analysts, is less than 10 percent utilized.

Normally, that's because you can't share that resource and really take advantage of everything that modern hardware can offer you. Dual cores and all the gigabytes of RAM that are available on the client are all great things, but if you can't have an architecture that can take advantage of that in a big way, then you get more of the same.

On the client side, virtualization is moving into all forms of computing. We've seen that with applications, storage, networks, and certainly the revolution that happened with VMware and the hypervisors on the server side. But the benefit of server virtualization was not only the ability to run multiple OSs side-by-side and consolidate servers, which is great but not as relevant to the client side. It’s really the ability to manage the machine at the machine level and to take OSs and move them around as individual blocks of functionality, as workloads.

The same thing becomes possible for the client when you start virtualizing that endpoint, stop treating management of the OS as management of the PC, and manage that PC at the root level.

Imagine that you have your own personal Windows OS, that maybe you have signed up for Microsoft’s new Intune service to manage that from the cloud standpoint. Then, you have another Google OS that comes down with applications that are specific from that Google service, and that desktop is running in parallel with Windows, because it’s fully controlled from a cloud provider like Google. Something like Chrome OS is truly a cloud-based OS, where everything is supposed to be stored up in the cloud.

Those kinds of services, in turn, can converge into the PC, and virtualization can take that to the next level on the endpoint, so that those two things don’t overlap with each other, and a level of service, which is important for the cloud, certainly for service level agreements (SLAs), can truly be attained. There will be a lot of flexibility there.

Virtualization is a key enabler into that, and is going to open up PC architectures to a whole brave new world of management and security. And, at a platform level, there will be things that we're not even seeing yet, things that developers can think of, because they have options to now run applications and agents and not be bound to just Windows itself. I think it’s going to be very interesting.

With virtualization, you have a whole new area where cloud providers can tie in at the PC level. They'll be able to bundle desktop services and deliver them in a number of unique ways.



Linthicum: Cloud providers will eventually get into desktop virtualization. It just seems to be the logical conclusion of where we're heading right now.

In other words, we're providing all these very heavy-duty IT services, such as databases, OSs, and application servers, on demand. It just makes sense that eventually we're going to provide complete desktop virtualization offerings that pop out of the cloud.

The beauty of that is that a small business, instead of having to maintain an IT staff, will just have to maintain a few clients. They log into a cloud account and the virtualized desktops come down.

It provides disaster recovery based on the architecture. It provides great scalability, because basically you're paying for each desktop instance and you're not paying for more or less than you need. So, you're not buying a data center or an inventory of computers and having to administer the users.

That said, it has a lot more cooking to occur, before we actually get the public clouds on that bandwagon. Over the next few years, it's primarily going to be an enterprise concept and it's going to be growing, but eventually it's going to reach the cloud.

There are going to be larger companies. Google and Microsoft are going to jump on this. Microsoft is a prime candidate for making this thing work, as long as they can provide something as a service, which is going to have the price point that the small-to-medium-sized businesses (SMBs) are going to accept, because they are the early adopters.

Browser-based client

Rowley: When we talk about the client, we're mostly thinking about the web-browser-based client, as opposed to the client as an entire virtualized OS. When you're using a business process management system (BPMS) and you involve people, at some point somebody is going to need to pull work off of a work list, work on it, eventually complete it, and go get the next piece of work.

That’s done in a web-based environment, which isn’t particularly unusual. It's a fairly rich environment, which is the direction a lot of applications are going: web-based applications are moving to a rich Internet application (RIA) style.

We have tried to take it even a step further and have taken advantage of the fact that, by moving to some of these rich-client infrastructures, you can put not just part of the presentation tier of an application on the client: you can run the entire presentation tier in the web browser. Its communication with the server, instead of being traditional HTML, uses more of a web-service approach, going directly into the services tier on the server. That server can be in a private cloud or, potentially, a public cloud.

What's interesting is that not having to install anything on the client is an advantage, as with any of the approaches we're discussing, but so is not having to maintain, on the server, a presentation tier that's separate from your services tier.

You go directly from your browser client into the services tier on the server, and it just decreases the overall complexity of the entire system. That's possible, because we base it on Ajax, with JavaScript that uses a library that's becoming a de-facto standard called jQuery. jQuery has the power to communicate with the server and then do all of the presentation logic locally.
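In Rowley's architecture, the browser owns the whole presentation tier and calls the services tier directly, which implies the server returns only data, never markup. As a rough illustration (in Python rather than the jQuery/Ajax stack he names, and with a made-up work list), the services-tier side of such a call might look like this:

```python
import json

# Hypothetical work-list data for illustration; in Rowley's stack, a
# jQuery Ajax call would fetch this JSON and render it in the browser.
TASKS = [
    {"id": 1, "title": "Approve invoice", "state": "open"},
    {"id": 2, "title": "Review contract", "state": "done"},
]

def get_open_tasks() -> str:
    """What a work-list request receives: plain JSON, no HTML and no
    presentation logic, because rendering lives entirely in the client."""
    open_tasks = [t for t in TASKS if t["state"] == "open"]
    return json.dumps(open_tasks)

print(get_open_tasks())
# [{"id": 1, "title": "Approve invoice", "state": "open"}]
```

The point of the sketch is the absence of a server-side presentation tier: the endpoint returns the same data whether the caller is a desktop browser or a phone.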

... I believe that Apple, growing dominant in the client space with both the iPhone and now the iPad, and its lack of support for either Silverlight or Flash, will be a push toward the standards space, HTML5 with JavaScript, as the way of doing client-based rich Internet apps. There will be more of a coalescing around these technologies, so that potentially all of your apps can come through the one browser-based client.

Dortch: ... There are going to continue to be proprietary approaches to solving these problems. As the Buddhists like to say, many paths, one mountain. That's always going to be true. But, we've got to keep our eyes on the ultimate goal here, and that is, how do you deliver the most compelling services to the largest number of users with the most efficient use of your development resources?

Until the debate shifts more in that direction and stops being so, I want to call it, religious about bits and bytes and speeds and feeds, progress is going to be hampered. But, there's good news in HTML5, Android, Chrome, and those things. At the end of the day, there's going to be a lot of choices to be made.

The real choices to be made right now are centered on what path developers should take, so that, as the technologies evolve, they have to do as little ripping and replacing as possible. This is especially a challenge for larger companies running critical proprietary applications.

Morgenthal: I like to watch patterns. Look at where more applications have been created in the past three years than in any other way -- on what platform and through what delivery mechanism. Have they been web apps, or have they been iPhone/Android apps?

You've got to admit that the web is a great vehicle for pure dynamic content. But, at the end of the day, when there is a static portion of at least the framework and the way that the information is presented, nothing beats that client that’s already there going out and getting a small subset of information, bringing it back, and displaying it.

I see us moving back to that model. The web is great for a fully connected high-bandwidth environment.

I've been following a lot about economics, especially U.S. economics, how the economy is going, and how it impacts everything. I had a great conversation with somebody who is in finance and investing, and we joked about how people are claiming they are getting evicted out of their homes. Their houses and homes are being foreclosed on. They can barely afford to eat. But, everybody in the family has an iPhone with a data plan.

Look what necessity has become, at least in the U.S., and I know it's probably similar in Korea, Japan, and parts of Europe. Your medium for delivery of content and information is that device in the palm that's got about a 300x200 display.

On the desktop, you have Adobe doing the same thing with AIR and its cross-platform runtime, and it's a lot more interactive than some of the web stuff. JavaScript is great, but at some point you do get degradation in functionality. At some point, you have to deliver too much data to make that really effective. That all goes away when you have a consistent user interface (UI) that is downloadable and updatable automatically.

I've got a Droid now. Every day I see that little icon in the corner: I have updates for you. I've updated my Seesmic app three times, and my USA Today app. It tells me when to update. It automatically updates my client. It's a very neutral type of platform, and it works very, very well as the main source for me to deliver content.

Virtualization is on many fronts, but I think what we are seeing on the phone explosion is a very good point. I get most of my information through my phone.



Now, sometimes, is that medium too small to get something more? Yeah. So where do I go? I go to my secondary source, which is my laptop. I use my phone as my usual connectivity medium to get my Internet.

So, while we have tremendous broadband capability growing around the world, we're living in a wireless world and wireless is becoming the common denominator for a delivery vehicle. It's limiting and controlling what we can get down to the end user in the client format.

Getting deconstructed

Kobielus: In fact, it's the whole notion of a PC being the paradigm here that's getting deconstructed. It has been deconstructed up the yin yang. If you look at what a PC is, and we often think about a desktop, it's actually simply a decomposition of services: rendering, interaction, connection and access, notifications, app execution, data processing, identity and authentication. These are all services that can and should be virtualized and abstracted to the cloud, private or public, because the clients themselves, the edges, are a losing battle, guys.

Try to pick winners here. This year, iPads are hot. Next year, it's something else. The year beyond, it's something else. What's going to happen -- and we already know it's happening -- is that everything is getting hybridized like crazy.

All these different client or edge approaches are just going to continue to blur into each other. The important thing is that the PC becomes your personal cloud. It's all of these services that are available to you. The common denominator here for you as a user is that somehow your identity is abstracted across all the disparate services that you have access to.

All of these services are aware that you are Dave Linthicum, coming in through your iPad, or you are Dave Linthicum coming in through a standard laptop web browser, and so forth. Your identity and your content are all there and all secure, in a sense bringing process into the picture.

You don't normally think of a process as being a service that's specific to a client, but your hook into a process, any process, is your ability to log in. Then, have your credentials accepted and all of your privileges, permissions, and entitlements automatically provisioned to you.

Identity, in many ways, is the hook into this vast, personal cloud PC. That’s what’s happening.

Rowley: A lot of applications will really mix up the presentation of the work to be done by the people who are using the application, with the underlying business process that they are enabling.

If you can somehow tease those apart and get it so that the business process itself is represented, using something like a business process model, then have the work done by the person or people divided into a specific task that they are intended to do, you can have the task, at different times, be hosted by different kinds of clients.

Different rendering

Or, depending on the person, whether they're using a smartphone or a full PC, they might get a different rendering of the task, without changing the application from the perspective of the business person who is trying to understand what's going on. Where are we in this process? What has happened? What has to happen yet? Etc.

Then, for the rendering itself, it's really useful to have that be as dynamic as possible and not have it be based on downloading an application, whether it's an iPhone app or a PC app that needs to be updated, and you get a little sign that says you need to update this app or the other.

When you're using something like HTML5, you can get a lot of the functionality of some of these apps that currently you have to download, including, as somebody brought up before, the question of what happens when you aren't connected or are only partially connected.

Up until now, web-based apps very much needed to be connected in order to do anything. HTML5 is going to include some capabilities around much more functionality that's available, even when you're disconnected. That will take the technology of a web-based client to even more circumstances, where you would currently need to download one.

It's a little bit of a change in thinking for some people to separate out those two concepts, the process from the UI for the individual task. But, once you do, you get a lot of value for it.

Jones: I can see that as part of it as well. When you're able to start taking abstraction of management and security from outside of those platforms and be able to treat that platform as a service, those things become much greater possibilities.

Percolate and cook

I believe one of the gentlemen earlier commented that a lot of it needs some time to percolate and cook, and that's absolutely the case. But, I see that within the next 10 years, the platform itself becomes a service, in which you can possibly choose which one you want. It's delivered down from the cloud to you at a basic level.

That’s what you operate on, and then all of those other services come layered in on top of that as well, whether that’s partially through a concoction of virtualization and different OS platforms, coupled with cloud-based profiles, data access, applications and those things. That’s really the future that we're going to see here in the next 15 years or so.

... For the near-term, as the client space begins to shake out over the next couple of years, the immediate benefits are first around being able to take our deployment of at least the Windows platform from a current state of either having an image done at Dell or, more often the case, deploying the OS whenever I do a hardware refresh every three to four years, to a point where you can actually get a PC and just put it onto the network.

You take out all the complexity of what the deployment questions are and the installation that can cause so many different issues, combined with things like normalizing device driver models and those types of things, so that I can get that image and that computer out to the corporate standard very, very quickly, even if it's out in the middle of Timbuktu. That's one of the immediate benefits.

Plus, start looking at help desk and the whole concept of desktop visits. If Windows dies today, all of your agents and recovery and those types of things die with it. That means I've got to send back the PC or go through some lengthy process to try to talk the user through complicated procedures, and that's just an expensive proposition.

Still connect

You're able to take remote-control capabilities outside of Windows into something that's hardened at the PC level and say, okay, if Windows goes down, I can actually still connect to the PC as if I was local and remote connect to it and control it. It's like what the IP-based KVMs did for the data center. You don’t even have to walk into the data center now. Imagine that on a grand scale for client computing.

Couple in a VPN with that. Someone is at a Starbucks, 20 minutes before a presentation, with a simple driver update that went awry and they can't fix it. With one call to the help desk, they're able to remote to that PC through the firewalls and take care of that issue to get them up and working.

Those are the areas that are the lowest-hanging fruit, combined with amping up security in a completely new paradigm. Imagine an antivirus that works by looking inside of Windows, but doesn't operate in the same resource or collision domain -- the execution environment where the virus is actually working, or trying to execute.

There is a whole level of security upgrades that you can do, where you catch the viruses in the space in between the network and actually getting to a compatible execution environment in Windows, where you quarantine it before it even gets to an OS instance. All those areas have huge potential.

You have got to keep that rich user experience of the PC, but yet change the architecture so that it becomes highly manageable, but also flexible as well.

Imagine a world, just cutting very quickly in the utility sense, where I've got my call center of 5,000 seats and I'm doing an interactive process, but I have got a second core dedicated to a headless virtual machine that’s doing mutual fund arbitrage apps or something like that in a grid, and feeding that back. You're having 5,000 PCs doing that for you now at a very low cost rate, as opposed to building a whole data center capacity to take care of that. Those are kind of the futures where this type of technology can take you as well.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript, or read a full copy. Charter Sponsor: Active Endpoints.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

You may also be interested in:

Thursday, June 3, 2010

Panda Security upgrades cloud-based anti-malware service to include auto updates

As more computing functions continue to exploit cloud delivery models, security issues remain a key concern. But the cloud also continues to be the solution to its own problem.

Extending its cloud-based PC security and anti-malware services, Panda Security today moved to help further alleviate malware fears by expanding its free offerings to include a paid version that automates the updates and upgrades to the service. [Disclosure: Panda Security is a sponsor of BriefingsDirect podcasts.]

Dubbed Panda Cloud Antivirus Pro, the new edition works to protect computer users online and offline by extending the protections in the free product launched last year. The Free Edition is still available and also offers enhanced functions, while the Pro Edition sells for $29.95 and offers automated updates as well as support benefits and other features.

Minimal performance impact

Besides being a popular free cloud security service for home users (about 10 million consumers have downloaded the free version to date), Panda Antivirus pushes another attention-getting message: minimal impact on computing performance. That has helped bring the service into use among SOHO, SMB and even some enterprise users.

Panda Antivirus relies on a proprietary technology for automatically collecting and processing millions of malware samples in the cloud, rather than locally on the consumer’s PC. The technology and method, called Collective Intelligence, can swiftly ID and thwart malware as it appears anywhere on the Internet and then update the clients with the fix.

Because the processing is largely done via cloud-based data centers, the client-borne antivirus software uses a mere 15MB of RAM, compared with the 60MB of RAM traditional signature-based antivirus products typically use. It also puts a lot less workload on the processor(s).
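The cloud-lookup pattern described above can be sketched roughly as follows. This is an illustrative assumption of how such a client might work, not Panda's actual API: the verdict service, cache shape, and function names are all invented, and the cloud query is simulated with an in-memory dict where a real client would call the vendor's service over the network.

```python
import hashlib

# Stand-in for the cloud-side sample database (an assumption for illustration).
CLOUD_VERDICTS = {}

def cloud_lookup(sha256: str) -> str:
    """Simulated cloud query; a real client would call the vendor's service."""
    return CLOUD_VERDICTS.get(sha256, "unknown")

class LightweightScanner:
    """Keeps only a tiny local verdict cache instead of a full signature set,
    which is what lets the client stay small in memory."""

    def __init__(self):
        self.cache = {}

    def scan(self, payload: bytes) -> str:
        digest = hashlib.sha256(payload).hexdigest()
        if digest not in self.cache:  # only unseen files hit the cloud
            self.cache[digest] = cloud_lookup(digest)
        return self.cache[digest]
```

The design choice this illustrates is that the heavy state (millions of samples) lives server-side, while the client carries only hashes and cached verdicts.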

Panda Security is pushing the speed superiority of its Collective Intelligence platform in protecting PCs against both known and unknown malware. The company points to recent tests by AV-Test.org that compared leading antivirus programs. In those tests, Panda Cloud Antivirus outperformed the average zero day detection score of competitors by 42.5 percent, said Panda.

New functions and features

The Free Edition of Panda Cloud Antivirus offers some advanced configurations that let users customize certain features, like behavioral blocking and analysis, to meet the requirements of their systems. The Free Edition now also includes a behavioral blocker that protects against new malware and targeted attacks, as well as self-protection of antivirus files and configurations that prevent targeted malware attacks from disabling the software.

The Pro Edition offers all that and more, including automatic upgrades and automatic vaccination of USB and hard drives to eliminate the possibility of transmitting infections while users are offline and/or physically mobile. The Pro Edition also offers dynamic behavioral analysis to add an additional layer of protection by analyzing running processes and blocking any malicious behavior.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.
You may also be interested in:

Wednesday, June 2, 2010

WSO2 tailors open-source middleware platform for cloud-based applications, deployment models

WSO2 today announced the debut of WSO2 Stratos, an open-source middleware platform for cloud-based enterprise applications.

The on-demand or on-premises Stratos platform fosters building and deploying applications and services specifically for platform as a service (PaaS)-type deployments. Stratos goes beyond plain vanilla PaaS, however, by automating provisioning of enterprise servers, including the portal, enterprise service bus (ESB), and application servers. [Disclosure: WSO2 is a sponsor of BriefingsDirect podcasts.]

The announcement marks WSO2's entry into the emerging market for enterprise PaaS, as well as enabling hybrid computing models. Using Stratos, applications can be created or migrated on-premises, to a private cloud, or to the public cloud for potentially unprecedented deployment flexibility.

I like to call this flexibility fungibility of applications and services, meaning the apps can be moved among various cloud models, providers, and platforms with minimal rework and configuration headaches. We're a ways off from cloud application fungibility, but users should be demanding it, and therefore resisting cloud lock-in. Open source is an important part of cloud fungibility, but more standards are needed.

Through its integration layer, WSO2 Stratos installs onto any existing cloud infrastructure -- Eucalyptus, Ubuntu Enterprise Cloud, Amazon Elastic Compute Cloud (EC2), and VMware ESX, to name a few -- meaning enterprises can let the market work for them and resist being locked into a specific infrastructure provider or platform.

“At a time when IT developers can create a new application in one month, taking months to provision and deploy servers and systems no longer makes strategic or economic sense,” said Dr. Sanjiva Weerawarana, WSO2 founder and CEO. “WSO2 Stratos provides a complete middleware platform for delivering robust applications on private clouds, as well as migrating between and integrating with public clouds and on-premise systems—and there’s never cloud lock-in.”

Once Stratos is installed, users get a Web-based management portal where they can configure, manage and govern independent, but consistent, servers for each department, or for each stage of a system’s lifecycle. Each server is completely virtual, scaling up automatically to handle the required load, and metered and billed according to use.

At the heart of the WSO2 Stratos PaaS is a Cloud Manager, which manages all other services and offers a portal where users can log in and register their domain (tenant), manage their account, and configure the middleware services that are available for their users. The Cloud Manager offers point-and-click simplicity for configuring and provisioning middleware services, so developers can get started immediately and focus on the business logic.

WSO2 Carbon

Stratos is built on top of and extends WSO2 Carbon, the company's componentized middleware platform. WSO2 last month announced Carbon 3.0, which lets developers point-and-click to tailor middleware functionality into a customized solution.

The new WS-Discovery support automates the configuration of a project spanning multiple Web service endpoints. WSO2 Carbon 3.0 also features enhanced integration of the WSO2 Governance Registry across the platform, facilitating governance and monitoring across large clustered deployments and cloud implementations.

The Carbon 3.0 Component Manager features a checkbox user interface (UI) where IT professionals start with a lean core and can click to add the functionality they want to their WSO2 middleware products—choosing from among more than 150 features.

WS-Discovery support allows Carbon 3.0 to automatically discover nearby service endpoints, freeing IT professionals from much of the rewiring work typically required when deploying a new set of services or moving existing ones. This facilitates the ability to move deployments between different servers, private clouds, or public clouds.

Enhanced Governance Registry integration across the entire Carbon 3.0 middleware platform increases the ability to govern and monitor large-scale deployments, including clustered servers and cloud implementations.

Availability and support

WSO2 Stratos is available today as an early adopter release for private clouds, as a demonstration version on public clouds, and as an early release of the downloadable open source software.

WSO2 also today launched the WSO2 Cloud Partnership Initiative around WSO2 Stratos. WSO2 is partnering with systems integrators (SIs) and infrastructure-as-a-service (IaaS) providers to streamline the development and deployment of applications and services for enterprise clouds.

WSO2 is providing a "fast-track path" for SIs to use WSO2 Stratos for cloud-enabling their customers’ existing applications and services, building and delivering new SaaS offerings, and creating new vertical PaaS/SaaS templates to support industry-specific applications and services, said WSO2. SIs that join the initiative gain complimentary training, including set-up of a pilot private cloud based on WSO2 Stratos; revenue sharing on initial WSO2 Stratos-based deployments; and a commission on recurring production support revenue.

SIs Cognizant Technology Solutions and WebScience, one of Italy’s leading providers of technology and consulting services, have already joined, said WSO2.

WSO2 is also establishing partnerships with leading IaaS providers including Amazon Web Services, Canonical/Ubuntu, and VMware.

WSO2 Carbon 3.0 middleware products are available as software downloads and as WSO2 Cloud Virtual Machines running on the Amazon Elastic Compute Cloud (EC2), Linux Kernel Virtual Machine (KVM), or VMware ESX. As fully open-source solutions released under the Apache License 2.0, WSO2 SOA middleware products do not carry any licensing fees.

In conjunction with WSO2 Stratos, WSO2 is launching its new CloudStart Program. The service, priced at $17,500, provides an engineering team onsite for a week to deploy WSO2 Stratos on either Ubuntu Enterprise Cloud or the customer’s existing cloud infrastructure. Working hand-in-hand with the customer development team, the WSO2 experts build a lightweight implementation or proof-of-concept. They then follow up the onsite engagement with offsite development support.

You may also be interested in:

What can businesses learn about predictive analytics from American Idol?

This guest post comes courtesy of Rick Kawamura, Director of Marketing at Kapow Technologies.

By Rick Kawamura

Social media data continues to grow at astronomical rates. Last year Twitter grew 1,444 percent with over 50 million tweets sent each day, and Facebook now has over 400 million active users. Every minute, 600 new blog posts are published, 34,000 tweets are sent, and 240,000 pieces of content are shared on Facebook.

The numbers are absolutely astounding. But is social media data credible? And can tangible business intelligence (BI) be extracted from it? [Disclosure: Kapow Technologies is a sponsor of BriefingsDirect podcasts.]

Reality Buzz, a new social media analysis project powered by web data services technology, was created to answer this very question by examining whether real-time analysis of social media conversations can predict the outcome of popular reality television shows like American Idol and Dancing with the Stars. After Reality Buzz collected tens of thousands of tweets, comments, and discussions about contestants on both programs each week and applied sentiment analysis to the data, there was clear, data-driven insight for predicting which contestants would be eliminated.

Stepping outside the example of "reality" TV, social media sentiment can be a powerful source of data that arms organizations with real-time intelligence to make more strategic business decisions. Based on experience with Reality Buzz, here are five tips for extracting real value from social media data:

Data trumps conventional wisdom

While Malcolm Gladwell, author of Blink: The Power of Thinking Without Thinking, would say otherwise, data-driven business decisions definitely outperform guesswork. Week after week on Dancing with the Stars, the infamous Kate Gosselin accounted for up to 40 percent of all conversations in social media. Unfortunately for Kate, 95 percent of those comments were negative.

Conventional wisdom said that she should pack her bags. Yet the data showed that, despite all the negative conversations, she still had a larger share of positive comments than several other contestants, meaning she was far less likely to be eliminated. Because viewers vote for contestants they’d like to keep on the show, there is a strong correlation to positive sentiment. It wasn’t until the fourth week that Kate’s volume of positive comments died down and she was voted off.

Product managers deal with this dilemma all the time. Tasked with determining the next set of product features to drive greater profitability, they have to manage the CEO’s gut feel while also satisfying the needs of those who have to sell it, both of whom want it better, cheaper and faster. But “better, cheaper, faster” isn’t a great long-term strategy. A great product manager would look to the data to find unmet needs and untapped markets, and social media is a great place to find these hidden nuggets of intelligence.

Timing is critical

Any data over 24 hours old is pretty much worthless for predicting who will be eliminated from a reality TV show. The same holds true in the business world, where it’s imperative for the data to be as close to an event as possible, as this data best reflects current sentiment.

When launching a new product, for example, companies need to consider sentiment immediately prior to and after the launch. The same applies to a marketing campaign. Say Toyota releases a full-page ad in The Wall Street Journal only to get a report on sentiment a few weeks later. Worthless. Companies need to know their customers’ sentiment just before they publish the ad to create the most relevant message, and immediately following to measure its resonance with their audience. Weeks-old data may prove costly, resulting in more damage to the brand and revenue by further demonstrating lack of understanding and responsiveness to frustrated customers.
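One simple way to operationalize this recency rule is an exponential decay weight, so a mention's influence halves every fixed interval. The sketch below is illustrative: the 12-hour half-life is an assumed parameter, not a figure from the Reality Buzz project, and would need tuning for any real use.

```python
def recency_weight(age_hours: float, half_life_hours: float = 12.0) -> float:
    """Weight halves every `half_life_hours`; a 24-hour-old mention keeps ~25%."""
    return 0.5 ** (age_hours / half_life_hours)

def weighted_sentiment(mentions):
    """mentions: list of (sentiment_score, age_hours) pairs.
    Returns the recency-weighted mean sentiment, so fresh data dominates."""
    if not mentions:
        return 0.0
    total = sum(recency_weight(age) for _, age in mentions)
    return sum(score * recency_weight(age) for score, age in mentions) / total
```

With this weighting, a one-hour-old positive mention easily outweighs a two-day-old negative one, matching the point that data closest to the event matters most.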

Don’t be blind to the noise factor

It’s easy to understand trends, changes in momentum, volume of traffic, and the ratio of positive to negative sentiment. However, there is a lot of noise that can easily skew the data, especially with large, very public shows like American Idol. The bigger the show, product, etc., the more noise. This is most prominent on Twitter, which very often represents the largest source and volume of data. Despite the noise, though, there is valuable information that shouldn’t be ignored. Interestingly, most of the noise resides in neutral sentiment, not positive or negative. These are comments, articles, and reviews about a brand that don’t provide any real opinion.

This is why it’s important to understand how to filter the data to maintain its quality and relevance.

Not all social media sentiment is created equal

Companies need to clearly define their goals before analyzing social media data. There are differing degrees of sentiment, and not all translate equally well. Most sentiment analysis tools begin by separating data into positive and negative groups. Yet even within each fan group there are varying degrees of support for contestants. In trying to determine the number of votes for a contestant, consider this data: “I just voted 100 times for Casey” vs. “My top 3 are Lee, Michael and Casey” vs. retweeting a link to a video or article which mentions Casey.

The reality is that not all data is needed or equal in weight. For American Idol, votes are cast for the person you want to keep on the show, so negative sentiment has little correlation to who will be voted off. This requires factoring out negative comments from total sentiment to get the most accurate prediction. Companies also need to consider how to weigh one tweet versus a Facebook comment versus a blog post. Each is just one piece of data, but does each one count equally?
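The weighting and filtering logic just described can be sketched as follows. The per-source weights are invented for illustration (real values would have to be calibrated against actual voting outcomes), and the function names are assumptions, not part of any published tool. Neutral and negative mentions are dropped, mirroring the factoring-out of negative sentiment described above.

```python
# Illustrative source weights: assumptions, not calibrated values.
SOURCE_WEIGHTS = {"tweet": 1.0, "facebook": 2.0, "blog": 5.0}

def positive_share(mentions):
    """mentions: iterable of (contestant, source, sentiment) triples,
    with sentiment in {'pos', 'neg', 'neutral'}.
    Returns each contestant's share of weighted positive mentions."""
    totals = {}
    for contestant, source, sentiment in mentions:
        if sentiment != "pos":  # drop negative and neutral noise
            continue
        weight = SOURCE_WEIGHTS.get(source, 1.0)
        totals[contestant] = totals.get(contestant, 0.0) + weight
    grand = sum(totals.values())
    return {c: w / grand for c, w in totals.items()} if grand else {}

def predicted_elimination(mentions):
    """Lowest positive share is predicted to be voted off."""
    shares = positive_share(mentions)
    return min(shares, key=shares.get) if shares else None
```

This mirrors the Kate Gosselin anecdote: a contestant can draw heavy negative chatter yet survive, as long as her weighted positive share stays above the field's minimum.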

Don’t look at data in a vacuum

Having knowledge of events and circumstances is critical to understanding and extracting intelligence from social media data. In the case of Reality Buzz, it was helpful to watch the performance shows for added context. This process is key for companies to raise other hypotheses to further investigate after they’ve seen the output.

Similarly, some manual data review is also essential to ensure quality and consistency. For example, when using an automated sentiment analysis tool, companies can weigh keywords differently. In addition, automated tools are not yet capable of distinguishing sentiment as functional, emotional, or behavioral. So in monitoring social media data, there can be a huge difference between “I like my new Canon camera” and “I just told my friend to buy the new Canon camera.” While both are positive sentiments, the latter should be weighed much more heavily.
The growing mass of social media data is definitely a treasure trove of insight to extract intelligence, whether predicting reality show winners or moving your business forward. When done correctly, collecting and analyzing social media sentiment can be a pain-free, powerful tool for real-time feedback, predictive analytics and getting the competitive edge you need to win.
Rick Kawamura is Director of Marketing at Kapow Technologies, a leading provider of Web data services. Rick was most recently VP of Marketing at DeNA Global, and previously held strategic and product management roles at Palm and Sun Microsystems. He can be reached at rick.kawamura@kapowtech.com.
You may also be interested in:

Tuesday, June 1, 2010

Ariba, IBM deal shows emerging prominence of cloud ecosystem-based collaboration and commerce

The more you delve into how cloud computing can reshape business, the clearer the importance of ecosystems becomes.

No one cloud provider is likely to forecast and deliver all that any business needs or wants. More importantly, the role of the cloud provider is less about providing complete services than about enabling the ease and adaptability of acquiring, delivering, and monetizing a variety of services in dynamic combination.

We're now seeing that the marketplace of cloud-hosted APIs is rich and exploding. But it's a self-service, organic market model that's emerging -- not a top-down, ERP-like affair. And that is likely to make all the difference in terms of fast adoption.

Do providers like Apple, Google and Amazon produce the lion's share of services themselves -- or do they provide a fertile garden in which others create services and APIs that make the garden most valuable to all participants, inviting more guests, more development, more collaboration?

The organic model is also likely to repeat in ecosystems that allow buyers and sellers to align, and business processes between and among them to flourish. The business-to-business (B2B) commerce cloud is now being built. Recent acquisitions, like IBM's buy of Cast Iron and intent to buy Sterling Commerce, point up the "business garden" goals of Big Blue. Cast Iron allows the cultivation of hybrid clouds, clouds of clouds and rich services integration. Sterling brings EDI-based networks into the fold.

IBM clearly likes the idea of playing match-maker between traditional and new business models. And this cloud garden party effect aligns perfectly with IBM's tendency to avoid providing packaged business applications in favor of the platforms, middleware, process enablement and collaboration capabilities that support others' discrete applications.

Last week's announcement then of a cloud collaboration partnership between IBM and Ariba furthers the emerging prominence of cloud commerce ecosystems. To encourage more ecommerce, the IBM-Ariba deal matches B2B buyers and sellers via LotusLive collaboration and social networking services, all through cloud delivery models.

Conference capstone

The announcement came as a capstone to the Ariba Live 2010 conference in Orlando. [Disclosure: Ariba is a sponsor of BriefingsDirect podcasts.] I had fun at the conference spouting off on cloud benefits, and tweeting up some of the mainstage events under #AribaLive.

Ariba plans to integrate its Ariba Commerce Cloud with IBM LotusLive to help buyers and sellers communicate and share information more fluidly and effectively, leading to faster, more confident business decisions, the companies said. Ariba plans to integrate IBM’s LotusLive with Ariba Discovery, a web-based service that helps buyers and sellers find each other quickly and automatically helps match buyers’ requirements to seller capabilities.

Both Ariba and IBM are recognizing the power and huge opportunity of being at the center of cloud-based commerce. And being at the center means allowing the participants to do the actual driving, to enable the community to seek and find natural partners via social interactions. We're likely to see the equivalent of app stores and social networks well up for B2B commerce, scaling both down and up, in the coming months and years.

“The successful combination of LotusLive and the Ariba Commerce Cloud will provide such a matchmaking comfort zone in which networks of partners, suppliers and customers can easily work together across company boundaries to help do their jobs more efficiently and cost-effectively, and perhaps even develop lasting relationships," said Sean Poulley, Vice President, IBM Cloud Collaboration, in a release.

As Ariba Chairman and CEO Bob Calderoni says, what's now good for consumer commerce is soon to be good for the business side of the equation. It's simply the most efficient.

After IBM set its sights on Sterling, I at first wondered if IBM and Ariba might find themselves competing. But last Wednesday's deal shows that ecosystems rule. All-in-one cloud provider aspirants should take note. The way to make the network most valuable is to empower the businesses (both sellers and buyers) to carve out what they want to do themselves.

IBM Lotus collaboration services plus Ariba's cloud and commerce network services seem to be striving to reach the right balance between providing a fertile arena and then getting out of the gardeners' way.

You may also be interested in: