Tuesday, July 21, 2009

Open Group conference shows how security standards and governance hold keys to enterprise cloud adoption

This BriefingsDirect guest post comes courtesy of Jim Hietala, vice president of security, The Open Group. You can reach him here.

By Jim Hietala

Spending the early part of this week in The Open Group Security Forum meetings, I have been struck by the commonality of governance, risk, compliance, and audit issues between physical IT infrastructure today, and virtual and cloud environments in the (very) near future. Issues such as:
  • Moving away from manual compliance processes, toward automated test, measurement, and reporting on compliance status for large IT infrastructure. When you are talking about physical infrastructure, manual compliance is difficult, expensive in labor cost, and sub-optimal -- given that many organizations choose to sample just a few representative systems for compliance, rather than actually testing the entire environment. When you are talking about virtual environments and cloud services, manual compliance processes just won’t work; automation will be key.

  • Incompatible log formats output by physical devices continue to be a problem for the industry, one that manifests itself in problems for security information and event management systems, log management systems, and auditors. Ditto for virtual and cloud environments, at much larger scale.

  • Managing security configurations across physical versus virtual and cloud environments provides similar challenges. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]
Emerging-standards work from the Security Forum, which was originally conceived as solutions for some of these issues in traditional IT environments (in house, physical servers), will have important applications in cloud and virtualization scenarios. In fact, with the scale and agility provided by these environments, it is hard to think about adequately addressing audit and compliance concerns without standards that provide for “scalable automation.”
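To make the audit-incompatibility problem concrete, here is a minimal Python sketch of the kind of normalization such standards would enable: two invented vendor log formats mapped onto one common audit schema. The formats and field names are purely illustrative; they are not the XDAS schema.

```python
# Two hypothetical vendor formats -- real devices vary far more widely.
def parse_syslog_style(line):
    # e.g. "2009-07-21T10:15:00Z firewall01 DENY src=10.0.0.5"
    ts, host, action, rest = line.split(" ", 3)
    fields = dict(kv.split("=") for kv in rest.split())
    return {"time": ts, "source": host, "outcome": action, **fields}

def parse_csv_style(line):
    # e.g. "firewall02,2009-07-21T10:16:00Z,deny,10.0.0.9"
    host, ts, action, src = line.split(",")
    return {"time": ts, "source": host, "outcome": action.upper(), "src": src}

def normalize(record):
    """Map a vendor-specific record onto one common audit schema."""
    return {
        "timestamp": record["time"],
        "reporter": record["source"],
        "outcome": record["outcome"],
        "subject_ip": record.get("src", "unknown"),
    }

events = [
    normalize(parse_syslog_style("2009-07-21T10:15:00Z firewall01 DENY src=10.0.0.5")),
    normalize(parse_csv_style("firewall02,2009-07-21T10:16:00Z,deny,10.0.0.9")),
]
```

Once every device emits (or is translated into) one schema, the downstream event-management and audit tooling only has to understand a single record shape.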

The Automated Compliance Expert Markup Language standards initiative will address issues of security configuration and compliance alerting and reporting across physical, virtual, and cloud environments. The revised XDAS standard from The Open Group will address audit incompatibility issues. Both of these standards efforts are works in progress at the present time, and our standards process is truly an open one. If your organization is a customer organization grappling with these issues, or a vendor whose product might benefit from implementing these standards, we invite you to learn more.
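A rough Python sketch of what "scalable automation" for compliance might look like: every host in the fleet is evaluated against a policy table, rather than sampling a few representatives. The policy keys and fleet data are invented for illustration and are not part of the ACEML work.

```python
# Illustrative policy: required settings each host must satisfy.
POLICY = {"ssh_root_login": False, "min_password_length": 8, "audit_enabled": True}

def check_host(config):
    """Return the list of policy keys a host's configuration violates."""
    failures = []
    for key, required in POLICY.items():
        actual = config.get(key)
        if isinstance(required, bool):
            if actual != required:
                failures.append(key)
        elif actual is None or actual < required:
            failures.append(key)
    return failures

fleet = {
    "web01": {"ssh_root_login": False, "min_password_length": 10, "audit_enabled": True},
    "db01":  {"ssh_root_login": True,  "min_password_length": 6,  "audit_enabled": True},
}
# Test the entire environment, not a sample: one report line per host.
report = {host: check_host(cfg) for host, cfg in fleet.items()}
```

The same loop that covers two hosts covers two thousand, which is the point: automated measurement scales where manual sampling cannot.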


SOA and security: Are services the problem or the solution?

This guest post comes courtesy of Dr. Chris Harding, Forum Director for SOA and Semantic Interoperability at The Open Group. You can reach him here.

By Dr. Chris Harding

I’m with the SOA Work Group at The Open Group conference in Toronto this week (see http://www.opengroup.org/toronto2009-apc).

The Work Group has been busy recently, completing its Governance Framework, helping to complete The Open Group’s Service Integration Maturity Model, and working with members of OASIS and the OMG to finish the joint paper “Navigating the SOA Open Standards Landscape Around Architecture,” which explains how the architecture-focused SOA standards of these bodies relate to each other.

There was so much to do that we started our discussions last weekend, and we made good progress on our Practical Guide to Using TOGAF for SOA, and on our SOA Reference Architecture. Today we moved on to the thorny question of SOA and Security, which we discussed in a joint session with The Open Group's Security Forum. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Security is often seen as a major problem for SOA but – and this was the thread we pursued in today’s discussion – perhaps this is looking at the problem the wrong way round.

Certainly, there are security problems associated with service chains, where some of the services in the chain may be outside the control of – or even not known to – the consumer, and where the identity of the consumer may not be known to all the services in the chain.

But really these problems are due, not to the use of services, but to the use of distributed software modules with multiple owners. They would arise whether the underlying facilities were provided as services or in some other form – as object methods that can be invoked remotely, for example. They have become associated with SOA because that is the form that cross-domain distributed computing usually takes these days.

In fact, SOA gives us a way of addressing these security problems. Security is a matter of assessing and mitigating risks. The service principle provides an excellent basis for doing this.

The consumer can ask questions that help establish the levels of risk.

“What services am I using?” “Who provides them?” “What level of security are they contracted to provide?” “How far do I believe that they can and will meet their contractual obligation?” The answers to such questions enable the consumer to decide what security mechanisms to deploy.

And, where the consumer is in turn providing services to others, the analysis can help determine the contractual level of security that can reasonably be offered for those services.
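Those risk questions lend themselves to simple automation. Here is a hedged Python sketch that scores a service chain from the consumer's answers; the levels, the trust discount, and the weakest-link rule are illustrative assumptions, not a standard.

```python
# Illustrative ordinal security levels.
LEVELS = {"high": 3, "medium": 2, "low": 1}

def chain_security_level(services):
    """The level a consumer can rely on -- and can in turn contract to offer
    to its own consumers -- is bounded by the weakest provider in the chain,
    discounted where the consumer doubts the provider will meet its contract."""
    effective = []
    for svc in services:
        contracted = LEVELS[svc["contracted_level"]]
        if not svc["trusted"]:
            contracted = max(1, contracted - 1)  # discount one level
        effective.append(contracted)
    return min(effective)

chain = [
    {"name": "payments", "contracted_level": "high", "trusted": True},
    {"name": "geocoder", "contracted_level": "medium", "trusted": False},
]
# Weakest link: the distrusted geocoder drags the whole chain down to "low".
level = chain_security_level(chain)
```

The answers to "what services am I using, who provides them, what are they contracted for, and do I believe them" become inputs; the output tells the consumer what security mechanisms to deploy and what level it can itself offer.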

This is not to say that SOA solves the security problems of cross-domain distributed computing. These problems are difficult, and there are aspects – such as the lack of a commonly-accepted standard identity framework – that SOA does not address. But, looked at in the right way, it is a positive, rather than a negative, factor. And that’s something!

Harding is Forum Director for SOA and Semantic Interoperability at The Open Group. He has been with The Open Group for over ten years, and is currently responsible for managing and supporting its work on semantic interoperability, SOA, and cloud computing. Chris can be contacted at c.harding@opengroup.org.

Engine Yard launches robust Ruby cloud-based deployment platform service

Engine Yard is working to make life easier for Ruby on Rails developers. The San Francisco-based application automation and management start-up rolled out two new products on Monday with an eye toward the cloud.

Ruby on Rails is a Web programming framework that's rapidly emerging as one of the most popular ways to develop Web sites and Web applications. Popular Web 2.0 applications like Twitter, Hulu and Scribd are built using Ruby on Rails, and Ruby usage has increased by 40 percent in 2009 alone, according to Evans Data. Even though only 14 percent of developers are using Ruby, Evans predicts 20 percent will adopt the technology by 2010.

Engine Yard is preparing for Ruby growth in the next 12 months and beyond with its latest offerings: Engine Yard Cloud and Flex. Engine Yard Cloud is a services platform that leverages 100 man-years of experience deploying, managing and scaling some of the world's largest Rails sites and makes that know-how accessible to companies looking to run Rails in the cloud. Meanwhile, Flex is a cloud service plan for production-level Rails applications.

Tackling Tough Issues

What Engine Yard is doing, in effect, is taking Ruby a step beyond application development. These new tools tackle tougher issues like deployment, maintenance, scalability, uptime and performance -- skills most developers either don't have or don't want to acquire. Cloud management solutions abound, but Engine Yard is charging forward with a platform that specifically addresses the needs of developers building applications in Rails.

Unlike an infrastructure cloud, Engine Yard Cloud provides application-aware auto-scaling, auto-healing, and monitoring, as well as a highly optimized, pre-integrated Rails runtime stack. Engine Yard Cloud is also backed by 24x7 Premium Support from Engine Yard. It runs on the Amazon EC2 infrastructure cloud.

Pricing for the Flex Plan starts at $349 per month. Pricing for Engine Yard Premium Support starts at $475 per month. Engine Yard Cloud will be generally available in August.

"Companies like Amazon and Rackspace are doing a good job at the hardware resource provisioning level," said Tom Mornini, CTO of Engine Yard. "But they don't actually help you with assembling your raw virtual machines, storage, object stores and file systems into an application architecture. Engine Yard Cloud is the layer on top of the hardware that helps you get from raw resources to functioning application architecture."

Under the Hood


With its Flex plan, Engine Yard Cloud serves customers running production applications that want to leverage the on-demand flexibility of a cloud but also need application-level scaling, reliability and support. With developer features like automated deployment from source check-ins, handling rapid application changes driven by agile development is easier for developers.

Behind the scenes, Engine Yard Cloud is automatically scaling applications. Engine Yard can come to the rescue of a site that's under stress or low in memory by adding more application capacity on the fly. Here's how it works: Essentially, the technology provisions a new Amazon virtual machine, lays down the operating system, lays down Ruby on Rails, lays down the source code, hooks it up with a load balancer, and assembles the monitoring so the developer -- who is not a systems administrator -- doesn't have to.
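The provisioning sequence described above can be sketched as an ordered pipeline. The step functions below are stubs standing in for real EC2, configuration, and load-balancer calls; the names are illustrative, not Engine Yard's actual API.

```python
# Each step is a stub; a real implementation would drive EC2, install
# recipes, and a load-balancer API. Names here are invented for illustration.
def provision_vm(app):      return f"{app}-vm"   # new Amazon virtual machine
def install_os(vm):         return vm            # lay down the operating system
def install_rails(vm):      return vm            # lay down Ruby on Rails
def deploy_source(vm, app): return vm            # lay down the source code
def attach_lb(vm):          return vm            # hook up the load balancer
def attach_monitoring(vm):  return vm            # assemble the monitoring

def scale_up(app):
    """Add one application instance, end to end, with no sysadmin in the loop."""
    vm = provision_vm(app)
    for step in (install_os, install_rails):
        vm = step(vm)
    vm = deploy_source(vm, app)
    for step in (attach_lb, attach_monitoring):
        vm = step(vm)
    return vm

# e.g. a site under memory pressure gets capacity added on the fly:
new_instance = scale_up("storefront")
```

The value is not in any one step but in the fixed, repeatable ordering: the developer triggers one call and the platform walks the whole sequence.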

Engine Yard Cloud also offers reliability features to make sure sites don't go down, such as an automatic database replica and an auto-healing capacity in case a server fails in the application tier. Engine Yard Cloud even offers what it calls "one-click cloning" that lets developers duplicate production sites -- even if they are running 15 or 20 or more servers -- in order to perform testing or stage new code.

This is all coming together as integrated app-stack-in-a-cloud automation. I expect this will also be of interest for private clouds. And I'm hip to the notion of the personal cloud as a means to ease the deployment of robust apps.

Competing in the Cloud

On the Ruby front, Engine Yard has a strong position in the market. Engine Yard's competitors are Joyent, Rails Machine, Devunity and RailsCluster, among others.

But Engine Yard isn't just competing with vendors in the Ruby space. It's competing with other platforms. Google App Engine is doing something similar for Java. Microsoft is shipping Azure in November. Even if Engine Yard dominates on the Ruby front, there's still a battle for market share in cloud platforms.

BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached here and here.

Friday, July 17, 2009

HP wraps up virtual event series with sessions on IT challenges and solutions

Hewlett-Packard (HP) is wrapping up its series of virtual conferences designed to give IT professionals online access to briefings on business and technology trends from HP executives and outside experts.

On July 28, HP will offer the HP Solutions Virtual Event for The Americas. The three-day event will feature 30 breakout sessions, seminars, presentations, and demo theater sessions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Registration for the event is free and, because it's presented entirely online, there are no travel expenses or out-of-office time involved. Also, the full conference will be available on replay.

Next week's breakouts will include four main IT themes -- application transformation, cloud services, services management, and improving data-center economics -- as well as two leadership themes -- green IT and cloud computing. The virtual presentation will also include chat sessions with the many prominent speakers.

The topic focus for each day will include:
  • July 28: Application Transformation

  • July 29: Service oriented IT: Service management, Cloud

  • July 30: Data Center Transformation: Virtualization, Green IT, Information Explosion
The full list of sessions is available on the virtual event Web site. Participants are free to attend for one session, one day, or the entire three-day event. These presentations and knowledge resources are not just for HP users; they also make sense for the vast HP partner community and the full ecology of related providers.

The speakers include a who's who of HP technology thought leaders, including many who are familiar to BriefingsDirect readers and listeners. These include John Bennett, Bob Meyer, Rebecca Lawson, Russ Daniels, Lance Knowlton, and Paul Evans, all of whom have appeared in BriefingsDirect podcasts.

For those interested, HP is providing an online demo that can be accessed prior to the event. The demo is available at: http://tsgdemo.veplatform.com/uc/registration-short-form.php

Registration for the event itself is at:
http://hpsolutionsforneweconomyvirtualevent.veplatform.com/uc/registration-short-form.php?mcc=ESYR

Cloud governance: something old, something new, something borrowed …

By Ron Schmelzer

This guest post comes courtesy of ZapThink. Ron Schmelzer is a senior analyst at ZapThink. You can reach him here.

As we predicted earlier in the year, cloud computing is starting to take hold, especially if you believe the marketing literature of vendors and consulting firms. Indeed, we are seeing an increasing number of Cloud success stories, ranging from simplistic consumption of utility Services and offloading of compute resources to the sort of application and process clouds we discussed in a previous ZapFlash. Perhaps the reason usage of the Cloud is still nascent in the enterprise is the increasing chorus of concerns being voiced about the usage of Cloud resources:

Cloud availability. Cloud security. Erosion of data integrity. Data replication and consistency issues. Potential loss of privacy. Lack of auditing and logging visibility. Potential for regulatory violations. Application sprawl & dependencies. Inappropriate usage of Services. Difficulty in managing intra-Cloud, inter-Cloud, and Cloud and non-Cloud interactions and resources. And that’s just the short list.

Do any of these issues sound familiar? To address these concerns, we have to return to a topic we’ve hashed over and again on the SOA side of things: governance. The above issues are primarily, if not exclusively, governance concerns. Thankfully, in many ways, we can apply what we’ve already learned, implemented, and invested in for SOA Governance directly to issues of Cloud Governance. However, SOA and Cloud, while complementary, are not equivalent concepts. There is a wide range of patterns and usage considerations that are either new to the SOA Governance picture or that we were previously able to gloss over. To make Cloud computing a success, we need to make Cloud governance a success. So, what can we apply from our existing SOA governance knowledge, and what new things do companies need to consider?

Design-Time Cloud Governance


Designing Services to be deployed in the Cloud is much like designing Services for your own SOA infrastructure. In fact, that’s the point – most Cloud infrastructure providers, whether they are third-party Cloud providers like Amazon.com, or self-hosting Cloud infrastructure vendors, pitch the simplicity of Cloud Service development and deployment. However, within this simple model lurks an insidious beast: if you thought it was hard to get your developers on the same page with regards to Service development when you owned your own SOA infrastructure and registry, try it when you have little visibility into the Service assets built by unknown developers. In the early days of Web Services-centric SOA development, companies faced developers hacking out a wide array of incompatible “Just a Bunch of Web Services (JBOWS)” style Services thrown willy-nilly on the network; now they face the same issue in the Cloud. Of course, JBOWS doesn’t a SOA make, and neither does it a Cloud make.

Furthermore, with the simplicity of Cloud Service development, deployment, and consumption, developers can use Cloud capabilities undetected by IT management. It’s not unusual for a developer to dabble with an Amazon Machine Image (AMI) for a project. Simply use a personal Amazon account and credit card and off you go! And to make matters worse, not everyone creating or consuming Cloud Services will even be from within the IT department. In a previous ZapFlash, I admonished IT to become more responsive to the business lest they become disintermediated. Don’t want your sales and marketing folks using Cloud services? Good luck trying to prevent that. I wish you even more luck trying to get visibility into what they are doing. Without adequate design-time Cloud governance, you’re up a croc-infested river without a paddle.

Making matters worse, SOA governance tools are often missing in the Cloud Computing environment. There’s no central point for a Cloud consumer/developer to view the Services and associated policies. Furthermore, design-time policies are easily enforceable when you have control over the development and QA process, but those are notoriously lacking in the Cloud environment. The result is that design-time policies are not consistently enforced on the client side, if at all. Clearly, SOA governance vendors and best practices need to step up to the plate here and apply what we already know about SOA registries/repositories and governance processes to give the control that’s needed to avoid chaos and failure. This means that IT needs to provide the enterprise a unified, Service-centric view of the IT environment across the corporate data center and the Cloud.
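As a sketch of that unified, Service-centric view, here is a minimal Python merge of an on-premises registry with a Cloud registry. The registry entries and the clash-resolution rule (prefer the internal entry, record the alternative) are invented for illustration.

```python
# Illustrative registry contents -- names, locations, and policies are made up.
on_prem = {"billing": {"location": "datacenter", "policy": "internal-only"}}
cloud   = {"mailer":  {"location": "cloud",      "policy": "encrypt-in-transit"},
           "billing": {"location": "cloud",      "policy": "encrypt-in-transit"}}

def unified_view(internal, external):
    """One catalog across data center and Cloud. On a name clash, prefer the
    internal entry but record that a Cloud alternative exists."""
    merged = {}
    for name, entry in external.items():
        merged[name] = dict(entry)
    for name, entry in internal.items():
        if name in merged:
            merged[name] = dict(entry, alternative=merged[name]["location"])
        else:
            merged[name] = dict(entry)
    return merged

catalog = unified_view(on_prem, cloud)
```

With one catalog, a consumer or developer has the single place to look up Services and their attached policies that the paragraph above says is missing today.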

Run-Time Cloud Governance

Making matters worse are a collection of run-time and policy issues that are complicated by the fog of Cloud computing infrastructure. Data reside on systems you don’t control, which may be in other countries or legal jurisdictions. Furthermore, those systems are unlikely to have the same security standards you have internally. This means that your security policies need to be that much more granular. You can’t count on using perimeter-based approaches to secure your data or Service access. Every message needs to be scrutinized, and you need to separate Service and data policy definition from enforcement. The Cloud doesn’t simplify security issues – it complicates and exacerbates them. However, there’s nothing new here. Solid SOA security approaches, such as those we espouse in our LZA Boot Camps, have always pushed the “trust no one” approach, and the Cloud is simply another infrastructure for enforcing these already stringent security policies.
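A minimal Python sketch of that message-level, "trust no one" posture, with policy definition (a rules table) kept separate from enforcement (a gateway function), so rules can change without redeploying Services. The services, roles, and rules are invented for illustration.

```python
# Policy *definition*: illustrative per-service rules, kept as data.
RULES = {
    "payroll": {"require_signature": True,  "allowed_roles": {"hr"}},
    "catalog": {"require_signature": False, "allowed_roles": {"hr", "sales"}},
}

def enforce(service, message):
    """Policy *enforcement* point: every message is scrutinized; there is no
    perimeter assumption. Unknown services are denied by default."""
    policy = RULES.get(service)
    if policy is None:
        return False
    if policy["require_signature"] and not message.get("signed"):
        return False
    return message.get("role") in policy["allowed_roles"]

ok_msg  = enforce("payroll", {"role": "hr", "signed": True})
bad_msg = enforce("catalog", {"role": "guest"})
```

Because the rules live outside the enforcement code, the same gateway can apply different, more granular policies to data that happens to reside in another jurisdiction.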

In addition, Cloud reliability is pretty much out of your hands. What happens if the Cloud Service is not available? What happens if the whole Cloud is unavailable? Now you don’t only need to think about Service failure, but whole-Cloud failover. Will you have an internal SOA infrastructure ready to handle requests if the Cloud is unavailable? If you do, doesn’t that entirely kill the economic benefit of Cloud in the first place? An effective Cloud governance approach must provide the means to control, monitor, and adapt Services, both on-premises and Cloud-based, and needs to provide consistency across internal and Cloud SOA infrastructure. You should not keep your business (or IT) Service consumers guessing as to whether a Service they are consuming is inside the network or in the Cloud. The whole point of loose coupling and the Cloud is location independence. To make this concept a reality, you need management and governance that spans SOA infrastructure boundaries.
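Location independence with failover can be sketched as a resolver the consumer calls by Service name. The endpoints and the availability flag below are stand-ins for real health checks; this is an illustration, not a product.

```python
# Illustrative endpoint table: each Service has a Cloud and an on-premises
# implementation registered. URLs are invented for the sketch.
ENDPOINTS = {
    "orders": {"cloud": "https://cloud.example.com/orders",
               "on_prem": "https://internal/orders"},
}

def resolve(service, cloud_available):
    """The consumer asks for a Service by name and never learns (or cares)
    where it runs; governance picks the route and fails over when needed."""
    routes = ENDPOINTS[service]
    if cloud_available:
        return routes["cloud"]
    return routes["on_prem"]  # whole-Cloud failover target, if provisioned

primary  = resolve("orders", cloud_available=True)
fallback = resolve("orders", cloud_available=False)
```

The consumer's code is identical in both cases, which is exactly the "don't keep consumers guessing" property the paragraph calls for.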

Yet, there’s more to the runtime Cloud governance picture than management and policy enforcement. Data and compliance issues can be the most perplexing. Most third-party Cloud providers provide little, if any, means to do the sort of auditing and logging that’s demanded by most compliance and regulatory requirements, let alone your internal auditing needs. Companies need to intentionally compose all Cloud Services with internal auditing and logging Services deployed on the Cloud or (preferably) the local network, negotiate better access to logging data from the Cloud provider, and implement policies for Cloud Service use to control leakage of private information to the Cloud. Furthermore, companies need to implement usage policies to control the excessive, and potentially expensive, use of Cloud Services in unauthorized ways.

One way to solve this problem is through the use of network intermediaries and gateways that keep a close eye on traffic between the corporate network and the Cloud. Intermediaries can scan cloud-bound data for leakage of private or company-sensitive data, filter traffic sent up to cloud platforms, apply access policies to Cloud Services, provide visibility into authorized and unauthorized usage of Cloud Services, and prevent unsanctioned use of Cloud Services by internal staff, among other benefits. Of course, these benefits do not extend to intra-Cloud Service consumption, but can provide a lowest common denominator of runtime governance required by the organization.
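Here is a hedged Python sketch of such an intermediary: it inspects cloud-bound payloads and blocks anything matching patterns that look like private data. The two patterns are simplified stand-ins for real data-leakage rules, which are far more elaborate.

```python
import re

# Simplified stand-ins for real data-leakage detection rules.
SENSITIVE = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like number
    re.compile(r"\b\d{16}\b"),             # bare 16-digit card-number-like run
]

def gateway_allows(payload):
    """Return False (block) if any sensitive pattern appears in the
    cloud-bound payload; otherwise let the traffic through."""
    return not any(p.search(payload) for p in SENSITIVE)

safe    = gateway_allows("order 42: 3 widgets to warehouse B")
blocked = gateway_allows("customer ssn 123-45-6789 attached")
```

The same choke point that scans for leakage is also where access policies, usage visibility, and blocking of unsanctioned Cloud use can be applied, since all corporate-to-Cloud traffic passes through it.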

Change Management and Cloud Governance

Finally, the last major Cloud governance issue is one of change management. How do you prevent versioning of Cloud Services or even Cloud infrastructure from having significant repercussions? Proper Cloud governance techniques need to lift a page from the SOA governance book and deal with versioning at all levels: Service implementation, contract, process, infrastructure, policy, data, and schema. If you can deal with these inside the network and in the Cloud, you’re golden. If you have any gaps, you’re just itching for trouble.
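At the contract level, one of those versioning checks can be sketched in a few lines of Python: a new Service version is backward compatible only if every field the old contract exposed survives. The contracts below are invented for illustration.

```python
def backward_compatible(old_contract, new_contract):
    """Additive changes are fine; removing a field consumers rely on is not."""
    return set(old_contract["fields"]) <= set(new_contract["fields"])

v1 = {"version": 1, "fields": ["order_id", "amount"]}
v2 = {"version": 2, "fields": ["order_id", "amount", "currency"]}  # additive
v3 = {"version": 3, "fields": ["order_id", "currency"]}            # drops amount

ok  = backward_compatible(v1, v2)  # safe to roll out
bad = backward_compatible(v1, v3)  # a gap -- itching for trouble
```

A real governance regime would run analogous checks at every level the paragraph lists: implementation, contract, process, infrastructure, policy, data, and schema.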

But the biggest bugaboo here is testing. There simply aren’t many good approaches for testing a Cloud-implemented Service other than to do it in the live, Cloud “production” environment. Indeed, we usually get rotten tomatoes thrown at us when we teach in our LZA boot camps that it is increasingly ineffective to test SOA implementations in a QA environment as the SOA implementation becomes more mature, but now we just get blank stares when we ask if there’s such a thing as a Cloud “QA” environment. Of course not. The same approach applies to Cloud testing as to SOA testing: test your Services in a live environment by making sure that failures are self-contained and that automated fall-back mechanisms exist. If it can work in your own SOA environment, it can work in the Cloud… and vice-versa.
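That self-contained-failure approach can be sketched as a guard around the candidate Service version. The pricing functions below are invented stand-ins; the point is the containment and the automatic fall-back.

```python
def stable_pricing(item):
    """The proven version of the Service."""
    return {"item": item, "price": 10.0}

def candidate_pricing(item):
    """A new version under test in the live environment -- here it misbehaves."""
    raise RuntimeError("new version misbehaving")

def guarded_call(candidate, fallback, item):
    """Contain the candidate's failure so it cannot propagate; fall back
    automatically so the consumer still gets an answer."""
    try:
        return candidate(item)
    except Exception:
        return fallback(item)

result = guarded_call(candidate_pricing, stable_pricing, "widget")
```

With every call wrapped this way, testing in the live "production" Cloud stops being reckless: a bad version is observed, contained, and routed around rather than allowed to fail consumers.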

The ZapThink Take

SOA is an architectural approach and philosophy guiding the development and management of applications. Cloud is a deployment and operational model suited to host certain types of Services within an existing SOA initiative. The Cloud concept within the SOA context is one of Service infrastructure, implementation, composition, and consumption. The SOA concept within the Cloud context is one of application-level abstraction of Cloud resources. Therefore, think of Cloud Governance as evolved SOA governance.

Companies with a proper SOA governance hat on should have few problems as they move to increasingly utilize Cloud services, but those who have failed to take either an architectural perspective on Cloud or have glossed over SOA governance issues will be forced to quickly get a SOA perspective to get things right. In order for these both to work together, companies need to have a consistent SOA and Cloud Governance strategy. To address these issues, ZapThink recently launched our SOA and Cloud Governance training & certification workshops. By addressing each of the issues and potential solutions discussed above, we plan to dive deeper than anyone else has into this topic. We hope to see you there and continue the conversation and movement to SOA and Cloud success!

This guest post comes courtesy of ZapThink. Ron Schmelzer, a senior analyst at ZapThink, can be reached here.


SPECIAL PARTNER OFFER

SOA and EA Training, Certification,
and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.

The cloud gets up close and personal

Can you fit a cloud on your laptop?

Probably not.

But you can mock up basic cloud services, such as those for a shopping cart application, on your PC so you can see how the Web app you are working on will interact when it eventually reaches out and touches the real cloud, says Chris Kraus, product manager for iTKO, the testing software vendor, which offers tooling for recording or mocking cloud services.

He sees growing interest among customers for the personal cloud concept, allowing developers to code and test Web applications that will eventually interact with services in the cloud. Cloud services on a PC provide two major advantages for developers during coding and testing, he says. [Disclosure: iTKO is a sponsor of BriefingsDirect podcasts].

First, the developer working on a cloud application is free to work anywhere, anytime regardless of whether the real cloud services are available or accessible. If a cloud service for a shopping cart is down for some reason, developers are not impacted since their version of the service is on their laptop. They can also code when they are on a plane, or in another environment with no access to the cloud.

Second, although this is probably first in the minds of budget conscious IT managers, the developer is not running up charges for accessing the cloud services, Kraus says.
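A minimal Python sketch of such a local stand-in: a mock shopping-cart service with the same shape of interface the real cloud service might expose, so the developer can work offline and without metered charges. The interface is invented for illustration and is not iTKO's actual tooling.

```python
class MockCartService:
    """A local, in-memory stand-in for a cloud shopping-cart service.
    The app codes against this interface now and against the real
    cloud endpoint later, with no code changes in between."""

    def __init__(self):
        self._carts = {}

    def add_item(self, cart_id, item, qty=1):
        cart = self._carts.setdefault(cart_id, {})
        cart[item] = cart.get(item, 0) + qty

    def total_items(self, cart_id):
        return sum(self._carts.get(cart_id, {}).values())

# On the plane, point the app at the mock; later, point it at the real cloud.
cart = MockCartService()
cart.add_item("c1", "book")
cart.add_item("c1", "book")
cart.add_item("c1", "pen")
```

Because the mock lives on the laptop, the developer keeps working when the real service is down, unreachable, or simply too expensive to hit during tight edit-and-test loops.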

“If the services are hosted on a cloud from a third party and I have to maintain physical connectivity, I have to pay to do that,” he said. “If I have a personal cloud on my desktop, I can take development offline, interact with those services, make sure my HTML is tight, and do all the stuff that is important to me. Then I point it to the real cloud and actually get the development up.”

Mike Gualtieri, senior analyst at Forrester Research, also sees value in the personal cloud concept.

In a recent post on his blog, Developers Need A Personal Cloud, he highlights that value in terms of portability.

“What developers could use is a Personal Cloud that would allow them to configure their local environment in multiple ways and take it with them wherever they go,” he writes. “I know this sounds like virtualization and it is to some extent, but extend PC virtualization with cloud concepts and you get the Personal Cloud.”

One commenter on Gualtieri’s blog suggests this concept might be dubbed "local virtualization."

I had an intriguing chat with HP's Jeff Meyers and iTKO chief scientist and co-founder John Michelsen last month at HP's Software Universe conference. The confluence of SaaS and cloud with application development and the test phase is changing rapidly, we observed.

Compressing the test phase into development and production becomes more feasible. And as virtualization becomes more common, building an application or service in its own runtime-stack bubble from inception to sunset starts to make sense. OSGi fits into the vision nicely.

And while we're combining all the elements of an application and platform from cradle to grave, why not tune the whole package before, during and after development too ... then load the entire package as a portable cloud-supported production unit?

Now, that's a "personal" cloud (I prefer cloud service nodule), but with high service performance output, and far less time and cost in the total lifecycle. Higher overall quality too. What do you think?

BriefingsDirect contributor Rich Seeley provided research and editorial assistance on this post. He can be reached at RichSeeley@aol.com.

Wednesday, July 15, 2009

Panda's SaaS-based PC security manages client risks, adds efficiency for SMBs and providers

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com.

Download the transcript. Learn more. Sponsor: Panda Security.

PC security has proven a thorny and expensive problem for users, small businesses, enterprises and managed services providers alike for many years.

But PC security can be increasingly enhanced -- with a cloud-enhanced trouble discovery-and-remediation lifecycle approach -- and delivered as services. This reduces the strain on the PC itself, as well as improves the ability to staunch malware problems quickly before they spread.

As a result, new offerings around cloud-based anti-virus and security protection services are on the rise.

Furthermore, Internet-delivered security -- from the low-touch client agent to the fuller managed services -- provides a strong business opportunity for resellers and channel providers. Such a full solution then allows small and larger businesses to protect all of their PCs, regardless of location, at decreasing -- rather than increasing -- total costs.

To help delve more deeply into the benefits of security as a service, and explore the cloud strengths of managing malware protection more centrally from the Web, I recently moderated a discussion with independent IT analyst Phil Wainewright, director of Procullux Ventures and a ZDNet SaaS blogger, as well as Josu Franco, director of the Business Customer Unit at Panda Security.

Here are some excerpts:
Franco: There are two basic problems that we're trying to solve here, problems which have increased lately. One is the level of cyber crime. There are lots and lots of new attacks coming out every day. We're seeing more and more malware come into our labs. On any given day, we're seeing approximately 30,000 new malware samples that we didn't know about the day before. That's one of the problems.

The second problem that we're trying to solve for companies is the complexity of managing the security. You have vectors for attack -- in other words, ways in which a system can be infected. If you combine that with the usage of more and more devices in the networks, that combination makes it very difficult for administrators to really be on top of the security.

In order to address the first problem ... we need to take an approach that is sustainable over time. ... We found the best approach is to move processing power into the cloud, ... to process more and more malware automatically in our labs. That's the part of cloud computing that we're doing.

In order to address the second problem, we believe that the best approach for most companies is via management solutions that are easier to administer, more convenient, and less costly for the administrators and for the companies.

We don't see the agents disappearing any time soon to protect the [PC] endpoints. [But by] rebuilding the endpoint agent from scratch, ... we get a much lighter agent, much faster than previous agents. And, very importantly, an agent that is able to leverage the cloud computing capacity that we have, which we call "Collective Intelligence," to process malware automatically.
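The split Franco describes -- a thin local agent that defers classification to cloud-side "Collective Intelligence" -- can be pictured with a short sketch. This is purely illustrative, not Panda's implementation: the cloud lookup is passed in as a callable so the hypothetical service endpoint stays out of the agent logic, and a local verdict cache keeps the agent light by avoiding repeat round-trips.

```python
import hashlib

class ThinAgent:
    """Minimal cloud-assisted scanner sketch: hash locally, classify remotely."""

    def __init__(self, cloud_lookup):
        # cloud_lookup: callable taking a sha256 hex digest and returning
        # a verdict string such as "malware", "clean", or "unknown"
        self.cloud_lookup = cloud_lookup
        self.cache = {}  # verdicts already fetched, keyed by file hash

    def scan(self, file_bytes):
        digest = hashlib.sha256(file_bytes).hexdigest()
        if digest not in self.cache:  # only unseen files cost a cloud round-trip
            self.cache[digest] = self.cloud_lookup(digest)
        return self.cache[digest]
```

A real agent would also batch queries and fall back to local heuristics when offline; the point of the sketch is that the heavy signature knowledge lives in the cloud, not on the endpoint.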

We've just released this very first version of the Cloud Antivirus agent. We're distributing it for free with the idea that first we want people to know about it. We want people to use it, but very importantly, the more people that are using it, the better protected they're all going to be.


Once you've downloaded this agent, which works transparently for the end user, all the management takes place via SaaS. ... We believe that the more intelligence that we can pack into the agent, the better, but always respecting the needs of consumers -- that is to be very fast, to be very light, to be very transparent to them.

[Next we provide] ... a management console [Panda Managed Office Protection] that's hosted from our infrastructure, in which any admin, regardless of where they are, can manage any number of computers, regardless of where they are located.

This works by having every agent talk to this infrastructure via the Internet, and talk to other agents, which might be installed on the same network, distributing updates or other types of policies.
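The agent-to-agent distribution Franco mentions can be sketched simply: the first agent on a LAN segment that needs an update downloads it over the WAN, and its neighbors then copy it locally. This is a hypothetical illustration of the idea, not Panda's protocol; the shared dictionary stands in for whatever LAN-local transfer mechanism a real product would use.

```python
class UpdatePeer:
    """Sketch of LAN-local update sharing: the first peer to need an update
    fetches it from the cloud; later peers copy it from a LAN neighbor."""

    cloud_fetches = 0  # counts expensive WAN downloads across all peers

    def __init__(self, lan):
        self.lan = lan      # shared store per LAN segment: version -> payload
        self.version = None

    def update_to(self, version):
        if version not in self.lan:          # nobody on this LAN has it yet
            UpdatePeer.cloud_fetches += 1    # one WAN download serves the segment
            self.lan[version] = f"signatures-{version}"
        self.version = version               # install the LAN-local copy
        return self.lan[version]
```

With five peers on one segment requesting the same signature version, only one cloud download occurs; the rest is local traffic, which is what makes the model scale.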

Wainewright: To be honest, I've never really understood why people wanted to tackle Web-based malware in an on-premise model, because it just doesn't make any sense at all. The attacks are coming from the Web. The intelligence about the attacks obviously needs to be centralized in the Web. It needs to be gathering information about what's happening to clients and to instances all around the Web, and across the globe these days.

Really making sure that the protection is up-to-date with the latest intelligence and is able to react quickly to new threats as they appear means that you've got to have that managed in the center, and the central management has got to be able to update the PCs and other devices around the edge, as soon as they've got new information.

... The malware providers are already using network scale to great effect, particularly in the use of these zombie elements of malware that effectively lurk on devices around the Web, and are called into action to coordinate attacks.

You've got these malware providers using the collective intelligence of the Web, and if the good guys don't use the same arsenal, then they're just going to be left behind.

... More and more, in large enterprises, but also in smaller businesses, we're seeing people turning to outside providers for expertise and remote management, because that's the most cost effective way to get at the most up-to-date and the most proficient knowledge and capabilities that are out there.

Franco: In the current economic times, more and more resellers are looking to add more value to what they are offering. For them, margins, if they're selling hardware or software licenses, are getting tougher to get and are being reduced. So, the way for them to really seize the opportunity in this is that they can now offer remote management services without having to invest in infrastructure or in any other type of license they may need.

It's really all based on the SaaS concept. [Managed service providers] can now say to the customers, "Okay, from now on, you'll forget about having to install all this management infrastructure in-house. I'm going to remotely manage all the endpoint security for you. I'm going to give you this service-level agreement (SLA), whereby I'm going to check the status of your network two or three times a week or once a day, and if there is any problem, I can configure it remotely, or I can just spot where the problems are and I can fix them remotely."

This means that for the end user it's going to reduce the operating cost, and for the reseller it's going to increase the margins for the services they're offering. We believe that there is a clear alignment among the interests of end users and partners, and, most importantly, also from our side with the partners. We don't want to replace the channel here. What we want is to become the platform of choice for these resellers to provide these value-added services.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com.

Download the transcript. Learn more. Sponsor: Panda Security.

Tuesday, July 14, 2009

Rethinking virtualization: Why enterprises need a sustainable virtualization strategy over hodge-podge approaches

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion. Download a pdf of this transcript.

Attend a virtual web event from HP on July 28-30, "Technology You Need for Today's Economy." Register for the free event.

Enterprises today need a better way to prevent server sprawl and complexity that can impact the cost of virtualization projects. Three important considerations are instrumental for effective enterprise virtualization adoption, and they often amount to a rethinking of virtualization.

For example, one important question is, How do enterprises manage and control how network interconnections are impacted by widespread virtualization? Second, how can configuration management databases (CMDBs) help in deploying virtualized servers? And third, how can outsourcing help organizations get the most bang for their virtualization buck?

Rethinking virtualization becomes necessary to attain a sustainable enterprise virtualization strategy because virtual machines (VMs) present unique challenges.

To get to the bottom of the larger, proactive means of virtualization planning, I recently interviewed three executives from HP: Michael Kendall, worldwide Virtual Connect marketing lead; Shay Mowlem, strategic marketing lead for HP Software and Solutions; and Ryan Reed, a product manager for EDS Server Management Services.

Here are some excerpts:
Mowlem: Certainly, many companies today have recognized that consolidating their infrastructure through virtualization can reduce power consumption and space utilization, and can really maximize the value of the infrastructure that they’ve already purchased.

Just about everybody has jumped on the virtualization bandwagon, and many companies have seen tremendous gains in their development in lab environments, in managing what I would consider to be non-mission-critical production systems. But, as companies have tried to apply virtualization to their Tier 2 and Tier 1 mission-critical systems, they're discovering a whole new set of issues that, without effective management, really run counter to the cost benefits.

... For IT to realize the large-scale cost benefits of virtualization in their production environments they need to prove to the business that the service performance and the quality are not going to be lost. ... The ideal approach should include a central vantage point, from which to detect, isolate, and prevent service problems across all infrastructure elements, heterogeneous servers, spanning physical and virtual network storage, and all the subcomponents of a service.

We provide tools today that offer native discovery and dependency mapping of all infrastructure, physical and virtual, and then store that information in our central universal configuration management database (UCMDB), where we then track the make-up of a business service, all of the infrastructure that supports that service, the interdependencies that exist between the infrastructure elements, and then manage and monitor that on an ongoing basis. ... Essentially, the configuration database tracks all of the core interdependencies of the infrastructure and their configuration settings over time.
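The dependency tracking Mowlem describes can be pictured as a graph running from business services down through VMs to physical hosts. The toy sketch below (hypothetical names, not HP's UCMDB API) shows the payoff of keeping that graph: asking which services are at risk when one infrastructure element fails.

```python
class ConfigDB:
    """Toy dependency store in the spirit of a CMDB: record what each
    service or element depends on, then answer 'what is at risk if
    this element fails?'"""

    def __init__(self):
        self.deps = {}  # node -> set of nodes it directly depends on

    def add_dependency(self, node, depends_on):
        self.deps.setdefault(node, set()).add(depends_on)

    def impacted(self, failed_element):
        # Walk the graph upward: collect everything that transitively
        # depends on the failed element.
        at_risk = set()
        changed = True
        while changed:
            changed = False
            for node, below in self.deps.items():
                if node not in at_risk and (failed_element in below or below & at_risk):
                    at_risk.add(node)
                    changed = True
        return at_risk
```

For example, if the "payroll" service runs on "app_vm", which runs on "host1", then a failure of host1 flags both app_vm and payroll, which is exactly the central-vantage-point detection and isolation the excerpt argues for.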

Kendall: When you consolidate a lot of different application instances that are normally on multiple servers, and each one of those servers has a certain number of I/O connections for data and storage, and you put them all on one server, that does reduce the number of servers you have.

[It also] has the tendency to expand the number of network interface controllers (NICs) that you need, the number of connections you need, the number of cables you need, and the number of upstream switch ports that you need. ... Just because you can set up a new virtual machine or migrate virtual machines in a matter of minutes doesn't mean it's as easy in the connection space. Either you have to add additional capacity for networks and for storage, add additional host bus adapters (HBAs), or add additional NICs.

We did some basic rethinking around how to remove some of these interconnect bottlenecks. HP Virtual Connect actually can virtualize the physical connections between the server, the data network, and the storage network. Virtualizing these connections allows IT managers to set up, move, replace, or upgrade blade servers and the workloads that are on them, without having to involve the network or storage folks, and without impacting the network or storage topologies.
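The core idea behind virtualized connections is that the network-facing identity (MAC address, Fibre Channel WWN) belongs to a movable server profile rather than to a physical blade. A minimal sketch, with hypothetical names and addresses rather than Virtual Connect's actual interface:

```python
class ServerProfile:
    """Sketch of profile-based identity: the MAC/WWN live in the profile,
    not on the physical blade, so the upstream network and storage fabrics
    see no change when the profile moves to a different bay."""

    def __init__(self, name, mac, wwn):
        self.name, self.mac, self.wwn = name, mac, wwn
        self.bay = None

    def assign(self, bay):
        self.bay = bay  # the hardware changes; the identity does not
        return (self.mac, self.wwn)

profile = ServerProfile("web01", mac="02:00:00:aa:bb:01", wwn="50:01:43:80:aa:bb:01")
before = profile.assign("enclosure1-bay3")
after = profile.assign("enclosure1-bay7")  # migrate: same identity, new blade
```

Because `before` and `after` are identical, switch ports and SAN zoning configured against those addresses keep working after the move, which is why the network and storage teams don't need to be involved.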

Reed: Business services today demand higher levels of uptime and availability. A data center that fails due to a power outage or some other cause can no longer meet the uptime requirements for those types of business services. So, it's one of the first questions that a virtual infrastructure program raises to the program manager.

Does the company or the organization have the skill set necessary in-house to do large-scale virtualization in data center modernization projects? Oftentimes, they don't, and if they don't, then what is their action? What is their remedy? How are they going to resolve that skill gap?

... [And there's] a hybrid model, which would be one where virtual infrastructures and non-virtual infrastructures can be managed from either client or organization-owned data center -- or the services provider data center. There are various models to consider. A lot of the questions that lead into how to plan for this type of virtual infrastructure also lead into a conversation about how an outsourcer can be the most value-add.

Outsourcers nowadays are very skilled at providing infrastructure services to virtual server environments. That would include things like profiling, analysis planning, mapping of targets to source servers, and creating a business value for understanding how it’s going to impact the business in terms of ROI and total cost of ownership.

Choose the right partner, and they can grow with you. As your business grows and as you expand your market presence, choosing the services provider that has the capability and capacity to deliver in the areas that you want to grow makes the most sense.

The traditional outsourcing model is one where enterprises realize that the data center itself is not a strategic asset to the business anymore. So they move the infrastructure to an outsourcer data center where the services provider, the outsourcing company, can provide the best services with virtual infrastructures during the design and plan phase. ... We’ve been doing this for 45 years, and it’s really the critical piece of what we do.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion. Download a pdf of this transcript.

Attend a virtual web event from HP on July 28-30, "Technology You Need for Today's Economy." Register for the free event.