Wednesday, April 14, 2010

Private cloud models: Moving beyond static grid computing addiction

This guest post comes courtesy of Randy Clark, chief marketing officer at Platform Computing.

By Randy Clark


People don’t talk much about grid computing these days, but most application teams that require high performance from their infrastructure are actually addicted to grid computing -- whether they know it or not.

Gone are the days of requiring a massive new SMP box to get to the next level of performance. But in today’s world of tight budgets and diverse application needs, the linear scalability inherent in grid technologies becomes meaningless once no more blades are being added.

This constraint has led grid managers and solution providers to search for new ways to squeeze more capacity from their existing infrastructures, within tight capital expenditure budgets. [Disclosure: Platform Computing is a sponsor of BriefingsDirect podcasts.]

The problem is that grid infrastructures are typically static, with limited-to-no flexibility in changing the application stack parameters – such as OS, middleware, and libraries – and so resource capacity is fixed. By making grids dynamic, however, IT teams can provide a more flexible, agile infrastructure, with lower administration costs and improved service levels.

So how do you make a static grid dynamic? Can it be done in an easy-to-implement and pragmatic, gradual way, with limited impact on the application teams?

By introducing private cloud management capabilities, armed with standard host repurposing tools, any type of grid deployment can go from static to dynamic.

For example, many firms have deployed multiple grids to serve the various needs of application teams, often using grid infrastructure software from multiple vendors. Implementing a private cloud enables consolidation of all the grid infrastructures to support all the apps through a shared pool approach.

The pool then dynamically allocates resources via each grid workload manager. This provides a phased approach to creating additional capacity through improved utilization, by sharing infrastructure without impacting the application or cluster environments.
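To make that concrete, here is a minimal, hypothetical sketch of the shared-pool idea (the class, host, and grid names are invented for illustration, not any vendor's API): each grid's workload manager reports its backlog, and the pool lends idle hosts to the most starved grid, reclaiming them once queues drain.

    # Hypothetical sketch: a shared host pool lending capacity to multiple
    # grid workload managers based on their reported backlog.

    from dataclasses import dataclass, field

    @dataclass
    class GridManager:
        name: str
        pending_jobs: int                      # jobs waiting in this grid's queues
        hosts: list = field(default_factory=list)

    class SharedPool:
        def __init__(self, free_hosts):
            self.free_hosts = list(free_hosts)

        def rebalance(self, grids):
            # Lend free hosts to the grid with the largest backlog first.
            for grid in sorted(grids, key=lambda g: g.pending_jobs, reverse=True):
                while self.free_hosts and grid.pending_jobs > len(grid.hosts):
                    grid.hosts.append(self.free_hosts.pop())

        def reclaim_idle(self, grids):
            # Take back hosts from grids whose queues have drained.
            for grid in grids:
                while grid.hosts and grid.pending_jobs < len(grid.hosts):
                    self.free_hosts.append(grid.hosts.pop())

    if __name__ == "__main__":
        pool = SharedPool(free_hosts=[f"host{i}" for i in range(8)])
        grids = [GridManager("risk", pending_jobs=5), GridManager("pricing", pending_jobs=2)]
        pool.rebalance(grids)
        for g in grids:
            print(g.name, "->", g.hosts)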

The beginning of queue sprawl

Take another example. What if the grid teams have already consolidated using a single workload manager? This approach often results in “queue sprawl,” since resource pools are reserved exclusively for each application’s queues.

But by adding standard tools, such as virtual machines (VMs) and dual-boot, resources can be repurposed on demand for high-priority applications. In this case, the private cloud platform determines which application stack image should be running at any given time. This results in dynamic application stacks across the available infrastructure, such that any suitable physical machine in the cluster can be repurposed on demand for additional capacity.
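As a rough sketch of that decision (queue names, priorities, and image labels are hypothetical, not a specific product's interface), a management layer might pick the stack image for each machine like this:

    # Hypothetical sketch: decide which application-stack image each machine
    # should run, so high-priority queues can grab repurposed capacity.

    def choose_image(queues, default_image="base-linux-grid"):
        """queues: list of dicts like {"name", "image", "priority", "backlog"}."""
        starved = [q for q in queues if q["backlog"] > 0]
        if not starved:
            return default_image
        # Highest priority wins; break ties by deepest backlog.
        winner = max(starved, key=lambda q: (q["priority"], q["backlog"]))
        return winner["image"]

    def repurpose(machine, queues, reboot_into):
        target = choose_image(queues)
        if machine["image"] != target:
            # reboot_into would drive the dual-boot or VM re-imaging step.
            reboot_into(machine, target)
            machine["image"] = target
        return machine

    queues = [
        {"name": "regression", "image": "windows-hpc", "priority": 1, "backlog": 0},
        {"name": "risk-batch", "image": "rhel-grid",   "priority": 5, "backlog": 40},
    ]
    print(choose_image(queues))   # -> "rhel-grid"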

While many grid professionals consider their grid environments cloud-like already, the advent of cloud computing nonetheless helps make grid environments completely dynamic.



Once an existing grid infrastructure is made dynamic and all available capacity is put to use, grid managers can still consider other non-capital spending sources to increase performance even further.

The first step is to scavenge underutilized internal resources that are not owned by the grid team. These under-used resources can range from employee desktop PCs to VDI farms, disaster recovery infrastructure, and low-priority servers. From these, grid workloads can be launched within a VM on the "scavenged" machines and then stopped immediately when the owning application or user resumes.
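A hedged sketch of that scavenging loop, with placeholder functions for launching and stopping the VM-hosted job (the idle-detection threshold and the helper hooks are assumptions for illustration):

    # Hypothetical sketch: run a grid job inside a VM on a "scavenged" desktop
    # and preempt it the moment the machine's owner becomes active again.

    import time

    def owner_is_active(machine):
        # Placeholder: in practice this would check keyboard/mouse idle time,
        # interactive CPU load, or VDI session state reported by an agent.
        return machine.get("idle_seconds", 0) < 600

    def run_scavenged(machine, start_vm_job, stop_vm_job, poll_seconds=30):
        if owner_is_active(machine):
            return "skipped"
        handle = start_vm_job(machine)     # launch the grid workload in a VM
        while not owner_is_active(machine):
            time.sleep(poll_seconds)
        stop_vm_job(handle)                # yield immediately to the owner
        return "preempted"

In practice the stop path would checkpoint or requeue the job rather than discard it, but the shape of the loop is the point here.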

The second major step is to use these higher levels of infrastructure productivity to direct IT operating budget to external services such as Amazon EC2 and S3. A private cloud solution can centrally manage the integration with, and metering of, public cloud use (so-called hybrid models), providing additional capacity for “bursty” workloads or full application environments. And since access to the public cloud is controlled and managed by the grid team, application groups get a seamless service experience -- with higher performance for their total workloads.
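Here is a minimal, illustrative sketch of that hybrid pattern: overflow jobs burst to an external provider only when the internal pool is full, and external hours are metered per application group for chargeback. The provisioning hook is a stand-in, not a real EC2 call.

    # Hypothetical sketch: burst overflow work to a public cloud only when the
    # internal pool is saturated, and meter the external usage centrally.

    from collections import defaultdict

    class HybridScheduler:
        def __init__(self, internal_slots, provision_external):
            self.free_internal = internal_slots
            self.provision_external = provision_external   # would wrap a cloud API call
            self.metered_hours = defaultdict(float)

        def submit(self, job):
            if self.free_internal > 0:
                self.free_internal -= 1
                return ("internal", job["name"])
            # Bursty overflow: run externally and record usage per application group.
            self.provision_external(job)
            self.metered_hours[job["group"]] += job["est_hours"]
            return ("external", job["name"])

    sched = HybridScheduler(internal_slots=2, provision_external=lambda job: None)
    for j in [{"name": f"job{i}", "group": "risk", "est_hours": 1.5} for i in range(4)]:
        print(sched.submit(j))
    print(dict(sched.metered_hours))   # external usage charged back to the group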

While many grid professionals already consider their grid environments cloud-like, the advent of mature cloud computing models can help make grid environments more completely dynamic, providing new avenues for agility, service improvement and cost control.

And by squeezing more from your infrastructure before spending operating budget on external services, you can protect your investment while satisfying users’ insatiable appetite for more performance from the grid.

This guest post comes courtesy of Randy Clark, chief marketing officer at Platform Computing.


Monday, April 12, 2010

Enterprise IT plus social media plus cloud computing equals the future

Two developments last week really solidified for me the collision course between social media concepts and traditional enterprise IT. This is by no means a train wreck, but rather a productive, value-add combination that is sure to make IT departments more responsive to the needs of the businesses and the customers they mutually support.

First, IT consultancy Hinchcliffe & Co. was acquired by Dachis Group. This mashes up Dachis's "social business design" professional services offerings with Hinchcliffe's Enterprise 2.0 architecture, methods and implementations.

The merger shows that social media-enabled business activities need the full involvement of core IT, and that IT has a new and increasingly important role in designing how corporations will find, reach, connect to and service their customers, partners, suppliers -- and the various communities that surround them all.

Terms of the sale for both of the privately held firms were not disclosed, but Hinchcliffe founder Dion Hinchcliffe told me he'll be helping Dachis Group harness the efficiencies and reach of social media through Enterprise 2.0 for global 2000 corporations.

As he sees it (and I agree), the ability for IT to use rich Internet application technologies, SaaS, cloud, SOA, business intelligence, and social media-driven end-user metadata -- all leveraged via SOA-integrated, governed, and automated business processes -- is changing the nature of business. Companies now know that they can (and should) do business differently, but they don't yet know how to pull all the services and parts together to do it. The same goes for marketing execs.

Time for IT and marketing to get to know each other better. IT organizations and Enterprise 2.0 methods are increasingly aligned to integrate and leverage traditional IT strengths with the best of the web, social media, and marketing. Doing an end-run around IT for advanced marketing is a stop-gap measure; the real solution is bringing IT and web/social/marketing together.

You can't have meaningful and scalable social business strategy at global 2000 firms without the firm hand of IT, newly endowed with modern architectures and tools, on the tiller. A firm like Dion's makes that essential but so far rare connection between the IT culture and the social media marketing pioneers.

"This gets us poised for what happens next: The coming half-decade is going to be a tremendously important and exciting one in the business world as organizations look to fundamentally retool for the 21st century, an era that has quite different expectations and requirements around business and how it gets done," said Hinchcliffe.

The Dachis Group, founded by Jeff Dachis (former Razorfish CEO) in 2008 and well-funded by Austin Ventures, is growing quickly and doing considerable acquiring, including Headshift Ltd. last year. Dion will join Dachis as senior engagement manager, reporting to Peter Kim, managing director of North American operations.

Another indication of this mega mashup between technology and social media: Salesforce.com's expansion of the private beta testing of its Salesforce Chatter, a Facebook-style social networking platform for enterprises and SMBs. And now AppExchange 2, the next generation of Salesforce's enterprise app storefront, will include a "ChatterExchange" for social networking business apps.

I saw a demo of Chatter last month at Salesforce headquarters in San Francisco. It has the potential to do what Google Wave does, only better and more targeted at business functions. If I were Lotus, I'd be concerned.

From all this, I see a business world that will soon no longer begin and end its days in an email in-box or portal, but on the "wall" of a precisely filtered flow that defines the business process through a social-interactions lens, not a back-office application interface. And that wall can be easily adjusted based on the user's activities, policies, and so on. Just about anything can be added, or not.

I'm not alone in this vision, of course. Salesforce last week in a New York press conference rolled out "Cloud 2," which has enterprise apps behaving like Twitter, Facebook, or YouTube.

[Incidentally, my old Gillmor Gang cohort and founder, Steve Gillmor, today joins Salesforce.com after leaping and hopping from a rag tag bunch of podcast and blog sites. Congrats, Steve.]

Yep, social networking meets the enterprise. Kind of like chocolate and peanut butter.

Thursday, April 8, 2010

Private cloud computing nudges enterprises closer to 'IT as a service', process orientation and converged infrastructure

So-called "private cloud computing" actually consists of many maturing technologies, a variety of architectural approaches, and a slew of IT methodologies, many of which have been in development for 20 years or more.

In many ways, the current popularity of cloud computing models marks an intersection of different elements of IT development and a convergence of infrastructure categories. That makes cloud interesting, relevant, and potentially dramatic in its impact. It also makes cloud complex, in terms of attaining the intended positive results.

Yet private cloud adoption -- which I believe is just as important as "public" cloud sourcing options -- may be challenging to implement successfully at a strategic level, or even at multiple tactical levels. Cloud concepts will almost certainly enter into use in many different ways and, perhaps, uniquely for each adopting organization. So the question is how private cloud adoption can be approached intelligently, flexibly, and with a far higher chance of positive and demonstrable business benefit.

The ideas between private and public cloud are pretty similar. You want to be able to deliver and consume a service quickly over the Internet.



I recently had a chance to discuss the anticipated impact of private cloud models, and how enterprises are likely to implement them, with two HP executives: Rebecca Lawson, director of Worldwide Cloud Marketing at HP, and Bob Meyer, worldwide virtualization lead in HP's Technology Solutions Group. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

HP also recently delivered a virtual conference on cloud computing. Our discussion came in the lead-up to that conference.

Here are some excerpts:
Rebecca Lawson: Cloud is a word that's been overused and overhyped and we all know it. One of the reasons it's been so popular is because it has a connotation that any kind of cloud service is one that you can access easily over the Internet, by yourself, self-service, and pay for what you use. That's the standard definition of a cloud service.

The ideas between private and public cloud are pretty similar. You want to be able to deliver and consume a service quickly over the Internet. How they're implemented, of course, is quite different. A typical enterprise IT organization has to support different types of applications and workloads, and in the public cloud, most of the providers are pretty specialized in their requirements.

There are lots of different ways of creating, buying, or utilizing different kinds of technology-enabled services. They might be hosted. They might be cloud services. They might be mainframe-based services. They might be homegrown applications. Step one, when you think about private cloud, is to think about, "What services do I need to deliver, how should I deliver them, and how can I make sure that my consumers can have easy access to them when they need them?"

Bob Meyer: Traditionally, what IT has done is delivered built-to-order services. Somebody from a line of business comes to you and says that they need this specific application. Or, somebody in the test environment says that they need a test bed. As the IT supplier internal to the company, it's your job to get together the storage, the server, the network, the apps, and the data. You do all the plumbing yourself and provide that for that specific service.

In the private cloud or public cloud conversation, you will use an IT provider who will likely be providing a mix of services from this point out -- built-to-order, private cloud, public cloud, managed services.

The job is to decide what's best for your organization from that mixed bag of services. Which services are right for which delivery model? Which ones make most sense for the business? So, the built-to-order will become less popular, as cloud becomes more prevalent, we believe, but they will certainly co-exist for quite a while.
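As a toy illustration of that sorting exercise (the attributes and rules below are invented for the example, not HP's methodology), one could score each service in the portfolio against a delivery model:

    # Hypothetical sketch: a crude scoring pass over a service portfolio to
    # suggest a delivery model; real decisions would weigh cost, risk, and
    # compliance far more carefully.

    def suggest_model(service):
        if service["data_sensitivity"] == "high" and service["custom_stack"]:
            return "built-to-order"
        if service["data_sensitivity"] == "high":
            return "private cloud"
        if service["demand"] == "bursty":
            return "public cloud"
        return "managed service"

    portfolio = [
        {"name": "trading-risk", "data_sensitivity": "high", "custom_stack": True,  "demand": "steady"},
        {"name": "test-beds",    "data_sensitivity": "low",  "custom_stack": False, "demand": "bursty"},
        {"name": "payroll",      "data_sensitivity": "high", "custom_stack": False, "demand": "steady"},
    ]
    for svc in portfolio:
        print(svc["name"], "->", suggest_model(svc))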

Nobody can afford to rip and replace these days.



Lawson: Nobody can afford to rip and replace these days, and we don't think that's really necessary. What's necessary is a shift in how you think about things. Think about all the pools of equipment you have. You've got network stuff, server, storage, people, and processes. They tend to be fairly siloed and pretty complex, because you're supporting so many services and so many apps.

In this day and age, you have to get very direct with what technology-enabled services you provide and why, and what's the most efficient means of doing so. One of the great things about the cloud is that it has allowed the whole universe of service providers to expand and specialize.

Companies that are seizing this opportunity and saying, "We're going to take advantage of technology and use it in a proactive way to help build our organization," are doing so in a very aggressive way right now, because they have more choices and can afford to pick the right service to get a certain outcome out of it.

What you want to achieve

A lot of it depends on what you want to achieve. If what you're going for is to create an environment where every service IT delivers can be easily consumed by people in the lines of business through a service catalog, there are two ways to approach it. One is from the bottom-up, from your infrastructure, your network, your compute, your storage. You need to set yourself up so your services can be sharable.

That means that instead of having dedicated infrastructure components for each application or service, you pool and converge those elements, so that anytime you want to instantiate a service, you can make it easily provisioned and you can make it sharable. That's the bottom-up approach, which is valid and required.

The top-down approach is to say, "How can we make our services consumable?" That means there's a consumer who's a business person, maybe a salesperson, people in accounting, or what have you. They're your consumers.

They want to be able to come into a menu or a portal and order something, just as they'd order something at Starbucks, where they say, "I want this. Show me what my service levels are. Show me what the options are and what the costs are." Press the button, and it automatically goes out, gets the approval, does the provisioning, and you're ready to go.
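A minimal sketch of that "press the button" flow, with a two-item catalog and trivial stand-ins for the approval and provisioning steps (all item names, prices, and rules are hypothetical):

    # Hypothetical sketch of a catalog order passing through approval and
    # automated provisioning; not a particular product's API.

    CATALOG = {
        "dev-vm-small": {"cost_per_month": 40,  "needs_approval": False},
        "analytics-db": {"cost_per_month": 900, "needs_approval": True},
    }

    def order(item, requester, approve, provision):
        offering = CATALOG[item]
        if offering["needs_approval"] and not approve(requester, item, offering["cost_per_month"]):
            return "rejected"
        provision(item, requester)        # automated build-out of the service
        return "provisioned"

    # Example wiring with trivial stand-ins for approval and provisioning.
    result = order(
        "analytics-db", "sales-ops",
        approve=lambda who, what, cost: cost < 1000,
        provision=lambda what, who: print(f"provisioning {what} for {who}"),
    )
    print(result)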

The catalog becomes that linchpin. It's almost a conversation device.



You want to be able to do that from the top-down. That's not just the automation of it, but also the cultural shift. IT and people in the lines of business have to come together, sit at a table, and say, "What will be rendered in our service catalog? What are the things that you need to accomplish? Based on that, we're going to offer these services in our catalog."

The catalog becomes that linchpin. It's almost a conversation device. It forces IT and the lines of business to align themselves around a series of services and that becomes it. That's how IT establishes itself as a service provider. What I call the litmus test is having a service catalog that defines what people can use and, by inference, what they can’t be using.

A lot of companies -- and our own company, HP, is an example -- have certain policies about what can and can't be used, based on security, corporate policies, or what have you. An implication of moving in this direction is having the right control and governance around the technology services that get used and by whom they get used. Security around certain data access, identity control, and things like that, all come into play with this.

Meyer: Building a private cloud becomes another way you look at providing the best quality services to the business at the lowest cost.

So, if you look at all the things that you're mandated to provide to the business, you now have another option that says, "Is this a better way for me to be providing these services to the business? Do I drive out risk? Do I drive out cost? Do I drive up agility?" The more choices you have on the back end, if you take that longer-term approach and look at private cloud in that context, it really does help you make smarter decisions and set up a more agile business.

Lawson: The real key there is to think not so much about whether it's going to cost us or save us money, but rather: wouldn't it be great if, for every service, you knew how much money that service helped you make, how much revenue came in the door, or how much money that service helped you save?

Unrealistic metric

In a perfect state, you would know that for every service. Of course, that's unrealistic, but for a vast majority of the services that one offers, there should be a very distinctive value metric set up against that. Usually, that value metric out in the commercial world is that you've paid money for it.

Will you save money by establishing a private cloud? Well, yeah, you should. That should be pretty obvious. There should be some savings, if you're doing it right. If you've gone through a pretty structured process of consolidating, virtualizing, standardizing, and automating, it certainly will.

But an even better bang for the buck is saying, "With my portfolio of services that happens to execute in a shared infrastructure environment, not only might it be really efficient, but I also know what the business result of it is."

Meyer: Imagine if all the physical components -- the servers and network connections, the storage capacity, even the powering of the data center -- were virtualized in a way that they could be treated as a pool of resources that you could carve up on demand and assign to different applications. You could automate it in a way that connects all the moving pieces to make the best use of the capacity you have, and do that in a standardized way on top of fewer standardized parts.

That's what we mean by convergence in terms of infrastructure. Going back to the point we talked about before, rather than creating dedicated built-to-order infrastructure for every technology-enabled service, infrastructure is made available from adaptive pools that can be shared by any application, optimized, and managed as a service.
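To illustrate the carve-up idea in miniature (the capacities and resource names are invented for the example), a converged pool might track compute, storage, and network as one set of books:

    # Hypothetical sketch: treat compute, storage, and network as one converged
    # pool and carve slices out of it on demand for each application.

    class ConvergedPool:
        def __init__(self, cpu_cores, storage_tb, net_gbps):
            self.capacity = {"cpu": cpu_cores, "storage": storage_tb, "net": net_gbps}
            self.allocations = {}

        def carve(self, app, cpu, storage, net):
            ask = {"cpu": cpu, "storage": storage, "net": net}
            if any(ask[k] > self.capacity[k] for k in ask):
                return False                  # not enough headroom in the pool
            for k in ask:
                self.capacity[k] -= ask[k]
            self.allocations[app] = ask
            return True

        def release(self, app):
            for k, v in self.allocations.pop(app, {}).items():
                self.capacity[k] += v

    pool = ConvergedPool(cpu_cores=256, storage_tb=100, net_gbps=40)
    print(pool.carve("web-frontend", cpu=32, storage=5, net=4))   # True
    print(pool.capacity)
    pool.release("web-frontend")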

It's a great period of opportunity for companies to really harness the various elements and the various possibilities around technology-enabled services and then put them to work.



To get to that point, we mentioned the virtualization part -- not just server virtualization, but virtualizing the connections between compute, storage, and network and making sure that they can be connected, reconnected, and disconnected on demand, as the services demand. They have to be resilient. You have to build resiliency into that converged infrastructure, from disaster recovery to things like nonstop fault tolerance.

Lawson: It's a great period of opportunity for companies to really harness the various elements and the various possibilities around technology-enabled services and then put them to work. We help companies do this in any number of ways. From the process and organizational point of view, we've got a lot of ITIL expertise, COBIT, and all kinds of governance and service management expertise within HP.

We help train organizations and we, of course, have a very large services organization, where we outsource these capabilities to enterprises across the globe. We also have a real robust software portfolio that helps companies automate practically every element of the IT function and systems management, literally from the business value of a service all the way down to the bare-metal.

So, we're able to help companies instrument everything, starting with where the money is coming from, and make sure that everything down the line -- the servers, the storage, the networks, and the information -- are all part of the equation. Of course, we offer companies different ways of consuming all of this.

We have products and services that we sell to our customers. We have ways of helping them get these capabilities through our managed services -- through the organization previously known as EDS, now called Enterprise Services -- as well as licensed products, software-as-a-service (SaaS) products, infrastructure as a service (IaaS), all kinds of stuff.

It really depends on each individual customer. We look at their situation and say, "Where are you today, where do you want to get to, and how can we optimize that experience and help you grow into a more efficient, responsive IT organization?"

Wednesday, April 7, 2010

Well-planned data center transformation effort delivers IT efficiency paybacks, green IT boost for Valero Energy

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

There's a huge drive now for improved enterprise data center performance. Nearly all enterprises are involved nowadays with some level of data-center transformation, either in the planning stages or in outright build-out.

We're seeing many instances where numerous data centers are being consolidated into a powerful core few, as well as completely new, so-called green-field, data centers with modern design and facilities coming online. The heightened activity runs the gamut from retrofitting and designing new data centers to the building and occupying of them.

The latest definition of data center is focused on being what's called fit-for-purpose: using best practices and assessments of existing assets, and correctly projecting future requirements, to get that data center just right -- productive, flexible, efficient, and well-understood and managed.

Yet these are, by no means, trivial projects. They often involve a tremendous amount of planning and affect IT, facilities, and energy planners. The payoffs are potentially huge, as we'll see, from doing data center design properly -- but the risks are also quite high, if things don't come out as planned.

This podcast examines the lifecycle of data-center design and fulfillment by exploring a successful project at Valero Energy Corp. We're here with two executives from HP and an IT leader at Valero Energy to look at proper planning, data center design and project management.

Please join me in welcoming Cliff Moore, America’s PMO Lead for Critical Facilities Consulting at HP; John Bennett, Worldwide Director of Data Center Transformation Solutions at HP, and John Vann, Vice President of Technical Infrastructure and Operations at Valero Energy Corp. The discussion is moderated by BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Bennett: If you had spoken four years ago and dared to suggest that energy, power, cooling, facilities, and buildings were going to be a dominant topic with CIOs, you would have been laughed at. Yet, that's definitely the case today, and it goes back to the point about IT being modern and efficient.

Data-center transformation, as we've spoken about before, really is about not only significantly reducing cost to an organization -- not only helping them shift their spending away from management and maintenance and into business projects and priorities -- but also helping them address the rising cost of energy, the rising consumption of energy and the mandate to be green or sustainable.

Data-center transformation tries to take a step back, assess the data center strategy and the infrastructure strategy that's appropriate for a business, and then figure how to get from here to there. How do you go from where you are today to where you need to be?

You have organizations that discover that the data centers they have aren't capable of meeting their future needs. ... All of a sudden, you discover that you're bursting at the seams. ... [You] have to support business growth by addressing infrastructure strategies, but probably also by addressing facilities. That's where facilities really come into the equation and have become a top-of-mind issue for CIOs and IT executives around the world.

You'll need a strong business case, because you're going to have to justify it financially. You're going to have to justify it as an opportunity cost. You're going to have to justify it in terms of the returns on investment (ROIs) expected in the business, if they make choices about how to manage and source funds as well.

Growth modeling

One of the things that's different today than even just 10 years ago is that the power and networking infrastructure available around the world is so phenomenal, there is no need to locate data centers close to corporate headquarters.

You may choose to do it, but you now have the option to locate data centers in places like Iceland, because you might be attracted to the natural cooling of that environment. It's a good time [for data center transformation] from the viewpoint of land being cheap, and it might also be a good time in terms of business capital.

Moore: The majority of the existing data centers out there today were built 10 to 15 years ago, when power requirements and densities were a lot lower.

People are simply running out of power in their data centers. The facilities that were built 5, 10, or 15 years ago just do not support the levels of density in power and cooling that clients are asking for going into the future, specifically for blades and higher levels of virtualization.

Some data centers we see out there use the equivalent of half of a nuclear power plant to run. It's very expensive.

It's also estimated that, at today's energy cost, the cost of running a server from an energy perspective is going to exceed the cost of actually buying the server. We're also finding that many customers have done no growth modeling whatsoever regarding their space, power, and cooling requirements for the next 5, 10, or 15 years -- and that's critical.
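A quick back-of-the-envelope check of that energy claim, using assumed figures (the wattage, PUE, energy price, and purchase price below are illustrative, not from the discussion):

    # Assumed inputs for a rough lifetime energy estimate.
    watts = 400              # average draw of a commodity server (assumption)
    pue = 2.0                # facility overhead for cooling and power distribution (assumption)
    price_per_kwh = 0.10     # energy price in dollars per kWh (assumption)
    years = 4                # assumed service life
    server_price = 3000      # assumed purchase price in dollars

    kwh = watts / 1000 * 24 * 365 * years * pue
    energy_cost = kwh * price_per_kwh
    print(f"lifetime energy cost: ${energy_cost:,.0f}")   # about $2,800 with these inputs
    print(f"purchase price:       ${server_price:,}")

Under these assumptions the two figures are already comparable; at higher densities or energy prices, the energy side wins.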

When a customer is looking to spend $20 million, $50 million, or sometimes well over $100 million on a new facility, you’ve got to make sure that it fits within the strategic plan for the business. That's exactly what boards of directors are looking for before they will commit to spending that kind of money.

We’ve got to find out first what they need -- what their space, power, and cooling requirements are. Then, based on the criticality of their systems and applications, we quickly determine what level of availability is required as well.

This determines the Uptime Institute Tier Level for the facility. Then, we go about helping the client strategize on exactly what kinds of facilities will meet those needs, while also meeting the needs of the business that come down from the board. ... We help them collaboratively develop that strategy for the next 10 to 15 years of the data center's future.
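As a hedged illustration of that mapping, using the availability figures commonly cited for the Uptime Institute tiers (the Institute's own criteria, not this sketch, are what actually govern a design):

    # Commonly cited availability figures per tier; treat them as illustrative.
    TIERS = [
        ("Tier I",   0.99671),
        ("Tier II",  0.99741),
        ("Tier III", 0.99982),
        ("Tier IV",  0.99995),
    ]

    def tier_for(required_availability):
        # Return the lowest tier whose cited availability meets the requirement.
        for name, availability in TIERS:
            if availability >= required_availability:
                return name
        return "requirement exceeds Tier IV; needs multi-site or other measures"

    print(tier_for(0.999))    # -> Tier III
    print(tier_for(0.9999))   # -> Tier IV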

One of the things we do, as part of the strategic plan, is help the client determine the best locations for their data centers based on the efficiency in gathering free cooling, for instance, from the environment.

One of the things that Valero is accomplishing is lower energy costs, as a result of building its own data center with a strategic view.

Vann: Valero is a Fortune 500 company in San Antonio, Texas, and we're the largest independent refiner in North America. We produce fuel and other products from 15 refineries, and we have 10 ethanol plants.

We market products in 44 states with a large distribution network. We're also into alternative fuels with renewables and are one of the largest ethanol producers. We have a wind farm up in northern Texas, around Amarillo, that generates enough power to fuel our McKee refinery.

So what drove us to build? We started looking at building in 2005. Valero grew through acquisitions. Our data center, as Cliff and John have mentioned, was no different than others. We began to run into power, space, and cooling issues.

Even though we were doing a lot of virtualization, we still couldn't keep up with the growth. We looked at remodeling and also expanding, but the disruption and risk to the business was just too great. So, we decided it was best to begin to look for another location.

Our existing data center is on the headquarters campus, which is not the best place for a data center, because it's inside one of our office complexes. Therefore, we have water and other potentially disruptive issues close to the data center -- and that was concerning, given where the data center is located.

[The existing facility] is about seven years old and had been remodeled once. You have to realize Valero was in a growth mode and acquiring refineries. We now have 15 refineries. We were consolidating quite a bit of equipment and applications back into San Antonio, and we just outgrew it.

We were having a hard time keeping it redundant and keeping it cool. It was built with one foot of raised floor and, with all the mechanical equipment inside the data center, we lost square footage.

We began to look for alternative places. We also were really fortunate in the timing of our data center review. HP was just beginning its build of the six big facilities that it ended up building or remodeling, so we were able to get good HP internal expertise to help us as we began the design and build decisions for our data center.

The problem with collocation back in those days of 2006, 2007, and 2008, was that there was a premium for space.



So, we really were fortunate to have experts give us some advice and counsel. We did look at collocation. We also looked at other buildings, and we even looked at building another data center on our campus.

As we did our economics, it was just better for us to be able to build our own facility. We were able to find land northwest of San Antonio, where several data centers have been built. We began our own process of design and build for 20,000 square feet of raised floor and began our consolidation process.

Power and cooling are just becoming an enormous problem, and most of this is because virtualization, blades, and other technologies that you put in a data center just run a little hotter and take up extra power. It's pretty complex to balance your data center with cooling and power, along with UPS, generators, and things like that. It just becomes really complex. So, building a new data center really put us in the forefront.

We had a joint team of HP and the Valero Program Management Office. It went really well the way that was managed. We had design teams. We had people from networking architecture, networking strategy and server and storage, from both HP and Valero, and that went really well. Our construction went well. Fortunately, we didn’t have any bad weather or anything to slow us down; we were right on time and on budget.

Probably the most complex was the migration, and we had special migration plans. We got help from the migration team at HP. That was successful, but it took a lot of extra work.

We'd probably put more project managers on managing the project, rather than using technical people to manage it. Technical folks are really good at putting the technology in place, but they really struggle at putting good, solid plans in place. But overall, I'd just say that migration is probably the most complex part.

Bennett: Modernizing your infrastructure brings energy benefits in its own right, and it enhances the benefits of your virtualization and consolidation activities.

We certainly recommend that people take a look at doing these things. If you do some of these things, while you're doing the data center design and build, it can actually make your migration experience easier. You can host your new systems in the new data center and be moving software and processes, as opposed to having to stage and move servers and storage. It's a great opportunity.

It's a great chance to start off with a clean networking architecture, which also helps both with continuity and availability of services, as well as cost.



It can be a big step forward in terms of standardizing your IT environment, which is recommended by many industry analysts now in terms of preparing for automation or to reduce management and maintenance cost. You can go further and bring in application modernization and rationalization to take a hard look at your apps portfolio. So, you can really get these combined benefits and advantages that come from doing this.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.


Governance grows more integral to managing cloud computing security risks, says IT practitioner survey

Most enterprises lack three essential ingredients to ensure that sensitive information stored with cloud computing hosts remains secure: procedures, policies, and tools. So says a joint survey, “Information Governance in the Cloud: A Study of IT Practitioners,” from Symantec Corp. and the Ponemon Institute.

“Cloud computing holds a great deal of promise as a tool for providing many essential business services, but our study reveals a disturbing lack of concern for the security of sensitive corporate and personal information as companies rush to join in on the trend,” said Dr. Larry Ponemon, chairman and founder of the Ponemon Institute.

Where is cloud security training?

Despite the ongoing clamor about cloud security and the anticipated growth of cloud computing, a meager 27 percent of those surveyed said their organizations have developed procedures for approving cloud applications that use sensitive or confidential information. Other surprising statistics from the study include:
  • Only 20% of information security teams are regularly involved in the decision-making process

  • 25% of information security teams aren’t involved at all

  • Only 30% evaluate cloud computing vendors before deploying their products

  • Only 23% require proof of security compliance

  • A full 75% believe cloud computing migration occurs in a less-than-ideal manner

  • Only 19% provide data security training that discusses cloud applications
Focusing on information governance

IT vendors and suppliers, including the survey sponsor, Symantec, are lining up to help fill the evident gaps in enterprise cloud security tools, standards, best practices and culture adaptation. Symantec is making several recommendations for beefing up cloud security, beginning with ensuring that policies and procedures clearly state the importance of protecting sensitive information stored in the cloud.

“There needs to be a healthy, open governance discussion around data and what should be placed into the cloud,” says Justin Somaini, Chief Information Security Officer at Symantec. “Data classification standards can help with a discussion that’s wrapped around compliance as well as security impacts. Beyond that, it’s how to facilitate business in the cloud securely. This cuts across all business units.”

Symantec also recommends organizations adopt an information governance approach that includes tools and procedures for classifying information and understanding risk so that policies can be put in place that specify which cloud-based services and applications are appropriate and which are not.
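A small, hypothetical sketch of how a classification-driven placement policy might look (the classifications, targets, and rules are examples only, not Symantec's guidance):

    # Hypothetical sketch: a data-classification check deciding whether a given
    # dataset may be placed in a given type of service.

    POLICY = {
        "public":       {"saas", "public-iaas", "private-cloud", "on-premise"},
        "internal":     {"private-cloud", "on-premise", "approved-saas"},
        "confidential": {"private-cloud", "on-premise"},
        "regulated":    {"on-premise"},
    }

    def placement_allowed(classification, target, provider_certified=False):
        allowed = POLICY.get(classification, set())
        if target in allowed:
            return True
        # A certified provider (e.g. audited security controls) may widen the
        # policy for confidential data, per the governance discussion above.
        return classification == "confidential" and target == "approved-saas" and provider_certified

    print(placement_allowed("internal", "public-iaas"))               # False
    print(placement_allowed("confidential", "approved-saas", True))   # True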

“There’s a lot of push for quick availability of services. You don’t want to go through legacy environments that could take nine months or a year to get an application up and running,” Somaini says. “You want to get it up and running in a month or two to meet the needs and demands of consumers. Working the cloud into IT is very important from a value-add perspective, but it’s also important to make sure we keep an eye on compliance and security issues as well.”

Evaluating and Training Issues

Beyond governance, there are also cloud security issues around third parties and employee training that Symantec recommends incorporating into the discussion. Specifically, Symantec promotes evaluating the security posture of third parties before sharing confidential or sensitive information.

Prior to deploying cloud technology, companies should formally train employees on how to mitigate the security risks specific to it, to make sure sensitive and confidential information is protected, said Symantec.

The big question is: Are we getting closer to being able to offer cloud solutions with which enterprises can feel comfortable? Somaini says we’re getting close.

“It's really 'buyer-beware' from a customer perspective. Not all cloud providers are the same. Some work from the beginning in a conscious and deliberate effort to make sure their services are secure. They can provide that confidence in the form of certifications,” Somaini says. “Cloud service providers are going to have to comply and drive security into their solutions and offer that evidence. We’re getting there but we've got some ways to go.”
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.