Wednesday, June 13, 2012

Making Hadoop safe for 'clusterophobics'

This guest post comes courtesy of Tony Baer's OnStrategies blog. Tony is senior analyst at Ovum.

By Tony Baer

Hadoop remains a difficult platform for most enterprises to master. For now, skills are still hard to come by – both for data architects and engineers, and especially for data scientists. It still takes too much duct tape and baling wire to get a Hadoop cluster together. Not every enterprise is Google or Facebook, with armies of software engineers to throw at a problem. With some exceptions, most enterprises don’t deal with data on the scale of Google or Facebook either – but the bar is rising.

If 2011 was the year that the big IT data warehouse and analytic platform brand names discovered Hadoop, 2012 is the year a tooling ecosystem starts emerging to make Hadoop more consumable for the enterprise. Let’s amend that – along with tools, Hadoop must also become a first-class citizen with enterprise IT infrastructure. Hadoop won’t cross over to the enterprise if it has to be treated as some special island. That means meshing with the practices and technology approaches that enterprises are using to manage their data centers or cloud deployments – SQL, data integration, virtualization, storage strategy, and so on.

Admittedly, much of this cuts against the grain of early Hadoop deployment, which stressed open source and commodity infrastructure. Early adopters did so out of necessity: commercial software ran out of gas for Facebook when its daily data warehouse refreshes were breaking the terabyte range, not to mention that the cost of commercial licenses for such scaled-out analytic platforms wouldn’t have been trivial. Hadoop’s linear scalability leverages scale-out across commodity blades and direct-attached disk as far as the eye can see, enabling an almost purely noncommercial approach. At the time, Google’s, Yahoo’s, and Facebook’s issues were considered rather unique – most enterprises don’t run global search engines – not to mention that their businesses were built on armies of software engineers.

Something's got to give

As we’ve previously noted, something’s got to give on the skills front. Hadoop in the enterprise faces limits – the data problems are getting bigger and more complex for sure, but resources and skills are far more finite. So we envision tools and solutions addressing two areas:
  1. Products that address “clusterophobia” – organizations that seek the scalable analytics of Hadoop but lack the appetite to erect infinite data centers out in the fields or hire the necessary skillsets. Obviously, using the cloud is one option – but the questions there revolve around whether corporate policies allow maintaining data off premises and, as data stores grow, whether the cloud remains economical.
  2. The other side of the coin is consumability – tools that simplify access to and manipulation of the data.
In the run-up to this year’s Hadoop Summit, a number of tooling announcements addressing clusterophobia and consumption are pouring out.

On the fear-of-clusters side, players like Oracle, EMC Greenplum, and Teradata Aster are already offering appliances that simplify deployment of Hadoop, typically in conjunction with an Advanced SQL analytic platform. While most vendors position this as a way for Hadoop to “extend” your data warehouse – you perform exploration in Hadoop, but the serious analytics in SQL – we view appliances as more than a transitional strategy. The workloads are going to get more equitably distributed, and in the long run, we wouldn’t be surprised to see more Hadoop-only appliances, sort of like Oracle’s (for the record, they also bundle another NoSQL database).

Also addressing the same constituency are storage and virtualization – facts of life in the data center. For Hadoop to cross over to the enterprise, it, too, must get virtualization-friendly. Storage is an open question. The need for virtualization becomes even more apparent because (1) the exploratory nature of Hadoop analytics demands the ability to try out queries offline without having to disrupt or physically build a new cluster; and (2) the variable nature of Hadoop processing suggests that workloads are likely to be elastic. So we’ve been waiting for VMware to make their move. VMware – also part of EMC – has announced a pair of initiatives. First, they are working with the Apache Hadoop project to make the core pieces (HDFS and MapReduce) virtualization-aware, and separately, they are hosting their own open source project (Serengeti) for virtualizing Hadoop clusters. While Project Serengeti is not VM-specific, there’s little doubt that this will be a VMware project (we’d be shocked if the Xen folks were to buy in).

Storage follows

Where there are virtualized servers, storage often closely follows. A few months back, EMC dropped the other shoe, finally unveiling a strategy for leveraging Isilon with the Greenplum HD platform – the closest thing in NAS to the scale-out storage model popularized by Hadoop. This opens an argument over whether the scales of data in Hadoop make premium products such as Isilon unaffordable. The flip side, however, is the “open source tax”: either you hire the skills in your IT organization to manage and deploy scale-out storage, or you pay consultants to do it for you.

In the spirit of making Hadoop more consumable, we expect a lot of vibes from new players that are simplifying navigation of Hadoop and building SQL bridges. Datameer is bringing down the pricing of its uber Hadoop spreadsheet to personal and workgroup levels, courtesy of entry-level pricing from $299 to $2,999. Teradata Aster, which already offers a patented framework that translates SQL to MapReduce (there are also others out there), is now taking an early bet on the incubating Apache HCatalog metadata spec so that you can write SQL statements that go up against Hadoop. It joins approaches such as those from Hadapt, which hangs SQL tables from HDFS file nodes, and mainstream BI players such as Jaspersoft, which already provide translators that can grab reports directly from Hadoop.
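To make the SQL-bridge idea concrete, here is a minimal sketch of the kind of translation these tools automate – a SQL aggregate expressed as the map, shuffle, and reduce phases a framework would generate. The table name, columns, and sample rows are hypothetical, purely for illustration; this is not code from any of the vendors named above.

```python
# Sketch: how a SQL aggregate such as
#   SELECT region, SUM(sales) FROM orders GROUP BY region
# maps onto the MapReduce model that SQL-on-Hadoop bridges translate to.
# The 'orders' table and its fields are made-up sample data.

from collections import defaultdict

orders = [
    {"region": "east", "sales": 100},
    {"region": "west", "sales": 250},
    {"region": "east", "sales": 50},
]

def map_phase(records):
    # Emit one (key, value) pair per input row: (region, sales).
    for row in records:
        yield row["region"], row["sales"]

def shuffle(pairs):
    # Group values by key, as the framework does between the phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Aggregate each group -- here, SUM(sales).
    return {key: sum(values) for key, values in groups.items()}

result = reduce_phase(shuffle(map_phase(orders)))
print(result)  # {'east': 150, 'west': 250}
```

The appeal of products in this category is that analysts write only the SQL in the comment; the framework generates and distributes the equivalent of the phases above across the cluster.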

This doesn’t take away from the evolution of the Hadoop platform itself. Cloudera and Hortonworks are among those releasing new distributions that bundle their own mix of recent and current Apache Hadoop modules. While the Apache project has addressed the NameNode HA issue, it is still early in the game in bringing enterprise-grade manageability to MapReduce. That’s largely an academic issue, as the bulk of enterprises have yet to implement Hadoop. By the time enterprises are ready, many of the core issues should be resolved – although there will always be questions about the uptake of peripheral Hadoop projects.

What’s more important – and where the action will be – is in tools that allow enterprises to run and, more importantly, consume Hadoop. It’s a chicken-and-egg situation: enterprises won’t implement before tools are available, and vice versa.


Tuesday, June 12, 2012

Cloud-powered services deliver new revenue and core business agility for SMB travel insurance provider Seven Corners

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.

The latest BriefingsDirect cloud computing discussion centers on how small-to-medium sized business (SMB) Seven Corners, a travel insurance provider in Indiana, created and implemented an agile and revenue-generating approach to cloud services.

Seven Corners went beyond the typical efficiency and cost conservation benefits of cloud to build innovative business services that generate whole new revenue streams. A VMware-enabled cloud infrastructure allowed Seven Corners to rapidly reengineer its IT capabilities and spawn a new vision for its agility and future growth.

Here to share their story on an SMB's journey to cloud-based business development is George Reed, CIO of Seven Corners Inc., based in Carmel, Indiana. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: I often hear that culture will trump strategy. It sounds as if in your organization -- maybe because you're an SMB and you can get the full buy-in of your leadership -- you actually were able to make culture into the strategy?

Reed: Absolutely! By changing the culture and getting the departments out there to ask, "Is this stuff you're doing going to help me with this problem?" "Well, yes, it will," and then you deliver on that promise.

When you make a promise and you deliver on it, on or ahead of schedule and under budget, people begin to believe. They're willing to participate and actively suggest other possible uses of technology that maybe you didn't think of. So you end up with a great technology-business relationship, which had an immediate result for the owners, who were out looking to buy an insurance services application or rent one.

They said, "We're a very entrepreneurial company with so many different lines of business that there is nothing out there that would really work for us. We believe in your IT. Build us one." This year we rolled out an application called AXIS that is so configurable you could run any kind of insurance services through it, whether you're insuring parrots, cars, people, trucks, or whatever.

Gardner: Let's learn some more about that. One of the nice things about early successes is that you get that buy-in and the cultural adoption, but you've also set expectations for ongoing success. I suppose it's important to keep the ball rolling and to show more demonstrable benefits.

So when it came to not only repaving those cow paths – making them more efficient, cutting cost, delivering that six-month return on investment – what did you enable? What did you then move forward to in order to create new business development and therefore new revenue?

Reed: By continuing to lower IT cost. When we virtualized the desktops using VMware View, and then VMware Horizon, which makes it device-independent, it became easier for everybody to work. That had appreciable productivity improvements out in the departments.

At the same time, my apps development group began designing and building an application called AXIS. It came out of going to insurance conventions, talking to carriers, and asking, "What are the top 10 reasons you want to fire your third-party administrator today?"

Technology was always part of those top 10 answers. So we devised and developed an application that would eliminate those problems. The result is that this year, since February, we have four insurance carriers that were working with either their own stuff or a third-party administrator – big COBOL mainframe monsters that are so spaghetti-coded and heavy you can never really get out of them.

Already implemented

They see what our tool is doing and they ask these questions. "What are the specs for me to be able to connect to it?" "Well, you have to have an Internet connection and something smarter than a coffee cup." "That’s it?" "Yeah, that’s it." "Well, what’s the price for us to implement your solution?" "None. It’s already implemented. You just import your business."

The jaws drop around the table. "How will I be able to see my data?" "You’ll all get in and look at it." "You mean I don’t ask for a report?" "You can, but it’s easier if you just log in and look at your report."

They're flocking in. The biggest challenge is keeping up with the pace of the growing business and that goes back to planning for the future. I planned a storage solution and a compute solution. I can just keep adding blades and adding trays of storage without any outage at all.

Gardner: Pay as you go?

Reed: Yes, and the neat thing is that the process of closing transactions will run about $7 million in revenue a year. It will cost about $1.5 million to service that revenue. Not a bad profit base for an SMB. And it’s because we're going to come in at 45 percent less than their existing service provider, and we're going to provide services that are 100 times better.
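As a sanity check, the economics Reed quotes work out to a margin of roughly 79 percent. The revenue and cost figures below are the ones from the interview; the profit and margin are simply derived from them.

```python
# Margin implied by the figures Reed quotes for the new service line.
revenue = 7_000_000          # ~$7M annual revenue from closing transactions
cost_to_service = 1_500_000  # ~$1.5M annual cost to service that revenue

profit = revenue - cost_to_service
margin = profit / revenue
print(f"${profit:,} profit, {margin:.0%} margin")  # $5,500,000 profit, 79% margin
```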

Gardner: Before we learn more about that approach and process, perhaps you could explain for our listener’s benefit what Seven Corners is, how large you are, what you do, and just describe what you are doing as a business.

Reed: Seven Corners started in 1993 as Specialty Risk International, and as we began to grow around the globe with customers in every time zone there is, the company changed its name to Seven Corners.

It started out providing specialty travel insurance and trip cancellation insurance, then began providing third-party administrator services, general insurance services, and emergency assistance services around the globe. We have about 800 programs in five major product lines spanning hundreds of thousands of members.

The company itself is about 170-175 people. We've been enjoying double-digit growth every year. As a matter of fact, I believe that at the end of February they hit the double-digit growth goal for 2012. So we're going to exceed that as the year goes on. Technology has driven some of that growth.

Gardner: Who do you consider your primary customers? Is it travel agencies, or do you go direct to the travelers themselves, or a mixture?

Reed: About 50 percent of the business is online. You go to the website to fill out a form to figure out what you need. You buy it right then and there, collect your virtual ID card, and you're on your way.

We have customers that are high-tech companies who are sending their people all over the world. They'll buy, at the corporate level, trip cancellation, trip assistance, and trip major medical insurance.

Then, there are universities and other affinity groups. They have students traveling abroad. We have companies sending people to work in the United States. Then, we are doing benefit management and travel assistance for numerous government agencies, US Department of State, Bureau of Prisons, AmeriCorps, and the Peace Corps as well.

Gardner: So if I understand correctly, George, you're saying that you went from being a broker of services – finding insurance carrier services, then packaging and delivering them to end users – to now actually packaging insurance as a service. You're packaging the ability to conduct business online, in addition to the value-added services for insurance. Does that capture what’s happened?

Reed: It does, and providing immediate access to what any stakeholder in that insurance lifecycle needs improves the quality of the end product. It lowers the cost of the healthcare.

We're starting to get into the state Medicaid benefits management as well. We're saying, "You're spending too much." The first slide in the proposal is always, "You're spending too much on Medicaid healthcare. We're going to help you cut it down and we are going to do it right now." You get attention, when you just walk in bold as brass and say that.

With a solid, virtual, private-cloud solution, the cost of delivering technology services is just very low per member serviced. In insurance, there are only so many ways to improve profit. One is to grow business. We all know that. But two is to reduce the time and price of processing a claim, and to reduce the time and price to implement new business and collect the premium.

We’ve built an infrastructure and now an application platform that does those things. In the old system, the time to process a claim around here was about 30 minutes going through a complex travel medical claim with tons of lines. Now it’s about 15 seconds.
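The claim-processing improvement Reed describes – about 30 minutes down to about 15 seconds – works out to a 120x speedup. A quick check, using only the two times quoted above:

```python
# Speedup implied by the before/after claim-processing times quoted above.
before_seconds = 30 * 60  # ~30 minutes per complex travel medical claim, old system
after_seconds = 15        # ~15 seconds on the new platform

speedup = before_seconds / after_seconds
print(speedup)  # 120.0
```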

Gardner: This is really fascinating. It strikes me that you’ve sort of defined the future of business. Being an early adopter of technologies that make you agile and efficient means that you're not only passing along the ability to be productive in your traditional business, but you’ve moved into an adjacency that allows you to then take away from your partners and customers the processes that they can’t do as well and embed those into the services that you provide.

And, of course, you can charge them at a rate that is lower than what they were paying in the first place. You can really grow your definition of being a business within your market.

Think big

Reed: That’s correct, and you can do this in any industry. There is a talk that I’ve given a couple of times at Butler University about how you can never stop being small until you think big. You have to say, "What would it take for me to do that? Everything is on the table. What would it take?"

My boss does that to me and my direct reports as well. "What would it take for us to accomplish this thing by this time? Don’t worry about what it is. Just tell me what it would take. Let’s see if we can’t do it." That’s the philosophy that this company was built on.

By the end of the year, we're not only going to be doing all that kind of service for carriers, but we are going to stand up an instance of AXIS to be software as a service and every small third-party administrator (TPA) in the country is going to have an opportunity to buy seats at this servicing application that is easily configurable to whatever their business rules are.

Gardner: When you began this journey to transform how Seven Corners does IT, did you have a guiding principle or vision? Was there a stake in the ground that you could steer toward?

Reed: I did. I was brought in specifically to be an innovative change agent, to take them from where they were to where they wanted to get as a business. They just weren’t there at the time. My vision was to come in, stop the bleeding, pick off the low-hanging fruit to step up to the next level, and then build a strategic road map that would not only meet -- but exceed -- the needs of the business, and reach out 5-10 years beyond.

Gardner: Is there anything specifically about an SMB that you think enabled such agility? I know it’s very difficult in large companies to make such a change in short order. Do you have a certain advantage being smaller?


Authority to move

Reed: You do. If you're in a privately held SMB, your goal is to identify a problem or an opportunity, categorize what it would cost to resolve it or achieve it, and show the return on investment (ROI). If you communicate that in a passionate, effective way with the ownership and the executive group, you come out of the room with authority to move forward. That’s exactly what I did.

Gardner: And on one side of your business equation, of course, you have these consumers and customers, but you also must have quite a variety of partners, other insurance carriers, for example, medical insurance providers, and so forth. So you need to match and broker services among and between all these?

Multiple carriers

Reed: Correct. We have multiple carriers and do some of the advances around Seven Corners. We’ve got about four more carriers starting to move business our way. So you have to meet all of their needs, reporting needs, timeliness of service, and support their customers. At the same time, we've got all the individuals and groups that we're doing business with and we are doing it across five different revenue-producing lines of business.

Gardner: Let's move back to what it is that you've done at an architecture level. As you had that vision about what you needed, and as you gathered requirements in order to satisfy these business needs, what did you look for and what did you start to put in place?

Reed: The first thing I did is assess what was going on in the server room. On my first day, walking in there and looking around, I saw a bunch of oversized Dell desktops that were buffed up to be servers. There were about 140 of those in there.

I was thinking, "This is 2000-2003 technology. I'm here in 2010. This isn't going to work." It was an archaic system that was headed to failure, and that was one of the reasons they knew they had to change. They could no longer sustain either the applications or the hardware itself.

What I wanted to do was put in an infrastructure that would completely replace what was there. The company had grown to the point where there was so much transactional volume, so many thousands of people hitting the member portals. The cloud started to speak to me. I needed to be serving member portals out on a private cloud. I needed to be reaching out to the 15,000 medical providers around the world that we're talking with to get their claims without them sending paper or emails.

I looked at an integrating partner locally in the Midwest. It's called Netech. I said, "Here is my problem. I know that within four months my major servers that are backing up or providing our insurance applications are going to fail. You can't even get parts on eBay for them anymore. I need you to come back to me in a week with a recommendation on how you understand my problem, what you recommend I do about it, and what it's going to cost, wheels-on, out the door."

Gardner: Just to be clear, did you have a certain level of virtualization already in place at this point?

Reed: No, there was nothing virtual in the building. It was all physical. Netech went away and came back a week later, after looking at the needs and asking a ton of questions, as any good partner would do. They said, "Here's what we think you need to do. You need something that's expandable easily for your compute side. We recommend Cisco UCS. Here is a plan for that.

"You need storage that can provide secure multitenancy, because you've got a lot of different carriers that don't want their information shared. They want to know that it's very secured. We recommend NetApp’s FlexPod solution for that.

"And for your virtualization, hub and going to the cloud, we're seeing the best results with VMware's product."

Then, we started with VMware Enterprise, and when it became available, upgraded to vSphere 5.0.

Up and running

They came in with a price, so I knew exactly what it would cost to implement, and they said they could do it in three months. I went to the owners and said, "You're losing $100,000 of revenue a month because of this situation in your server room. You'll pay for this entire project in six months." They said, "Well, get it done." And so we launched. In about two and a half months we were up and running. Our partnership with Netech has had a dramatic impact on speed-to-production for each phase of our virtualization.
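The payback pitch Reed made to the owners implies a project cost of roughly $600,000 – a figure inferred here from the quoted $100,000 per month and the six-month payback, not one stated in the interview:

```python
# Back-of-the-envelope payback check using the figures Reed quotes.
monthly_revenue_loss = 100_000  # revenue lost per month to the failing server room
payback_months = 6              # payback period pitched to the owners

# If the project pays for itself in six months of recovered revenue,
# its cost is at most:
implied_project_cost = monthly_revenue_loss * payback_months
print(implied_project_cost)  # 600000
```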

Gardner: When you looked at creating a private-cloud fabric to support your application, were these including your internal back-office types of apps? Did you have ERP and communications infrastructure and apps that you needed to support? Clearly, you talked about portals and being able to create Web services and integrate across the business processes, all the above. Did you want to put everything in this cloud or did you segment?

Reed: I wanted to get off the old analog phone system that was there and go to a Cisco Unified Communications Manager, which is a perfect thing to drop into a virtual environment. I wanted to get everybody on the voice-over-IP (VOIP) phones. I wanted to get my call center truly managing 24×7×365, no matter where they were sitting.

I wanted to get users – customer users, partner users, and the people from Seven Corners – to where it didn't matter what they were connecting to the Internet with. They could connect to my system and see their data, and it would never leave my server, which is one of the beauties of a private cloud: the data never leaves a secure environment.

Gardner: Did you have a vision to bring all of your apps into this, or did you want to segment? Was this a crawl-walk-run approach to bringing your apps into the cloud, or was it more of a transformation – even shock therapy – doing it all at once to get it done?

Reed: The server virtualization was a shock therapy, because the infrastructure was very outdated, and any piece of it failing is a failure. It doesn’t matter which one it was.

So we took 144 servers virtual and took all the storage into the NetApp controller, achieving an immediate 50 percent de-duplication rate. And the efficiency in spinning up servers for a development group was such that we cut a ton of the manpower that had been required. Instead of 4-5 days to set up a server for them to work on a new application, it's now 4-5 minutes.

In the first three days here, inside IT and out in the business, I said, "I need a list by Friday, please, of the top five things we need to keep doing, stop doing, or start doing."

I got great input and then I picked the pain points. That's what I call the low-hanging fruit. We knocked those out the first month, just general technology support. That got everybody thinking, "Hey, IT can deliver." Originally they had a nickname for the department --"The Island of Dr. No." ... No, we can't do this, no, we can't do that.

Getting champions

We said, "Let's find a way to say 'yes,' or at least offer a different solution." When we killed some of those early problems, we ended up getting champions out of opposition. It became very easy to get the company to do business differently, and to put up with the testing, user acceptance process, and training to use different technology services.

We're running on vSphere 5.0 and have put in a vCenter Configuration Manager and Operations Manager. We're doing our virtual desktops using the power of ThinApp and VMware Horizon.

Then to make the cloud come to being we got the vCloud Director and vShield in. We're doing a lot of business with the government, and with government agencies we have to be Federal Information Security Management Act (FISMA) compliant which makes HIPAA compliance look kind of easy.

The other technology the VMware stack is living on is Cisco UCS, and it’s all being stored on NetApp FlexPod with data replication. In a few months, it will be a live mirror for both compute and data.

Mobile devices

Gardner: And how does that now set you up for perhaps moving toward the use of mobile devices? Clearly, you've got some of those interface issues resolved by going fully virtual. Is there a path to allowing choice, even bring your own device (BYOD) types of choice by your users going to new classes of devices?

Reed: We’re working on the BYOD program now. A lot of the department heads have been issued devices through our secure wireless in the building. A couple of them have iPads and a couple have Android devices. Several of us with the new Cisco phone system have the Cisco tablet that serves as both your VoIP phone station and your thin client.

To get ready to go to a meeting, I get off the phone, pull the tablet out of the docking station, and into the meeting. I have my desktop right there. I've never logged off. When I need to go home for the night, I take it home, and log in through my wireless. I have my Voice over IP handset, and I'm calling from my desk phone from anywhere in the world.

So we're already doing what I would call a pilot program to prove it out to everybody and get them used to it. Right now, our sales guys love the fact that they just pull that thing out of the docking station and go off to show a client what our software and services really are.

Gardner: That's a really impressive story, George, and you've been able to do this in just a couple of years. It’s really astonishing. Before we close out, could you provide some advice to other SMBs that have heard your story and can see the light bulbs for their own benefits going off in their heads? Do you have any advice in hindsight from your experience that you would share with them in terms of getting started?

Reed: The key is that you can’t get to where you’re going if you don’t set the vision of what you want to be able to do. To do that, you have to assess where you’re at and what the problems are.

Phase the solutions you’re going to recommend, solve big problems early, and get buy-in. When you’ve got executive buy-in, and department heads and users buying in, it’s easy to get a lot of stuff done very quickly, because people aren’t resisting the change.

Thursday, June 7, 2012

Cloud Cruiser announces availability of Cloud Cost Intelligence solution for HP CloudSystem at HP Discover 2012 Conference

Cloud Cruiser announced at HP Discover in Las Vegas this week the general release of two new cloud cost intelligence solutions for HP CloudSystem.

The new software products integrate Cloud Cruiser’s cost analytics platform with HP CloudSystem Matrix, CloudSystem Enterprise, and Cloud Service Automation to provide cost transparency, chargeback, and business intelligence (BI) analytics for provisioned resources.

The integration between the cost analytics platform and Cloud Service Automation versions 2.01 and 3.0, and CloudSystem Matrix versions 6.3, 7.0, and 7.1, delivers cost intelligence to customers based on granular, enterprise-wide resource usage and spending. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

By leveraging a centralized repository of all enterprise IT spending, customers can perform cost analysis, implement chargeback, set budgets and alerts, generate invoices and optimize their costs. Both Cloud Cruiser, an HP AllianceONE partner, and HP are conducting live demonstrations of the cost analytics platform this week at the HP Discover 2012 Conference in the Cloud Cruiser booth and the HP Cloud Zone.

The Cost Intelligence Platform is available for purchase directly from the company or through HP software partners Seamless Technologies and Pepperweed Consulting. Product information and pricing is available at www.cloudcruiser.com.


Wednesday, June 6, 2012

Data explosion and big data demand new strategies for data management, backup and recovery, say experts

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: Quest Software.


Businesses clearly need a better approach to their data recovery capabilities -- across both their physical and virtualized environments. The current landscape for data management, backup, and disaster recovery (DR) too often ignores the transition from physical to virtualized environments, and sidesteps the heightened real-time role that data now plays in the enterprise.

What's more, major trends like virtualization, big data, and calls for comprehensive and automated data management are also driving this call for change.

What's needed are next-generation, integrated, and simplified approaches to fast backup and recovery that span all essential corporate data. The solution therefore means bridging legacy and new data, scaling to handle big data, implementing automation and governance, and integrating the functions of backup protection and DR.

To share insights into why data recovery needs a new approach and how that can be accomplished, the next BriefingsDirect discussion joins two experts, John Maxwell, Vice President of Product Management for Data Protection at Quest Software, and Jerome Wendt, President and Lead Analyst of DCIG, an independent storage analyst and consulting firm. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Quest Software is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:

Gardner: Is data really a different thing than, say, five years ago in terms of how companies view it and value it?

Wendt: Absolutely. There's no doubt that companies are viewing it much more holistically. The focus used to be almost entirely on data in structured databases, or in semi-structured formats such as email. Clearly, in the last few years, we've seen a huge change, where unstructured data now is the fastest growing part of most enterprises and where even a lot of their intellectual property is stored. So I think there is a huge push to protect and mine that data.

But we're also just seeing more of a push to get to edge devices. We talk a lot about PCs and laptops, and there is more of a push to protect data in that area, but all you have to do is look around and see the growth.

When you go to any tech conference, you see iPads everywhere, and people are storing more data in the cloud. That's going to have an impact on how people and organizations manage their data and what they do with it going forward.

Gardner: Now, for more and more companies, data is the business, or at least the analytics that they derive from it.

Mission critical

Maxwell: It’s funny that you mention that, because I've been in the storage business for over 15 years. I remember just 10 years ago, when studies would ask people what percentage of their data was mission critical, it was maybe around 10 percent. That aligns with what you're talking about, the shift and the importance of data.

Recent surveys from multiple analyst groups have now shown that people categorize their mission-critical data at 50 percent. That's pretty profound, in that a company is saying half the data that we have, we can't live without, and if we did lose it, we need it back in less than an hour, or maybe in minutes or seconds.

Gardner: So how is the shift and the change in infrastructure impacting this simultaneous need for access and criticality?

Maxwell: Well, the biggest change from an infrastructure standpoint has been the impact of virtualization. This year, well over 50 percent of all the server images in the world are virtualized images, which is just phenomenal.

Quest has really been in the forefront of this shift in infrastructure. We have been, for example, backing up virtual machines (VMs) for seven years with our Quest vRanger product. We've seen that evolve from when VMs or virtual infrastructure were used more for test and development. Today, I've seen studies that show that the shops that are virtualized are running SQL Server, Microsoft Exchange, very mission-critical apps.

We have some customers at Quest that are 100 percent virtualized. These are large organizations, not just some mom and pop company. That shift to virtualization has really made companies assess how they manage it, what tools they use, and their approaches. Virtualization has a large impact on storage and how you backup, protect, and restore data.

Once you implement and have the proper tools in place, your virtual life is going to be a lot easier than your physical one from an IT infrastructure perspective. A lot of people initially moved to virtualization as a cost savings, because they had under-utilization of hardware. But one of the benefits of virtualization is the freedom, the dynamics. You can create a new VM in seconds. But then, of course, that creates things like VM sprawl, the amount of data continues to grow, and the like.

At Quest we've adapted and exploited a lot of the features that exist in virtual environments, but don't exist in physical environments. It’s actually easier to protect and recover virtual environments than it is physical, if you have tools that are exploiting the APIs and the infrastructure that exists in that virtual environment.

Significant benefits

Wendt: We talk a lot these days about having different silos of data. One application creates data that stays over here. Then, it's backed up separately. Then, another application or another group creates data back over here.

Virtualization not only means consolidation and cost savings, but it also facilitates a more holistic view into the environment and how data is managed. Organizations are finally able to get their arms around the data that they have.
Before, it was so distributed that they didn't really have a good sense of where it resided or how to even make sense of it. With virtualization, there are initial cost benefits that help bring it all together, but once it's all together, they're able to go to the next stage, and it becomes the business enabler at that point.

Gardner: The key now is to be able to manage, automate, and bring the comprehensive control and governance to this equation, not just the virtualized workloads, but also of course the data that they're creating and bringing back into business processes.


How do we move from sprawl to control and make this flip from being a complexity issue to a virtuous adoption and benefits issue?

Maxwell: Over the years, people had very manual processes. For example, when you brought a new application online or added hardware, a new server and that type of thing, someone had to ask, "Oops, did we back it up? Are we backing that up?"

One thing that’s interesting in a virtual environment is that the backup software we have at Quest will automatically see when a new VM is created and start backing it up. So it doesn't matter if you have 20 or 200 or 2,000 VMs. We're going to make sure they're protected.
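The auto-protection behavior Maxwell describes — noticing a newly created VM and folding it into the backup schedule without operator action — amounts to diffing the hypervisor's inventory against the protected set on each polling cycle. The sketch below is a hypothetical simplification, not vRanger's actual implementation; all function names and data shapes are illustrative assumptions.

```python
# Illustrative sketch: each cycle, diff the hypervisor's VM inventory
# against the set already under protection, and schedule a backup for
# anything new so it is never silently left unprotected.

def discover_new_vms(inventory, protected):
    """VMs present in the hypervisor but not yet protected."""
    return sorted(set(inventory) - set(protected))

def protection_cycle(inventory, protected, schedule_backup):
    for vm in discover_new_vms(inventory, protected):
        schedule_backup(vm)   # enqueue an initial full backup
        protected.add(vm)     # later cycles will skip it
    return protected

# Simulated cycle: two VMs were just created since the last poll.
protected = {"web-01", "db-01"}
inventory = ["web-01", "db-01", "db-02", "exch-01"]
jobs = []
protection_cycle(inventory, protected, jobs.append)
print(jobs)  # → ['db-02', 'exch-01']
```

Whether the cluster runs 20 or 2,000 VMs, the same loop keeps coverage complete.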

Where it really gets interesting is that you can protect the data a lot smarter than you can in a physical environment. I'll give you an example.

In a VMware environment, there are services that we can use to do a snapshot backup of a VM. In essence, it’s an immediate backup of all the data associated with that machine or those machines. It could be on any generic kind of hardware. You don’t need to have proprietary hardware or the more expensive software features of high-end disk arrays. That is a feature, built into the hypervisor itself, that we can exploit.

Image backup


Even the way that we move data is much more efficient, because we have a process that we pioneered at Quest called "backup once, restore many," where we create what's called an image backup. From that image backup I can restore an entire system, an individual file, or an application. But I've done that from that one path, that one very effective snapshot-based backup.
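The "backup once, restore many" idea — a single image-level backup serving whole-system, single-file, and application-level restores — can be sketched as below. The image layout and restore calls are hypothetical simplifications for illustration, not Quest's actual format.

```python
# Illustrative sketch: one image backup is a full snapshot of the VM's
# disk; different restore granularities are just different reads of it.

image_backup = {                      # one snapshot, taken once
    "vm": "exch-01",
    "files": {
        "/etc/hosts": "127.0.0.1 localhost",
        "/var/mail/alice.mbox": "From: bob\nSubject: Q3 numbers\n...",
    },
}

def restore_system(image):
    """Whole-VM restore: return every file in the image."""
    return dict(image["files"])

def restore_file(image, path):
    """Single-file restore from the same image."""
    return image["files"][path]

def restore_mailbox(image, user):
    """Application-level restore: pull one user's mail store."""
    return restore_file(image, f"/var/mail/{user}.mbox")

assert restore_file(image_backup, "/etc/hosts") == "127.0.0.1 localhost"
assert "Q3 numbers" in restore_mailbox(image_backup, "alice")
print(len(restore_system(image_backup)))  # → 2
```

The point is that all three restore paths read from the one snapshot; nothing is backed up more than once.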

If you look at physical environments, there is the concept of doing physical machine backups, file-level backups, and specific application backups, and for some systems, you even have to employ hardware-based snapshots or actually bring the applications down.

So from that perspective, we've gotten much more sophisticated in virtual environments. Again, we're moving data by not impacting the applications themselves and not impacting the VMs. The way we move data is very fast and is very effective.

Wendt: One of the things we are really seeing is just a lot more intelligence going into this backup software. They're moving well beyond just "doing backups" anymore. There's much more awareness of what data is included in these data repositories and how they're searched.


And also with more integration with platforms like VMware vSphere Operations, administrators can centrally manage backups, monitor backup jobs, and do recoveries. One person can do so much more than they could even a few years ago.

And really the expectation of organizations is evolving; they don't necessarily want separate backup admins and system admins anymore. They want one team that manages their virtual infrastructure. That all rolls up to your point, where it makes it easy to govern, manage, and execute on corporate objectives.

Gardner: Is this really a case, John Maxwell, where we are getting more and paying less?

Maxwell: Absolutely. Just as the cost per gigabyte has gone down over the past decade, the effectiveness of the software and what it can do is way beyond what we had 10 years ago.

Simplified process

Today, in a virtual environment, we can provide a solution that simplifies the process, where one person can ensure that hundreds of VMs are protected. They can literally right-click and restore a VM, a file, a directory, or an application.

One of the focuses we have had at Quest, as I alluded to earlier, is that there are a lot of mission-critical apps running on these machines. Jerome talked about email. A lot of people consider email one of their most mission-critical applications. And the person responsible for protecting the environment that Microsoft Exchange is running on may not be an Exchange administrator, but maybe they're tasked with being able to recover Exchange.

That’s why we've developed technologies that allow you to go out there and, from that one image backup, restore an email conversation or an attachment from someone’s mailbox. That person doesn’t have to be a guru with Exchange. Our job is to figure out, behind the scenes, how to do this and make it available via a couple of mouse-clicks.

Wendt: As John was speaking, I was going to comment. I spoke to a Quest customer just a few weeks ago. He clearly had some very specific technical skills, but he's responsible for a lot of things, a lot of different functions -- server admin, storage admin, backup admin.


I think a lot of individuals can relate to this guy. I know I certainly did, because that was my role for many years, when I was an administrator in the police department. You have to try to juggle everything, while you're trying to do your job, with backup just being one of those tasks.

In his particular case, he was called upon to do a recovery, and, to John’s point, it was an Exchange recovery. He never had any special training in Exchange recovery, but it just happened that he had Quest Software in place. He was able to use its FastRecover product to recover his Microsoft Exchange Server and had it back up and going in a few hours.

What was really amazing, in this particular case, is that he was traveling at the time it happened. So he had to talk his manager through the process, and was able to get it up and going. Once he had the system up, he was able to log on and get it going fairly quickly.

That just illustrates how much the world has changed and how much backup software and these products have evolved to the point where you need to understand your environment, probably more than you need to understand the product, and just find the right product for your environment. In this case, this individual clearly accomplished that.

Gardner: How do organizations approach this being in a hybrid sort of a model, between physical and virtual, and recognizing that different apps have different criticality for their data, and that might change?

Maxwell: Well, there are two points. One, we can't have a bunch of niche tools, one for virtual, one for physical, and the like. That's why, with our vRanger product, which has been the market leader in virtual data protection for the past seven years, we're coming out with physical support in that product in the fall of 2012. Those customers are saying, "I want one product that handles that non-virtualized data."

The second part gets down to what percentage of your data is mission-critical and how complex it is, meaning is it email, or a database, or just a flat file, and then asking if these different types of data have specific service-level agreements (SLAs), and if you have products that can deliver on those SLAs.

That's why at Quest, we're really promoting a holistic approach to data protection that spans replication, continuous data protection, and more traditional backup, but backup mainly based on snapshots.

Then, that can map to the service level, to your business requirements. I just saw some data from an industry analyst that showed the replication software market is basically the same size now as the backup software market. That shows the desire for people to have that kind of real-time failover for some applications, and you get that with replication.


When it comes to the example that Jerome gave with that customer, the Quest product that we're using is NetVault FastRecover, which is a continuous data protection product. It backs up everything in real-time. So you can go back to any point in time.

It’s almost like a time machine, when it comes to putting back that mailbox, the SQL database, or Oracle database. Yet, it's masking a lot of the complexity. So the person restoring it may not be a DBA. They're going to be that jack of all trades who's responsible for the storage and maybe backup overall.
Gardner: John, in talking with Quest folks, I've heard them refer to a next-generation platform or approach, or a whole greater than the sum of the parts. How do you define next generation when it comes to data recovery in your view of the world?

New benefits

Maxwell: Well, without hyperbole, for us, our next generation is a new platform that we call NetVault Extended Architecture (XA), and this is a way to provide several benefits to our customers.

One is that with NetVault Extended Architecture we now are delivering a single user experience across products. Whether it's an SMB or an enterprise customer using one of our point solutions for application or database recovery, we provide that consistent look and feel, that consistent approach. We have some customers that use multiple products. So with this, they now have a single pane of glass.

Also, it's important to offer a consistent means for administering and managing the backup and recovery process, because as we've been talking, why should a person have to have multiple skill sets? If you have one view, one console into data protection, that’s going to make your life a lot easier than have to learn a bunch of other types of solutions.

That’s the immediate benefit that I think people see. What NetVault Extended Architecture encompasses under the covers, though, is a really different approach in the industry, which is modularization of a lot of the components to backup and recovery and making them plug and play.

Let me give you an example. With the increase in virtualization a lot of people just equate virtualization with VMware. Well, we've got Hyper-V. We have initiatives from Red Hat. We have Xen, Oracle, and others. Jerome, I'm kind of curious about your views, but just as we saw in the 90s and in the 00s, with people having multiple platforms, whether it's Windows and Linux or Windows and Linux and, as you said, AIX, I believe we are going to start seeing multiple hypervisors.


So one of the approaches that NetVault Extended Architecture is going to bring us is a capability to offer a consistent approach to multiple hypervisors, meaning it could be a combination of VMware and Microsoft Hyper-V and maybe even KVM from Red Hat.

But, again, the administrator, the person who is managing the backup and recovery, doesn’t have to know any one of those platforms. That’s all hidden from them. In fact, if they want to restore data from one of those hypervisors, say restore a VMware VMDK, which is their volume format in VMware speak, into what's called a VHD in Hyper-V, they could do that.
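Cross-hypervisor restore, as described above — taking a backup captured as a VMware VMDK and restoring it as a Hyper-V VHD — reduces to a disk-format conversion step hidden inside the restore path. The registry sketch below is illustrative only: the format names are real, but the conversion functions are stubs (a real implementation rewrites the disk image bytes, as tools such as qemu-img do).

```python
# Illustrative sketch: route a restore through a (source, target) format
# converter so the admin never deals with VMDK vs. VHD directly.
CONVERTERS = {}

def converter(src, dst):
    """Register a disk-format conversion step."""
    def wrap(fn):
        CONVERTERS[(src, dst)] = fn
        return fn
    return wrap

@converter("vmdk", "vhd")          # VMware -> Hyper-V
def vmdk_to_vhd(disk):
    return {**disk, "format": "vhd"}   # stub: real code rewrites the bytes

@converter("vhd", "vmdk")          # Hyper-V -> VMware
def vhd_to_vmdk(disk):
    return {**disk, "format": "vmdk"}  # stub: real code rewrites the bytes

def restore(disk, target_format):
    """Convert only when the target hypervisor needs a different format."""
    if disk["format"] == target_format:
        return disk
    return CONVERTERS[(disk["format"], target_format)](disk)

backup = {"vm": "web-01", "format": "vmdk", "blocks": b"..."}
restored = restore(backup, "vhd")
print(restored["format"])  # → vhd
```

Adding support for another hypervisor then means registering one more converter pair, without touching the restore workflow the administrator sees.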

That, to me, is really exciting, because this is exploiting these new platforms and environments and providing tools that simplify the process. But that’s going to be one of the many benefits of our new NetVault Extended Architecture next generation, where we can provide that singular experience for our customer base to have a faster go-to-market, faster time to market, with new solutions, and be able to deliver in a modular approach.

Customers can choose what they need, whether they're an SMB customer, or one of the largest customers that we have with hundreds of petabytes or exabytes of data.

Wendt: DCIG has a lot of conversations with managed-service providers, and you'd be surprised, but there are actually very few that are VMware shops. I find the vast majority are actually either Microsoft Hyper-V or using Red Hat Linux as their platform, because they're looking for a cost-effective way to deliver virtualization in their environments.

We've seen this huge growth in replication, and people want to implement disaster recovery plans or business continuity planning. I think this ability to recover across different hypervisors is going to become absolutely critical, maybe not today or tomorrow, but I would say in the next few years. People are going to say, "Okay, now that we've got our environment virtualized, we can recover locally, but how about recovering into the cloud or with a cloud service provider? What options do we have there?"

More choice

If they're using VMware and their provider isn't, they're almost forced to find a provider that is, whereas your platform gives them much more choice among managed service providers that are using platforms other than VMware. It sounds like Quest will really give them the ability to back up VMware hypervisors and then potentially recover into Red Hat or Microsoft Hyper-V at MSPs. So that could be a really exciting development for Quest in that area.

Gardner: Jerome, do you have any use cases or examples that you're familiar with that illustrate this concept of next-generation and lifecycle approach to data recovery that we have been discussing?

Wendt: Well, it’s not an example, just a general trend I am seeing in products, because most of DCIG’s focus is just on analyzing the products themselves and comparing, traversing, and identifying general broader trends within those products.


There are two things we're seeing. One, we're struggling to keep calling backup software "backup software," because it does so much more than that. You mentioned earlier that there's so much more intelligence in these products. We call it backup software, because that's the context in which everyone understands it, but going forward, the industry is probably going to have to find a better way to refer to these products. What Quest offers is a whole lot more than just running a backup.

And then second, people, as they view backup and how they manage their infrastructure, really have to go from this reactive, "Okay, today I am going to have to troubleshoot 15 backup jobs that failed overnight." Those days are over. And if they're not over, you need to be looking for new products that will get you over that hump, because you should no longer be troubleshooting failed backup jobs.

You should really be looking more toward how you can make sure all of your environment is protected and recoverable, and then moving to the next phase of disaster recovery and business continuity planning. The products are there. They're mature, and people should be moving down that path.

Crystal ball

Gardner: John, when we look into the crystal ball, even not that far out, it just seems that in order to manage what you need to do as a business, getting good control over your data, being able to ensure that it’s going to be available anytime, anywhere, regardless of the circumstances is, again, not a luxury, it’s not a nice to have. It’s really just going to support the viability of the business.

Maxwell: Absolutely. And what’s going to make it even more complex is going to be the cloud, because what's your control, as a business, over data that is hosted some place else?

I know that at Quest we use seven SaaS-based applications from various vendors, but what’s our guarantee that our data is protected there? I can tell you that a lot of these SaaS-based companies or hosting companies may offer an environment that says, "We're always up," or "We have a higher level of availability," but most recovery is based on logical corruption of data.

As I said, with some of these smaller vendors, you wonder what happens if they go out of business. I have heard stories of small service providers closing their doors, and you say, "But my data is there."

So the cloud is really exciting, in that we're looking at how we're going to protect assets that may be off-premise to your environment and how we can ensure that you can recover that data, in case that provider is not available.

Then there's something that Jerome touched upon, which is that the cloud is going to offer so many opportunities. The one that I am most excited about is using the cloud for failover. That's really getting beyond recovery into business continuity.


And something that has only been afforded by the largest enterprises, Global 1000-type customers, is the ability to have a standby center, from a SunGard or someone like that, which is very costly and not within reach of most customers. But with virtualization and with the cloud, there's a concept that I think we're going to see become very mainstream over the next five years, which is failover recovery to the cloud. That's something that's going to be within reach of even SMB customers, and that's really more of a business continuity message.

So now we're stepping up even more. We're now saying, "Not only can we recover your data within seconds, but we can get your business back up and running, from an IT perspective, faster than you probably ever presumed that you could."
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: Quest Software.
You may also be interested in:

Tuesday, June 5, 2012

Corporate data, supply chains remain vulnerable to cyber crime attacks, says Open Group conference speaker

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: The Open Group.

This BriefingsDirect thought leadership interview comes in conjunction with The Open Group Conference in Washington, D.C., beginning July 16. The conference will focus on how security impacts the enterprise architecture, enterprise transformation, and global supply chain activities in organizations, both large and small.

We're now joined on the security front with one of the main speakers at the conference, Joel Brenner, the author of "America the Vulnerable: Inside the New Threat Matrix of Digital Espionage, Crime, and Warfare."

Joel is a former Senior Counsel at the National Security Agency (NSA), where he advised on legal and policy issues relating to network security. Mr. Brenner currently practices law in Washington at Cooley LLP, specializing in cyber security. Registration remains open for The Open Group Conference in Washington, DC beginning July 16.

Previously, he served as the National Counterintelligence Executive in the Office of the Director of National Intelligence, and as the NSA’s Inspector General. He is a graduate of University of Wisconsin–Madison, the London School of Economics, and Harvard Law School. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:

Gardner: Your book came out last September and it affirmed this notion that the United States, or at least open Western cultures and societies, are particularly vulnerable to being infiltrated, if you will, by cybercrime, espionage, and dirty corporate tricks.

Why are we particularly vulnerable, when we should be most adept at using cyber activities to our advantage?

Brenner: Let’s make a distinction here between the political-military espionage that's gone on since pre-biblical times and the economic espionage that’s going on now and, in many cases, has nothing at all to do with military, defense, or political issues.

The other stuff has been going on forever, but what we've seen in the last 15 or so years is a relentless espionage attack on private companies for reasons having nothing to do with political-military affairs or defense.

So the countries that are adept at cyber, but whose economies are relatively undeveloped compared to ours, are at a big advantage, because they're not very lucrative targets for this kind of thing, and we are. Russia, for example, is paradoxical. While it has one of the most educated populations in the world and is deeply cultured, it has never been able to produce a commercially viable computer chip.

Not entrepreneurial


We’re not going to Russia to steal advanced technology. We’re not going to China to steal advanced technology. They're good at engineering and they’re good at production, but so far, they have not been good at making themselves into an entrepreneurial culture.

That’s one just very cynical reason why we don't do economic espionage against the people who are mainly attacking us, which are China, Russia, and Iran. I say attack in the espionage sense.

The other reason is that you're stealing intellectual property when you’re doing economic espionage. It’s a bedrock proposition of American economics and political strategy around the world to defend the legal regime that protects intellectual property. So we don’t do that kind of espionage. Political-military stuff we're real good at.

Gardner: Wouldn’t our defense rise to the occasion? Why hasn't it?

Brenner: The answer has a lot to do with the nature of the Internet and its history. The Internet, as some of your listeners will know, was developed starting in the late '60s by the predecessor of the Defense Advanced Research Projects Agency (DARPA), a brilliant operation which produced a lot of cool science over the years.


It was developed for a very limited purpose, to allow the collaboration of geographically dispersed scientists who worked under contract in various universities with the Defense Department's own scientists. It was bringing dispersed brainpower to bear.

It was a brilliant idea, and the people who invented this, if you talk to them today, lament the fact that they didn't build a security layer into it. They thought about it. But it wasn't going to be used for anything else but this limited purpose in a trusted environment, so why go to the expense and aggravation of building a lot of security into it?

Until 1992, it was against the law to use the Internet for commercial purposes. Dana, this is just amazing to realize. That’s 20 years ago, a twinkling of an eye in the history of a country’s commerce. That means that 20 years ago, nobody was doing anything commercial on the Internet. Ten years ago, what were you doing on the Internet, Dana? Buying a book for the first time or something like that? That’s what I was doing, and a newspaper.

In the intervening decade, we’ve turned this sort of Swiss-cheese, cool network, which has brought us dramatic productivity and pleasure, into the backbone of virtually everything we do.

International finance, personal finance, command and control of the military, manufacturing controls, the controls in our critical infrastructure, all of our communications -- virtually all of our activities are either on the Internet or exposed to the Internet. And it’s the same Internet that was Swiss cheese 20 years ago, and it's Swiss cheese now. It’s easy to spoof identities on it.

So this gives a natural and profound advantage to attack on this network over defense. That’s why we’re in the predicament we're in.

Both directions


Gardner: Let’s also look at this notion of supply chain, because corporations aren’t just islands unto themselves. A business is really a compendium of other businesses, products, services, best practices, methodologies, and intellectual property that come together to create a value add of some kind. It's not just attacking the end point, where that value is extended into the market. It’s perhaps attacking anywhere along that value chain.

What are the implications for this notion of the ecosystem vulnerability versus the enterprise vulnerability?

Brenner: Well, the supply chain problem really is rather daunting for many businesses, because supply chains are global now, and finished products have a tremendous number of elements. For example, this software -- where was it written? Maybe it was written in Russia, or maybe somewhere in Ohio or in Nevada, but by whom? We don’t know.

There are two fundamental different issues for supply chain, depending on the company. One is counterfeiting. That’s a bad problem. Somebody is trying to substitute shoddy goods under your name or the name of somebody that you thought you could trust. That degrades performance and presents real serious liability problems as a result.


The other problem is the intentional hooking, or compromising, of software or chips to do things that they're not meant to do, such as allow backdoors and so on in systems, so that they can be attacked later. That’s a big problem for military and for the intelligence services all around the world.

The reason we have the problem is that nobody knows how to vet a computer chip or software to see that it won't do these squirrelly things. We can test that stuff to make sure it will do what it's supposed to do, but nobody knows how to test the computer chip or two million lines of software reliably to be sure that it won’t also do certain things we don't want it to do.

You can put it in a sandbox or a virtual environment and you can test it for a lot of things, but you can't test it for everything. It’s just impossible. That, in hardware and software, is the strategic supply chain problem now. That's why we have it.

If you have a worldwide supply chain, you have to have a worldwide supply chain management system. This is hard, and it means getting very specific. It includes not only managing a production process, but also the shipment process. A lot of squirrelly things happen on loading docks, and you have to have a way -- not to bring perfect security to that, that's impossible -- but to make it much harder to attack your supply chain.

Notion of cost

Gardner: So many organizations today, given the economy and the lagging growth, have looked to lowest cost procedures, processes, suppliers, materials, and aren't factoring in the risk and the associated cost around these security issues. Do people need to reevaluate cost in the supply chain by factoring in what the true risks are that we’re discussing?

Brenner: Yes, but of course, when the CEO and the CFO get together and start to figure this stuff out, they look at the return on investment (ROI) of additional security. It's very hard to be quantitatively persuasive about that. That's one reason why you may see some kinds of production coming back into the United States. How one evaluates that risk depends on the business you're in and how much risk you can tolerate.

This is a problem not just for really sensitive hardware and software, special kinds of operations, or sensitive activities, but also for garden-variety things.

Gardner: We’ve seen other aspects of commerce in which we can't lock down the process. We can’t know all the information, but what we can do is offer deterrence, perhaps in the form of legal recourse, if something goes wrong, if in fact, decisions were made that countered the contracts or were against certain laws or trade practices.

Brenner: For a couple of years now, I’ve struggled with the question why it is that liability hasn’t played a bigger role in bringing more cyber security to our environment, and there are a number of reasons.

We've created liability for the loss of personal information, so you can quantify that risk. You have a statute that says there's a minimum damage of $500 or $1,000 per person whose identifiable information you lose. You add up the number of files in the breach and how much the lawyers and the forensic guys cost and you come up with a calculation of what these things cost.
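The arithmetic Brenner describes can be sketched roughly as follows. The figures here are illustrative assumptions only -- statutory damages, fees, and record counts vary by jurisdiction and incident:

```python
# Hypothetical sketch of the breach-cost calculation: a statutory minimum
# per lost record, plus fixed legal and forensic response costs.
# All dollar figures are made-up illustrations, not real statutory values.

def estimate_breach_cost(records_lost: int,
                         per_record_damage: float = 500.0,
                         legal_fees: float = 250_000.0,
                         forensics_fees: float = 150_000.0) -> float:
    """Per-record statutory floor plus fixed incident-response costs."""
    return records_lost * per_record_damage + legal_fees + forensics_fees

# A breach exposing 10,000 records at a $500 statutory floor:
print(f"${estimate_breach_cost(10_000):,.0f}")  # $5,400,000
```

The point of the sketch is that personal-data loss is quantifiable in this way, which is exactly what makes it amenable to liability, unlike the intellectual-property losses discussed next.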

But when it comes to the loss of intellectual property, to a company that depends on that intellectual property, you have a business risk, not a legal risk. You don’t have much of a legal risk at this point.

You may have a shareholder suit issue, but there hasn’t been an awful lot of that kind of litigation so far. So I don't know. I'm not sure that’s quite the question you were asking me, Dana.

Gardner: My follow on to that was going to be where would you go to sue across borders anyway? Is there an über-regulatory or legal structure across borders to target things like supply chain, counterfeit, cyber espionage, or mistreatment of business practice?

Depends on the borders


Brenner: It depends on the borders you're talking about. The Europeans have a highly developed legal and liability system. You can bring actions in European courts. So it depends what borders you mean.

If you’re talking about the border of Russia, you have very different legal issues. China has different legal issues, different from Russia, as well from Iran. There are an increasing number of cases where actions are being brought in China successfully for breaches of intellectual property rights. But you wouldn't say that was the case in Nigeria. You wouldn't say that was the case in a number of other countries where we’ve had a lot of cybercrime originating from.

So there's no one solution here. You have to think in terms of all kinds of layered defenses. There are legal actions you can take sometimes, but the fundamental problem we’re dealing with is this inherently porous Swiss-cheesy system. In the long run, we're going to have to begin thinking about the gradual reengineering of the way the Internet works, or else this basic dynamic, in which lawbreakers have advantage over law-abiding people, is not going to go away.

Think about what’s happened in cyber defenses over the last 10 years and how little they've evolved -- even 20 years for that matter. They almost all require us to know the attack mode or the sequence of code in order to catch it. And we get better at that, but that’s a leapfrog business. That’s fundamentally the way we do it.

Whether we do it at the perimeter, inside, or even outside before the attack gets to the perimeter, that’s what we’re looking for -- stuff we've already seen. That’s a very poor strategy for doing security, but that's where we are. It hasn’t changed much in quite a long time and it's probably not going to.
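Brenner's point -- that defenses catch only "stuff we've already seen" -- is essentially a description of signature-based detection. A minimal sketch of that strategy (the signatures and payloads here are invented for illustration):

```python
# Minimal illustration of signature-based detection: traffic is flagged only
# if it contains a byte pattern already in the known-signature database.
# Anything genuinely novel sails through, which is why this is the "leapfrog
# business" Brenner describes. Signatures and payloads are hypothetical.

KNOWN_SIGNATURES = [b"\x4d\x5a\x90\x00evil", b"DROP TABLE users"]

def matches_known_attack(payload: bytes) -> bool:
    """True only if the payload contains a previously catalogued pattern."""
    return any(sig in payload for sig in KNOWN_SIGNATURES)

print(matches_known_attack(b"GET /?q=DROP TABLE users"))  # True
print(matches_known_attack(b"entirely novel exploit"))    # False
```

A never-before-seen attack returns `False` by construction, no matter where the check runs -- at the perimeter, inside, or outside.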

Gardner: Why is that the case? Is this not a perfect opportunity for a business-government partnership to come together and re-architect the Internet at least for certain types of business activities, permit a two-tier approach, and add different levels of security into that? Why hasn’t it gone anywhere?

Brenner: What I think you’re saying is different tiers or segments. We’re talking about the Balkanization of the Internet. I think that's going to happen as more companies demand a higher level of protection, but this again is a cost-benefit analysis. You’re going to see even more Balkanization of the Internet as you see countries like Russia and China, with some success, imposing more controls over what can be said and done on the Internet. That’s not going to be acceptable to us.

Gardner: We’ve seen a lot with cloud computing and more businesses starting to go to third-party cloud providers for their applications, services, data storage, even integration to other business services and so forth.

More secure

If there's a limited number, or at least a finite number, of cloud providers and they can institute the proper security and take advantage of certain networks within networks, then wouldn’t that hypothetically make a cloud approach more secure and more managed than every-man-for-himself, which is what we have now in enterprises and small to medium-sized businesses (SMBs)?

Brenner: I think the short answer is, yes. The SMBs will achieve greater security by basically contracting it out to what are called cloud providers. That’s because managing the patching of vulnerabilities, encryption, and other aspects of security is beyond what most small businesses and many medium-sized businesses can do, are willing to do, or can do cost-effectively.

For big businesses in the cloud, it just depends on how good the big businesses’ own management of IT is as to whether it’s an improvement or not. But there are some problems with the cloud.

People talk about security, but there are different aspects of it. You and I have been talking just now about security meaning the ability to prevent somebody from stealing or corrupting your information. But availability is another aspect of security. By definition, putting everything in one remote place reduces robustness, because if you lose that connection, you lose everything.

Consequently, it seems to me that backup issues are really critical for people who are going to the cloud. Are you going to rely on your cloud provider to provide all of your backup? Are you going to go to a second cloud provider? Are you going to keep some information copied in-house?

What would happen if your information is good, but you can’t get to it? That means you can’t get to anything anymore. So that's another aspect of security people need to think through.

Gardner: How do you know you’re doing the right thing? How do you know that you're protecting? How do you know that you've gone far enough to ameliorate the risk?

Brenner: This is really hard. If somebody steals your car tonight, Dana, you go out to the curb or the garage in the morning, and you know it's not there. You know it’s been stolen.

When somebody steals your algorithms, your formulas, or your secret processes, you've still got them. You don’t know they’re gone, until three or four years later, when somebody in Central China or Siberia is opening a factory and selling stuff into your market that you thought you were going to be selling -- and that’s your stuff. Then maybe you go back and realize, "Oh, that incident three or four years ago, maybe that's when that happened, maybe that’s when I lost it."

What's going out

So you don’t even know necessarily when things have been stolen. Most companies don’t do a good job. They’re so busy trying to find out what’s coming into their network, they're not looking at what's going out.
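Watching what's going out can be as simple as baselining outbound traffic per host and flagging large deviations. A hedged sketch of that idea -- the thresholds and data are illustrative assumptions, not a production exfiltration detector:

```python
# Sketch of outbound (egress) monitoring: flag a host whose outbound byte
# count far exceeds its historical baseline. All numbers are hypothetical.

from statistics import mean, stdev

def flag_unusual_egress(history: list[int], today: int,
                        z_threshold: float = 3.0) -> bool:
    """True if today's outbound volume is more than z_threshold
    standard deviations above the historical mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > mu
    return (today - mu) / sigma > z_threshold

# A host that normally sends about 1 GB/day suddenly sends 50 GB:
baseline = [1_000, 1_100, 950, 1_050, 990]  # MB per day
print(flag_unusual_egress(baseline, 50_000))  # True
```

Even a crude check like this inverts the usual posture: instead of matching inbound traffic against known attack signatures, it notices the theft on its way out.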

That's one reason the stuff is hard to measure. Another is that ROI is very tough. On the other hand, there are lots of things where business people have to make important judgments in the face of risks and opportunities they can't quantify, but we do it.

We’re right to want data whenever we can get it, because data generally means we can make better decisions. But we make decisions about investment in R&D all the time without knowing what the ROI is going to be and we certainly don't know what the return on a particular R&D expenditure is going to be. But we make that, because people are convinced that if they don't make it, they’ll fall behind and they'll be selling yesterday’s products tomorrow.

Why is it that we have a bias toward that kind of risk, when it comes to opportunity, but not when it comes to defense? I think we need to be candid about our own biases in that regard, but I don't have a satisfactory answer to your question, and nobody else does either. This is one where we can't quantify that answer.

Gardner: It sounds as if people need to have a healthy dose of paranoia to tide them over across these areas. Is that a fair assessment?

Brenner: Well, let’s say skepticism. People need to understand, without actually being paranoid, that life is not always what it seems. There are people who are trying to steal things from us all the time, and we need to protect ourselves.

In many companies, you don't see a willingness to do that, but that varies a great deal from company to company. Things are not always what they seem. That is not how we Americans approach life. We are trusting folks, which is why this is a great country to do business in and live in. But we're having our pockets picked and it's time we understood that.

Gardner: And, as we pointed out earlier, this picking of pockets is not just on our block, but could be any of our suppliers, partners, or other players in our ecosystem. If their pockets get picked, it ends up being our problem too.

Brenner: Yeah. I described this risk at great length in my book, “America the Vulnerable,” and in my practice here at Cooley, I deal with it every day. I find myself, Dana, giving briefings to businesspeople that 5, 10, or 20 years ago, you wouldn’t have given to anybody who wasn't a diplomat or a military person going outside the country. Now this kind of cyber pilferage is an aspect of daily commercial life, I'm sorry to say.
Register for The Open Group Conference
July 16-18 in Washington, D.C.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: The Open Group.