Thursday, June 7, 2012

Cloud Cruiser announces availability of Cloud Cost Intelligence solution for HP CloudSystem at HP Discover 2012 Conference

Cloud Cruiser announced at HP Discover in Las Vegas this week the general release of two new cloud cost intelligence solutions for HP CloudSystem.

The new software products integrate Cloud Cruiser’s cost analytics platform with HP CloudSystem Matrix, CloudSystem Enterprise, and Cloud Service Automation to provide cost transparency, chargeback, and business intelligence (BI) analytics for provisioned resources.

The integration between the cost analytics platform and Cloud Service Automation versions 2.01 and 3.0, and CloudSystem Matrix versions 6.3, 7.0, and 7.1, delivers cost intelligence to customers based on granular, enterprise-wide resource usage and spending. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

By leveraging a centralized repository of all enterprise IT spending, customers can perform cost analysis, implement chargeback, set budgets and alerts, generate invoices and optimize their costs. Both Cloud Cruiser, an HP AllianceONE partner, and HP are conducting live demonstrations of the cost analytics platform this week at the HP Discover 2012 Conference in the Cloud Cruiser booth and the HP Cloud Zone.

The Cost Intelligence Platform is available for purchase directly from the company or through HP software partners Seamless Technologies and Pepperweed Consulting. Product information and pricing are available at www.cloudcruiser.com.


Wednesday, June 6, 2012

Data explosion and big data demand new strategies for data management, backup and recovery, say experts

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: Quest Software.


Businesses clearly need a better approach to their data recovery capabilities -- across both their physical and virtualized environments. The current landscape for data management, backup, and disaster recovery (DR) too often ignores the transition from physical to virtualized environments, and sidesteps the heightened real-time role that data now plays in the enterprise.

What's more, major trends like virtualization, big data, and calls for comprehensive and automated data management are also driving this call for change.

What's needed are next-generation, integrated, and simplified approaches to fast backup and recovery that span all essential corporate data. The solution therefore means bridging legacy and new data, scaling to handle big data, implementing automation and governance, and integrating the functions of backup protection and DR.

To share insights into why data recovery needs a new approach and how that can be accomplished, the next BriefingsDirect discussion joins two experts, John Maxwell, Vice President of Product Management for Data Protection at Quest Software, and Jerome Wendt, President and Lead Analyst of DCIG, an independent storage analyst and consulting firm. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Quest Software is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:

Gardner: Is data really a different thing than, say, five years ago in terms of how companies view it and value it?

Wendt: Absolutely. There's no doubt that companies are viewing it much more holistically. The focus used to be just on data in structured databases, or even in semi-structured formats such as email. Clearly, in the last few years, we've seen a huge change, where unstructured data now is the fastest growing part of most enterprises and where even a lot of their intellectual property is stored. So I think there is a huge push to protect and mine that data.

But we're also just seeing more of a push to get to edge devices. We talk a lot about PCs and laptops, and there is more of a push to protect data in that area, but all you have to do is look around and see the growth.

When you go to any tech conference, you see iPads everywhere, and people are storing more data in the cloud. That's going to have an impact on how people and organizations manage their data and what they do with it going forward.

Gardner: Now, for more and more companies, data is the business, or at least the analytics that they derive from it.

Mission critical

Maxwell: It’s funny that you mention that, because I've been in the storage business for over 15 years. I remember just 10 years ago, when studies would ask people what percentage of their data was mission critical, it was maybe around 10 percent. That aligns with what you're talking about, the shift and the importance of data.

Recent surveys from multiple analyst groups have now shown that people categorize their mission-critical data at 50 percent. That's pretty profound, in that a company is saying half the data that we have, we can't live without, and if we did lose it, we need it back in less than an hour, or maybe in minutes or seconds.

Gardner: So how is the shift and the change in infrastructure impacting this simultaneous need for access and criticality?

Maxwell: Well, the biggest change from an infrastructure standpoint has been the impact of virtualization. This year, well over 50 percent of all the server images in the world are virtualized images, which is just phenomenal.

Quest has really been in the forefront of this shift in infrastructure. We have been, for example, backing up virtual machines (VMs) for seven years with our Quest vRanger product. We've seen that evolve from when VMs or virtual infrastructure were used more for test and development. Today, I've seen studies that show that the shops that are virtualized are running SQL Server, Microsoft Exchange, very mission-critical apps.

We have some customers at Quest that are 100 percent virtualized. These are large organizations, not just some mom and pop company. That shift to virtualization has really made companies assess how they manage it, what tools they use, and their approaches. Virtualization has a large impact on storage and how you back up, protect, and restore data.

Once you implement and have the proper tools in place, your virtual life is going to be a lot easier than your physical one from an IT infrastructure perspective. A lot of people initially moved to virtualization as a cost savings, because they had under-utilization of hardware. But one of the benefits of virtualization is the freedom, the dynamics. You can create a new VM in seconds. But then, of course, that creates things like VM sprawl, the amount of data continues to grow, and the like.

At Quest we've adapted and exploited a lot of the features that exist in virtual environments, but don't exist in physical environments. It’s actually easier to protect and recover virtual environments than it is physical, if you have tools that are exploiting the APIs and the infrastructure that exists in that virtual environment.

Significant benefits

Wendt: We talk a lot these days about having different silos of data. One application creates data that stays over here. Then, it's backed up separately. Then, another application or another group creates data back over here.

Virtualization not only means consolidation and cost savings, but it also facilitates a more holistic view into the environment and how data is managed. Organizations are finally able to get their arms around the data that they have.
Before, it was so distributed that they didn't really have a good sense of where it resided or how to even make sense of it. With virtualization, there are initial cost benefits that help bring it all together, but once it's all together, they're able to go to the next stage, and it becomes the business enabler at that point.

Gardner: The key now is to be able to manage, automate, and bring the comprehensive control and governance to this equation, not just the virtualized workloads, but also of course the data that they're creating and bringing back into business processes.


How do we move from sprawl to control and make this flip from being a complexity issue to a virtuous adoption and benefits issue?

Maxwell: Over the years, people had very manual processes. For example, when you brought a new application online or added hardware, a server, that type of thing, you asked, "Oops, did we back it up? Are we backing that up?"

One thing that’s interesting in a virtual environment is that the backup software we have at Quest will automatically see when a new VM is created and start backing it up. So it doesn't matter if you have 20 or 200 or 2,000 VMs. We're going to make sure they're protected.

Where it really gets interesting is that you can protect the data a lot smarter than you can in a physical environment. I'll give you an example.

In a VMware environment, there are services that we can use to do a snapshot backup of a VM. In essence, it’s an immediate backup of all the data associated with that machine or those machines. It could be on any generic kind of hardware. You don’t need to have proprietary hardware or more expensive software features of high-end disk arrays. That is a feature that we can exploit built within the hypervisor itself.
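As a rough illustration of the hypervisor-level snapshot Maxwell describes -- a sketch of the general technique, not Quest's implementation -- the vSphere API can be asked to quiesce and snapshot a running VM. This example assumes the pyVmomi Python bindings; the host, credentials, and VM name are placeholders.

```python
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to vCenter (placeholder host and credentials).
si = SmartConnect(host="vcenter.example.com", user="backupsvc", pwd="secret")
content = si.RetrieveContent()

# Find the VM to protect by walking the inventory.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "exchange-01")
view.Destroy()

# Ask the hypervisor for a quiesced snapshot; a backup tool would then read
# the frozen virtual disks and remove the snapshot once the copy completes.
vm.CreateSnapshot_Task(
    name="backup-snapshot",
    description="image backup source",
    memory=False,   # skip memory state for a faster, disk-only snapshot
    quiesce=True)   # flush guest I/O via VMware Tools for consistency

Disconnect(si)
```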

Image backup


Even the way that we move data is much more efficient, because we have a process that we pioneered at Quest called "backup once, restore many," where we create what's called an image backup. From that image backup I can restore an entire system, an individual file, or an application. But I've done that from that one pass, that one very effective snapshot-based backup.

If you look at physical environments, there's the concept of doing physical machine backups and file-level backups, specific application backups, and for some systems, you even have to employ hardware-based snapshots or actually bring the applications down.

So from that perspective, we've gotten much more sophisticated in virtual environments. Again, we're moving data by not impacting the applications themselves and not impacting the VMs. The way we move data is very fast and is very effective.

Wendt: One of the things we are really seeing is just a lot more intelligence going into this backup software. They're moving well beyond just “doing backups” anymore. There's much more awareness of what data is included in these data repositories and how they're searched.


And also with more integration with platforms like VMware vSphere Operations, administrators can centrally manage backups, monitor backup jobs, and do recoveries. One person can do so much more than they could even a few years ago.

And really, the expectations of organizations are evolving: they don't necessarily want separate backup admins and system admins anymore. They want one team that manages their virtual infrastructure. That all rolls up to your point that it makes it easier to govern, manage, and execute on corporate objectives.

Gardner: Is this really a case, John Maxwell, where we are getting more and paying less?

Maxwell: Absolutely. Just as the cost per gigabyte has gone down over the past decade, the effectiveness of the software and what it can do is way beyond what we had 10 years ago.

Simplified process

Today, in a virtual environment, we can provide a solution that simplifies the process, where one person can ensure that hundreds of VMs are protected. They can literally right-click and restore a VM, a file, a directory, or an application.

One of the focuses we have had at Quest, as I alluded to earlier, is that there are a lot of mission-critical apps running on these machines. Jerome talked about email. A lot of people consider email one of their most mission-critical applications. And the person responsible for protecting the environment that Microsoft Exchange is running on may not be an Exchange administrator, but maybe they're tasked with being able to recover Exchange.

That’s why we've developed technologies that allow you to go out there and, from that one image backup, restore an email conversation or an email attachment from someone’s mailbox. That person doesn’t have to be a guru with Exchange. Our job is to figure out, behind the scenes, how to do this and make it available via a couple of mouse clicks.

Wendt: As John was speaking, I was going to comment. I spoke to a Quest customer just a few weeks ago. He clearly had some very specific technical skills, but he's responsible for a lot of things, a lot of different functions -- server admin, storage admin, backup admin.


I think a lot of individuals can relate to this guy. I know I certainly did, because that was my role for many years, when I was an administrator in the police department. You have to try to juggle everything, while you're trying to do your job, with backup just being one of those tasks.

In his particular case, he was called upon to do a recovery, and, to John’s point, it was an Exchange recovery. He never had any special training in Exchange recovery, but it just happened that he had Quest Software in place. He was able to use its FastRecover product to recover his Microsoft Exchange Server and had it back up and going in a few hours.

What was really amazing, in this particular case, is that he was traveling at the time it happened. So he had to talk his manager through the process and was able to get it up and going. Once he had the system up, he was able to log on and get things going again fairly quickly.

That just illustrates how much the world has changed and how much backup software and these products have evolved to the point where you need to understand your environment, probably more than you need to understand the product, and just find the right product for your environment. In this case, this individual clearly accomplished that.

Gardner: How do organizations approach being in a hybrid model, between physical and virtual, while recognizing that different apps have different criticality for their data, and that that might change?

Maxwell: Well, there are two points. One, we can't have a bunch of niche tools, one for virtual, one for physical, and the like. That's why, with our vRanger product, which has been the market leader in virtual data protection for the past seven years, we're coming out with physical support in that product in the fall of 2012. Those customers are saying, "I want one product that handles that non-virtualized data."

The second part gets down to what percentage of your data is mission-critical and how complex it is, meaning is it email, or a database, or just a flat file, and then asking if these different types of data have specific service-level agreements (SLAs), and if you have products that can deliver on those SLAs.

That's why at Quest, we're really promoting a holistic approach to data protection that spans replication, continuous data protection, and more traditional backup, but backup mainly based on snapshots.

Then, that can map to the service level, to your business requirements. I just saw some data from an industry analyst that showed the replication software market is basically the same size now as the backup software market. That shows the desire for people to have that kind of real-time failover for some applications, and you get that with replication.


When it comes to the example that Jerome gave with that customer, the Quest product that we're using is NetVault FastRecover, which is a continuous data protection product. It backs up everything in real-time. So you can go back to any point in time.

It’s almost like a time machine, when it comes to putting back that mailbox, the SQL database, or Oracle database. Yet, it's masking a lot of the complexity. So the person restoring it may not be a DBA. They're going to be that jack of all trades who's responsible for the storage and maybe backup overall.

Gardner: John, in talking with Quest folks, I've heard them refer to a next-generation platform or approach, or a whole greater than the sum of the parts. How do you define next generation when it comes to data recovery in your view of the world?

New benefits

Maxwell: Well, without hyperbole, for us, our next generation is a new platform that we call NetVault Extended Architecture (XA), and this is a way to provide several benefits to our customers.

One is that with NetVault Extended Architecture we now are delivering a single user experience across products. This spans SMB versus enterprise: for a customer that’s using maybe one of our point solutions for application or database recovery, it provides that consistent look and feel, a consistent approach. We have some customers that use multiple products. So with this, they now have a single pane of glass.

Also, it's important to offer a consistent means for administering and managing the backup and recovery process, because, as we've been saying, why should a person have to have multiple skill sets? If you have one view, one console into data protection, that’s going to make your life a lot easier than having to learn a bunch of other types of solutions.

That’s the immediate benefit that I think people see. What NetVault Extended Architecture encompasses under the covers, though, is a really different approach in the industry, which is modularizing a lot of the components of backup and recovery and making them plug and play.

Let me give you an example. With the increase in virtualization a lot of people just equate virtualization with VMware. Well, we've got Hyper-V. We have initiatives from Red Hat. We have Xen, Oracle, and others. Jerome, I'm kind of curious about your views, but just as we saw in the 90s and in the 00s, with people having multiple platforms, whether it's Windows and Linux or Windows and Linux and, as you said, AIX, I believe we are going to start seeing multiple hypervisors.


So one of the approaches that NetVault Extended Architecture is going to bring us is a capability to offer a consistent approach to multiple hypervisors, meaning it could be a combination of VMware and Microsoft Hyper-V and maybe even KVM from Red Hat.

But, again, the administrator, the person who is managing the backup and recovery, doesn’t have to know any one of those platforms. That’s all hidden from them. In fact, if they want to restore data from one of those hypervisors, say restore a VMware VMDK, which is their volume format in VMware-speak, into what's called a VHD in Hyper-V, they could do that.
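As a rough illustration of the kind of disk-format conversion Maxwell is describing -- not how NetVault Extended Architecture actually performs it -- a VMware VMDK can be rewritten as a Hyper-V VHD with the open-source qemu-img utility ("vpc" is qemu-img's name for the VHD format). The file names below are placeholders.

```python
import subprocess

# Convert a VMware virtual disk (VMDK) into a Hyper-V virtual disk (VHD).
subprocess.run(
    ["qemu-img", "convert",
     "-f", "vmdk",              # source format
     "-O", "vpc",               # target format: legacy VHD, as used by Hyper-V
     "exchange-01.vmdk", "exchange-01.vhd"],
    check=True)
```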

That, to me, is really exciting, because this is exploiting these new platforms and environments and providing tools that simplify the process. But that’s going to be one of the many benefits of our new NetVault Extended Architecture next generation, where we can provide that singular experience for our customer base, have a faster time to market with new solutions, and be able to deliver in a modular approach.

Customers can choose what they need, whether they're an SMB customer, or one of the largest customers that we have with hundreds of petabytes or exabytes of data.

Wendt: DCIG has a lot of conversations with managed-service providers, and you'd be surprised, but there are actually very few that are VMware shops. I find the vast majority are actually either Microsoft Hyper-V or using Red Hat Linux as their platform, because they're looking for a cost-effective way to deliver virtualization in their environments.

We've seen this huge growth in replication, and people want to implement disaster recovery plans or business continuity planning. I think this ability to recover across different hypervisors is going to become absolutely critical, maybe not today or tomorrow, but I would say in the next few years. People are going to say, "Okay, now that we've got our environment virtualized, we can recover locally, but how about recovering into the cloud or with a cloud service provider? What options do we have there?"

More choice

If they're using VMware and their provider isn’t, they're almost forced to use VMware or something like it, whereas your platform gives them much more choice among managed service providers that are using platforms other than VMware. It sounds like Quest will really give them the ability to back up VMware hypervisors and then potentially recover into Red Hat or Microsoft Hyper-V at MSPs. So that could be a really exciting development for Quest in that area.

Gardner: Jerome, do you have any use cases or examples that you're familiar with that illustrate this concept of next-generation and lifecycle approach to data recovery that we have been discussing?

Wendt: Well, it’s not an example, just a general trend I am seeing in products, because most of DCIG’s focus is just on analyzing the products themselves, comparing them, and identifying general, broader trends within those products.


There are two things we're seeing. One, we're struggling to keep calling backup software “backup software,” because it does so much more than that. You mentioned earlier that there's so much more intelligence in these products. We call it backup software, because that’s the context in which everyone understands it, but I think going forward, the industry is probably going to have to find a better way to refer to these products. Quest is a whole lot more than just running a backup.

And then second, people, as they view backup and how they manage their infrastructure, really have to go from this reactive, "Okay, today I am going to have to troubleshoot 15 backup jobs that failed overnight." Those days are over. And if they're not over, you need to be looking for new products that will get you over that hump, because you should no longer be troubleshooting failed backup jobs.

You should really be looking more toward how you can make sure your entire environment is protected and recoverable, and really moving to the next phase of doing disaster recoveries and business continuity planning. The products are there. They are mature, and people should be moving down that path.

Crystal ball

Gardner: John, when we look into the crystal ball, even not that far out, it just seems that in order to manage what you need to do as a business, getting good control over your data, being able to ensure that it’s going to be available anytime, anywhere, regardless of the circumstances is, again, not a luxury or a nice-to-have. It’s really just going to support the viability of the business.

Maxwell: Absolutely. And what’s going to make it even more complex is going to be the cloud, because what's your control, as a business, over data that is hosted some place else?

I know that at Quest we use seven SaaS-based applications from various vendors, but what’s our guarantee that our data is protected there? I can tell you that a lot of these SaaS-based companies or hosting companies may offer an environment that says, "We're always up," or "We have a higher level of availability," but most recovery needs stem from logical corruption of data.

As I said, with some of these smaller vendors, you wonder what would happen if they went out of business, because I have heard stories of small service providers closing their doors, and you say, "But my data is there."

So the cloud is really exciting, in that we're looking at how we're going to protect assets that may be off-premise to your environment and how we can ensure that you can recover that data, in case that provider is not available.

Then there's something that Jerome touched upon, which is that the cloud is going to offer so many opportunities. The one that I am most excited about is using the cloud for failover. That's really getting beyond recovery into business continuity.


And something that has only been afforded by the largest enterprises, Global 1000-type customers, is the ability to have a standby recovery center, a SunGard or someone like that, which is very costly and not within reach of most customers. But with virtualization and with the cloud, there's a concept that I think we're going to see become very mainstream over the next five years, which is failover recovery to the cloud. That's something that’s going to be within reach of even SMB customers, and that’s really more of a business continuity message.

So now we're stepping up even more. We're now saying, "Not only can we recover your data within seconds, but we can get your business back up and running, from an IT perspective, faster than you probably ever presumed that you could."

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: Quest Software.

Tuesday, June 5, 2012

Corporate data, supply chains remain vulnerable to cyber crime attacks, says Open Group conference speaker

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: The Open Group.

This BriefingsDirect thought leadership interview comes in conjunction with The Open Group Conference in Washington, D.C., beginning July 16. The conference will focus on how security impacts the enterprise architecture, enterprise transformation, and global supply chain activities in organizations, both large and small.

We're now joined on the security front by one of the main speakers at the conference, Joel Brenner, the author of "America the Vulnerable: Inside the New Threat Matrix of Digital Espionage, Crime, and Warfare."

Joel is a former Senior Counsel at the National Security Agency (NSA), where he advised on legal and policy issues relating to network security. Mr. Brenner currently practices law in Washington at Cooley LLP, specializing in cyber security. Registration remains open for The Open Group Conference in Washington, DC beginning July 16.

Previously, he served as the National Counterintelligence Executive in the Office of the Director of National Intelligence, and as the NSA’s Inspector General. He is a graduate of University of Wisconsin–Madison, the London School of Economics, and Harvard Law School. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:

Gardner: Your book came out last September, and it affirmed this notion that the United States, or at least open Western cultures and societies, are particularly vulnerable to being infiltrated, if you will, by cybercrime, espionage, and dirty corporate tricks.

Why are we particularly vulnerable, when we should be most adept at using cyber activities to our advantage?

Brenner: Let’s make a distinction here between the political-military espionage that's gone on since pre-biblical times and the economic espionage that’s going on now and, in many cases, has nothing at all to do with military, defense, or political issues.

The other stuff has been going on forever, but what we've seen in the last 15 or so years is a relentless espionage attack on private companies for reasons having nothing to do with political-military affairs or defense.

So the countries that are adept at cyber, but whose economies are relatively undeveloped compared to ours, are at a big advantage, because they're not very lucrative targets for this kind of thing, and we are. Russia, for example, is paradoxical. While it has one of the most educated populations in the world and is deeply cultured, it has never been able to produce a commercially viable computer chip.

Not entrepreneurial


We’re not going to Russia to steal advanced technology. We’re not going to China to steal advanced technology. They're good at engineering and they’re good at production, but so far, they have not been good at making themselves into an entrepreneurial culture.

That’s one just very cynical reason why we don't do economic espionage against the people who are mainly attacking us, which are China, Russia, and Iran. I say attack in the espionage sense.

The other reason is that you're stealing intellectual property when you’re doing economic espionage. It’s a bedrock proposition of American economics and political strategy around the world to defend the legal regime that protects intellectual property. So we don’t do that kind of espionage. Political-military stuff we're real good at.

Gardner: Wouldn’t our defense rise to the occasion? Why hasn't it?

Brenner: The answer has a lot to do with the nature of the Internet and its history. The Internet, as some of your listeners will know, was developed starting in the late '60s by the predecessor of the Defense Advanced Research Projects Agency (DARPA), a brilliant operation which produced a lot of cool science over the years.


It was developed for a very limited purpose, to allow the collaboration of geographically dispersed scientists who worked under contract in various universities with the Defense Department's own scientists. It was bringing dispersed brainpower to bear.

It was a brilliant idea, and the people who invented this, if you talk to them today, lament the fact that they didn't build a security layer into it. They thought about it. But it wasn't going to be used for anything else but this limited purpose in a trusted environment, so why go to the expense and aggravation of building a lot of security into it?

Until 1992, it was against the law to use the Internet for commercial purposes. Dana, this is just amazing to realize. That’s 20 years ago, a twinkling of an eye in the history of a country’s commerce. That means that 20 years ago, nobody was doing anything commercial on the Internet. Ten years ago, what were you doing on the Internet, Dana? Buying a book for the first time or something like that? That’s what I was doing, and a newspaper.

In the intervening decade, we’ve turned this sort of Swiss-cheese, cool network, which has brought us dramatic productivity and pleasure, into the backbone of virtually everything we do.

International finance, personal finance, command and control of military, manufacturing controls, the controls in our critical infrastructure, all of our communications, virtually all of our activities are either on the Internet or exposed to the Internet. And it’s the same Internet that was Swiss cheese 20 years ago and it's Swiss cheese now. It’s easy to spoof identities on it.

So this gives a natural and profound advantage to attack on this network over defense. That’s why we’re in the predicament we're in.

Both directions


Gardner: Let’s also look at this notion of supply chain, because corporations aren’t just islands unto themselves. A business is really a compendium of other businesses, products, services, best practices, methodologies, and intellectual property that come together to create a value add of some kind. It's not just attacking the end point, where that value is extended into the market. It’s perhaps attacking anywhere along that value chain.

What are the implications for this notion of the ecosystem vulnerability versus the enterprise vulnerability?

Brenner: Well, the supply chain problem really is rather daunting for many businesses, because supply chains are global now, and it means that finished products have a tremendous number of elements from many sources. For example, this software, where was it written? Maybe it was written in Russia -- or maybe somewhere in Ohio or in Nevada, but by whom? We don’t know.

There are two fundamentally different issues for the supply chain, depending on the company. One is counterfeiting. That’s a bad problem. Somebody is trying to substitute shoddy goods under your name or the name of somebody that you thought you could trust. That degrades performance and presents really serious liability problems as a result.


The other problem is the intentional hooking, or compromising, of software or chips to do things that they're not meant to do, such as allow backdoors and so on in systems, so that they can be attacked later. That’s a big problem for military and for the intelligence services all around the world.

The reason we have the problem is that nobody knows how to vet a computer chip or software to see that it won't do these squirrelly things. We can test that stuff to make sure it will do what it's supposed to do, but nobody knows how to test the computer chip or two million lines of software reliably to be sure that it won’t also do certain things we don't want it to do.

You can put it in a sandbox or a virtual environment and you can test it for a lot of things, but you can't test it for everything. It’s just impossible. In both hardware and software, this is the strategic supply chain problem now. That's why we have it.

If you have a worldwide supply chain, you have to have a worldwide supply chain management system. This is hard and it means getting very specific. It includes not only managing a production process, but also the shipment process. A lot of squirrelly things happen on loading docks, and you have to have a way not to bring perfect security to that -- that's impossible -- but to make it much harder to attack your supply chain.

Notion of cost

Gardner: So many organizations today, given the economy and the lagging growth, have looked to lowest cost procedures, processes, suppliers, materials, and aren't factoring in the risk and the associated cost around these security issues. Do people need to reevaluate cost in the supply chain by factoring in what the true risks are that we’re discussing?

Brenner: Yes, but of course, when the CEO and the CFO get together and start to figure this stuff out, they look at the return on investment (ROI) of additional security. It's very hard to be quantitatively persuasive about that. That's one reason why you may see some kinds of production coming back into the United States. How one evaluates that risk depends on the business you're in and how much risk you can tolerate.

This is a problem not just for really sensitive hardware and software, special kinds of operations, or sensitive activities, but also for garden-variety things.


Gardner: We’ve seen other aspects of commerce in which we can't lock down the process. We can’t know all the information, but what we can do is offer deterrence, perhaps in the form of legal recourse, if something goes wrong, if in fact, decisions were made that countered the contracts or were against certain laws or trade practices.

Brenner: For a couple of years now, I’ve struggled with the question why it is that liability hasn’t played a bigger role in bringing more cyber security to our environment, and there are a number of reasons.

We've created liability for the loss of personal information, so you can quantify that risk. You have a statute that says there's a minimum damage of $500 or $1,000 per person whose identifiable information you lose. You add up the number of files in the breach and how much the lawyers and the forensic guys cost and you come up with a calculation of what these things cost.
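As a rough, back-of-the-envelope sketch of the calculation Brenner describes -- with entirely hypothetical numbers -- the exposure from a breach of personal information might be estimated like this.

```python
# Hypothetical inputs: statutory floor per affected person, records exposed,
# and the fixed legal/forensics spend for the incident.
statutory_minimum_per_person = 500
records_exposed = 40_000
legal_and_forensics = 250_000

estimated_exposure = records_exposed * statutory_minimum_per_person + legal_and_forensics
print(f"Estimated breach cost: ${estimated_exposure:,}")  # $20,250,000
```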

But when it comes to just business risk, not legal risk, and the loss of intellectual property to a company that depends on that intellectual property, you have a business risk. You don’t have much of a legal risk at this point.

You may have a shareholder suit issue, but there hasn’t been an awful lot of that kind of litigation so far. So I don't know. I'm not sure that’s quite the question you were asking me, Dana.

Gardner: My follow on to that was going to be where would you go to sue across borders anyway? Is there an über-regulatory or legal structure across borders to target things like supply chain, counterfeit, cyber espionage, or mistreatment of business practice?

Depends on the borders


Brenner: It depends on the borders you're talking about. The Europeans have a highly developed legal and liability system. You can bring actions in European courts. So it depends what borders you mean.

If you’re talking about the border of Russia, you have very different legal issues. China has different legal issues, different from Russia, as well from Iran. There are an increasing number of cases where actions are being brought in China successfully for breaches of intellectual property rights. But you wouldn't say that was the case in Nigeria. You wouldn't say that was the case in a number of other countries where we’ve had a lot of cybercrime originating from.

So there's no one solution here. You have to think in terms of all kinds of layered defenses. There are legal actions you can take sometimes, but the fundamental problem we’re dealing with is this inherently porous Swiss-cheesy system. In the long run, we're going to have to begin thinking about the gradual reengineering of the way the Internet works, or else this basic dynamic, in which lawbreakers have advantage over law-abiding people, is not going to go away.

Think about what’s happened in cyber defenses over the last 10 years and how little they've evolved -- even 20 years for that matter. They almost all require us to know the attack mode or the sequence of code in order to catch it. And we get better at that, but that’s a leapfrog business. That’s fundamentally the way we do it.

Whether we do it at the perimeter, inside, or even outside before the attack gets to the perimeter, that’s what we’re looking for -- stuff we've already seen. That’s a very poor strategy for doing security, but that's where we are. It hasn’t changed much in quite a long time and it's probably not going to.


Gardner: Why is that the case? Is this not a perfect opportunity for a business-government partnership to come together and re-architect the Internet at least for certain types of business activities, permit a two-tier approach, and add different levels of security into that? Why hasn’t it gone anywhere?

Brenner: What I think you’re saying is different tiers or segments. We’re talking about the Balkanization of the Internet. I think that's going to happen as more companies demand a higher level of protection, but this again is a cost-benefit analysis. You’re going to see even more Balkanization of the Internet as you see countries like Russia and China, with some success, imposing more controls over what can be said and done on the Internet. That’s not going to be acceptable to us.

Gardner: We’ve seen a lot with cloud computing and more businesses starting to go to third-party cloud providers for their applications, services, data storage, even integration to other business services and so forth.

More secure

If there's a limited number, or at least a finite number, of cloud providers, and they can institute the proper security and take advantage of certain networks within networks, then wouldn’t that hypothetically make a cloud approach more secure and more managed than every-man-for-himself, which is what we have now in enterprises and small to medium-sized businesses (SMBs)?

Brenner: I think the short answer is, yes. The SMBs will achieve greater security by basically contracting it out to what are called cloud providers. That’s because managing the patching of vulnerabilities, encryption, and other aspects is beyond what most small businesses and many medium-sized businesses can do, are willing to do, or can do cost-effectively.

For big businesses in the cloud, it just depends on how good the big businesses’ own management of IT is as to whether it’s an improvement or not. But there are some problems with the cloud.

People talk about security, but there are different aspects of it. You and I have been talking just now about security meaning the ability to prevent somebody from stealing or corrupting your information. But availability is another aspect of security. By definition, putting everything in one remote place reduces robustness, because if you lose that connection, you lose everything.

Consequently, it seems to me that backup issues are really critical for people who are going to the cloud. Are you going to rely on your cloud provider to provide the backup? Are you going to rely on the cloud provider to provide all of your backup? Are you going to go to a second cloud provider? Are you going to keep some information copied in-house?


What would happen if your information is good, but you can’t get to it? That means you can’t get to anything anymore. So that's another aspect of security people need to think through.

Gardner: How do you know you’re doing the right thing? How do you know that you're protecting? How do you know that you've gone far enough to ameliorate the risk?

Brenner: This is really hard. If somebody steals your car tonight, Dana, you go out to the curb or the garage in the morning, and you know it's not there. You know it’s been stolen.

When somebody steals your algorithms, your formulas, or your secret processes, you've still got them. You don’t know they’re gone, until three or four years later, when somebody in Central China or Siberia is opening a factory and selling stuff into your market that you thought you were going to be selling -- and that’s your stuff. Then maybe you go back and realize, "Oh, that incident three or four years ago, maybe that's when that happened, maybe that’s when I lost it."

What's going out

So you don’t even know necessarily when things have been stolen. Most companies don’t do a good job. They’re so busy trying to find out what’s coming into their network, they're not looking at what's going out.

That's one reason the stuff is hard to measure. Another is that ROI is very tough. On the other hand, there are lots of things where business people have to make important judgments in the face of risks and opportunities they can't quantify, but we do it.

We’re right to want data whenever we can get it, because data generally means we can make better decisions. But we make decisions about investment in R&D all the time without knowing what the ROI is going to be, and we certainly don't know what the return on a particular R&D expenditure is going to be. But we make them, because people are convinced that if they don't, they’ll fall behind and they'll be selling yesterday’s products tomorrow.

Why is it that we have a bias toward that kind of risk, when it comes to opportunity, but not when it comes to defense? I think we need to be candid about our own biases in that regard, but I don't have a satisfactory answer to your question, and nobody else does either. This is one where we can't quantify that answer.

Gardner: It sounds as if people need to have a healthy dose of paranoia to tide them over across these areas. Is that a fair assessment?


Brenner: Well, let’s say skepticism. People need to understand, without actually being paranoid, that life is not always what it seems. There are people who are trying to steal things from us all the time, and we need to protect ourselves.

In many companies, you don't see a willingness to do that, but that varies a great deal from company to company. Things are not always what they seem. That is not how we Americans approach life. We are trusting folks, which is why this is a great country to do business in and live in. But we're having our pockets picked and it's time we understood that.

Gardner: And, as we pointed out earlier, this picking of pockets is not just on our block, but could be any of our suppliers, partners, or other players in our ecosystem. If their pockets get picked, it ends up being our problem too.

Brenner: Yeah, I described this risk in my book, “America the Vulnerable,” at great length and in my practice, here at Cooley, I deal with this every day. I find myself, Dana, giving briefings to businesspeople that 5, 10, or 20 years ago, you wouldn’t have given to anybody who wasn't a diplomat or a military person going outside the country. Now this kind of cyber pilferage is an aspect of daily commercial life, I'm sorry to say.
Register for The Open Group Conference
July 16-18 in Washington, D.C.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: The Open Group.


Tuesday, May 22, 2012

SAP gets huge cloud and extended business process boost with Ariba acquisition

SAP on Tuesday announced its intention to buy Ariba for $4.3 billion, a 19 percent premium on Ariba's market capitalization.

The move comes soon after SAP's SuccessFactors February buy and shows that SAP is quickly and aggressively acquiring its way to a full cloud business services capability. The announcement caps SAP's user conference last week and the cloud and data services news from it, including cloud suite offerings like SAP Business ByDesign and SAP Business One.


Ariba has been growing rapidly through organic and acquisitions expansions, and has a global reach for its procurement, goods/services trading, spend management, supplier discovery and other extended enterprise business processes and services offerings. [Disclosure: Ariba is a sponsor of BriefingsDirect podcasts.]

Ariba has been open to and partnered with all major ERP suppliers -- including SAP, Salesforce, IBM, and Oracle, but not Workday. And Ariba recently announced a partnership with Microsoft Dynamics. It will become a question in the market whether SAP will favor its own ERP technologies and installed base, or continue Ariba's strategy of inclusive and open alliances and partnerships.

I personally think SAP should keep Ariba open to grow the cloud business services market, and treat all IT business services suppliers on equal footing, and therefore best support the most enterprises and suppliers. SAP should keep its ERP products and tactics separate from Ariba, and allow users to adopt a cloud-first approach, regardless of their on-premises or private cloud technologies.

Looks like a plan

It looks at this point like that is the plan. The combined companies plan to consolidate all cloud-related supplier assets of SAP under Ariba. The existing management team will continue to lead Ariba, which will operate as an independent business under the name “Ariba, an SAP company.” The SAP Executive Board intends to nominate Ariba CEO Bob Calderoni to the SAP Global Managing Board.

Clearly, SAP is focused on global cloud growth opportunities, but is wisely defining cloud as a place to do business and extend socially amplified discovery and collaboration efficiencies. Business returns on cloud services may well come more from enabling new business processes across organizational boundaries, than in retrofitting older software as services. SAP will also be able to make more alliances with the next generation of ISVs through an Ariba community approach.

Ariba is describing the combination with SAP as creating "the Amazon.com and Facebook for businesses all in one." That certainly is the potential. SAP is, and this has not always been the case with Walldorf, skating to where the hockey puck is going to be in buying Ariba.


SAP and Ariba can "deliver a truly end-to-end solution that enables companies to achieve a closed-loop from source-to-pay, regardless of whether they deploy in the cloud, on-premise or both," said the companies. The Ariba network should also benefit from SAP’s flagship in-memory platform, SAP HANA, for improved data processing and analytics.

With $444 million in total revenue, Ariba had 38.5 percent annual growth in 2011. Its business network recorded 62 percent organic growth in 2011. SAP’s global customer base of more than 190,000 companies includes the largest buyers and sellers in the world.

The acquisition is expected to close in Q3. Ariba's board unanimously approved the deal.


Wednesday, May 16, 2012

Searching for data scientists as a service

This guest post comes courtesy of Tony Baer's OnStrategies blog. Tony is senior analyst at Ovum.

By Tony Baer

It’s no secret that rocket .. err … data scientists are in short supply. The explosion of data, the corresponding explosion of tools, and the knock-on impacts of Moore’s and Metcalfe’s laws mean that there is more data, more connections, and more technology to process it than ever. At last year’s Hadoop World, there was a feeding frenzy for data scientists, which only barely dwarfed demand for the more technically oriented data architects. In English, that means:

1. Potential MacArthur Grant recipients who have a passion and insight for data, the mathematical and statistical prowess for ginning up the algorithms, and the artistry for painting the picture that all that data leads to. That’s what we mean by data scientists.

2. People who understand the platform side of Big Data, a.k.a., data architect or data engineer.

The data architect side will be the more straightforward nut to crack. Understanding big data platforms (Hadoop, MongoDB, Riak) and emerging Advanced SQL offerings (Exadata, Netezza, Greenplum, Vertica, and a bunch of recent upstarts like Calpont) is a technical skill that can be taught with well-defined courses. The laws of supply and demand will solve this one – just as they did when the dot com bubble created demand for Java programmers back in 1999.

Behind all the noise for Hadoop programmers, there's a similar, but quieter, desperate rush to recruit data scientists. While some data scientists call “data scientist” a buzzword, the need is real.


However, data science will be a tougher number to crack. It’s all about connecting the dots, not as easy as it sounds. The V’s of big data – volume, variety, velocity, and value — require someone who discovers insights from data; traditionally, that role was performed by the data miner. But data miners dealt with better-bounded problems and well-bounded (and known) data sets that made the problem more 2-dimensional.

The variety of Big Data – in form and in sources – introduces an element of the unknown. Deciphering Big Data requires a mix of investigative savvy, communications skills, creativity/artistry, and the ability to think counter-intuitively. And don’t forget it all comes atop a foundation of a solid statistical and machine learning background plus technical knowledge of the tools and programming languages of the trade.

Sometimes it seems like we’re looking for Albert Einstein or somebody smarter.

Nature abhors a vacuum

As nature abhors a vacuum, there’s also a rush to not only define what a data scientist is, but develop programs that could somehow teach it, software packages that to some extent package it, and otherwise throw them into a meat … err, the free market. EMC and other vendors are stepping up to the plate to offer training, not just on platforms, but for data science. Kaggle offers an innovative cloud-based, crowdsourced approach to data science, making available a predictive modeling platform and then staging sponsored 24-hour competitions for moonlighting data scientists to devise the best solutions to particular problems (redolent of the Netflix $1 million prize to devise a smarter algorithm for predicting viewer preferences).

With data science talent scarce, we’d expect that consulting firms would buy up talent that could then be “rented” to multiple clients. Excluding a few offshore firms, few systems integrators (SIs) have yet stepped up to the plate to roll out formal big data practices (the logical place where data scientists would reside), but we expect that to change soon.

Opera Solutions, which has been in the game of predictive analytics consulting since 2004, is taking the next step down the packaging route. Having raised $84 million in Series A funding last year, the company has staffed up to nearly 200 data scientists, making it one of the largest assemblages of genius this side of Google. Opera’s predictive analytics solutions are designed for a variety of platforms, SQL and Hadoop, and today they join the SAP Sapphire announcement stream with a release of their offering on the HANA in-memory database. Andrew Brust provides a good drilldown on the details of this announcement.


From SAP’s standpoint, Opera’s predictive analytics solutions are a logical fit for HANA as they involve the kinds of complex problems (e.g., a computation triggers other computations) that their new in-memory database platform was designed for.

There’s too much value at stake to expect that Opera will remain the only large aggregation of data scientists for hire. But ironically, the barriers to entry will keep the competition narrow and highly concentrated. Of course, with market demand, there will inevitably be a watering down of the definition of data scientists so that more companies can claim they’ve got one… or many.

The laws of supply and demand will kick in for data scientists, but the ramp up of supply won’t be as quick as that for the more platform-oriented data architect or engineer. Of necessity, that supply of data scientists will have to be augmented by software that automates the interpretation of machine learning, but there’s only so far that you can program creativity and counter-intuitive insight into a machine.

This guest post comes courtesy of Tony Baer's OnStrategies blog. Tony is senior analyst at Ovum.


Tuesday, May 15, 2012

MuleSoft suite of tools eases way for SaaS integration in the cloud

MuleSoft this week launched Mule iON SaaS Edition, providing a broad set of new tools and services for swift software-as-a-service (SaaS) integration in the cloud and lowering the barrier to SaaS adoption for SaaS providers and developers.

The Mule iON integration platform as a service (iPaaS) connects across cloud-based applications and also connects SaaS to on-premise applications. MuleSoft's Anypoint technology for on-demand API connectivity eliminates the need for copious custom point-to-point code, said MuleSoft. [Disclosure: MuleSoft is a sponsor of BriefingsDirect podcasts.]

In recent commentary, Ross Mason, founder and CTO of MuleSoft, said, "The world today is moving at lightning speed to SaaS and cloud applications, and the idea of gaining competitive advantage through legacy enterprise applications is no longer relevant."

I agree. Key differentiators now lie less in building applications than in the effective composition of services. Cloud and SaaS providers need to give their clients better means to leverage APIs and craft business processes across both enterprise and multiple SaaS provider boundaries. This rationalization of the cloud services stew is the new integration nut to crack.

The problem is, what type of platform and organization can fulfill the role of cloud services orchestration hub? The role may not fit well for any one SaaS provider, nor for any single enterprise or cadre of enterprises. For the time being, a best-of-breed platform and supporting ecosystem must evolve, and then the market will decide who or what will be the acceptable hub mechanisms.

And the market for cloud integration technologies is clearly heating up. Also this week, FuseSource unveiled at CamelOne in Boston the Fuse ESB Enterprise 7.0 and Fuse MQ Enterprise 7.0 products, moving them to general availability. These platforms enable "Integration Everywhere," says FuseSource, with modular, open source products based on Apache Software Foundation projects. [Disclosure: FuseSource is a sponsor of BriefingsDirect podcasts.]

QuickStart Plan


Integration platform provider MuleSoft also unveiled on Monday a new QuickStart Plan for fast-growth SaaS vendors and systems integrators (SIs) that enables them to build their own revenue-generating integration apps on the Mule iON cloud platform in just a few days. Pricing for Mule iON SaaS Edition is based on a per-month, volume-of-use basis, not on connectivity, encouraging more connections over time.

In other integration news, SAP today said it plans to offer its own cloud-based integration technology, and also plans to enable its ecosystem of partners, including solutions from MuleSoft.

New features available with Mule iON SaaS Edition, which is available now, include:
  • Graphical data mapping and transformation capabilities enable SaaS vendors and SIs to build and deploy integration apps without writing custom code by using the Mule Studio drag-and-drop interface.
  • Cloud Connector ToolKit creates new cloud connectors in Mule Studio for any public or private Web API.
  • Customer self-service portals allow customers to independently manage integrations, minimizing dependency on developers and reducing support calls.
  • SaaS Operations Center provides complete visibility into end user environments with a multi-tenant portal to monitor, manage and maintain integration apps, including:

    • Operational dashboards: deliver better customer support with live integration status and performance metrics.
    • Real-time notifications: meet availability requirements and improve service level agreements (SLAs) with immediate notifications for events or performance issues as they occur.
    • Proactive alerts: reduce support calls by proactively monitoring and addressing issues before they impact customers.
In addition, Mule iON SaaS Edition introduces a gallery of over 20 packaged integration apps and more than 100 Cloud Connectors for the most common integration use cases.

Opportunities for everyone

Ovum's Carter Lusher sees opportunities for everyone involved:
The dark side of SaaS and Cloud is that while they are relatively easy to procure and deploy, it is difficult to integrate them with existing enterprise applications and other SaaS offerings. What makes integration even more challenging is the proliferation of SaaS deployed within an organisation as line-of-business managers procure point solutions to their specific needs that really should be integrated with other systems in order to maximize value and manageability.
This becomes a challenge for IT and the vendors who are faced with a plethora of public and private APIs that require brute force to integrate. Integration is expensive, with estimates of $8 of integration work for every $1 of SaaS subscription or software license.
For SaaS and traditional enterprise application vendors, MuleSoft’s Mule iON SaaS Edition offers the ability to create pre-packaged integration modules that will give them a compelling story during the sales cycle without dramatically increasing costs or long-term maintenance. For example, HR talent management SaaS vendor PeopleMatter used Mule iON to create a new-hire onboarding module that connects with ADP payroll processing through ADP’s private APIs.
For systems integrators, Mule iON SaaS Edition offers the ability to create reusable connectors for a variety of horizontal and industry-specific applications and SaaS. This not only reduces the cost of integrations, which can be a competitive advantage in a sales cycle, but also gives the SI the opportunity to sell more value-added consulting as the focus of sales discussion moves away from brute force integration to maximizing the business value of enterprise applications or SaaS.
In other news, MuleSoft announced a record quarter in Q1 2012, achieving a 109 percent increase in bookings year over year, the privately held San Francisco company said. This was driven by new customer wins among major companies; key SaaS vendor partnerships added in Q1 include Avalara and Zuora. Additionally, the company reported a strong customer renewal rate of 95 percent.
