Monday, December 10, 2012

Multi-device tool architecture from Embarcadero primes pump for accelerated enterprise mobile development for 2013

The modern class of C and C++ tools is the workhorse of PC application development. And Objective-C tools have proven to be the rapid application development means of choice for native development on iOS and Mac OS X.

So wouldn't it be nice to let developers with the skills and proficiency to build native applications for the prominent enterprise computing clients of yesteryear (like Windows) bring better apps, with ease, to all the mobile and fat-client types demanded for the foreseeable future?

Embarcadero Technologies thought so, and long enough ago that it began re-architecting its compiler and C++Builder development platform in time to now provide write-once, run-natively-anywhere-that-counts benefits. [Disclosure: Embarcadero Technologies is a sponsor of BriefingsDirect podcasts.]

And now is when it really counts, with the advent of Windows 8, growing Mac OS X use and exploding sales of iOS and Android clients.

Embarcadero on Monday made generally available C++Builder XE3, which allows a common development effort -- using a new 64-bit compiler -- to natively target Windows 8 and Mac OS X on Intel (not yet ARM) clients. And coming this summer, the same compiler will output those same apps to run natively on iOS and Android mobile clients. ARM support arrives at the end of 2013.
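
For a concrete sense of what one code effort across many targets means at the source level, here is a minimal, generic C++ sketch -- illustrative only, not Embarcadero-specific code. The shared logic compiles unchanged for every platform; only a thin layer varies, keyed off standard compiler-defined macros.

```cpp
// Single-source, multi-target C++: the same translation unit is compiled
// once per platform. _WIN32, __APPLE__, and __ANDROID__ are standard
// compiler-defined macros; everything else here is shared logic.
#include <iostream>
#include <string>

std::string platformName()
{
#if defined(_WIN32)
    return "Windows";
#elif defined(__APPLE__)
    return "Mac OS X / iOS";
#elif defined(__ANDROID__)
    return "Android";
#else
    return "Unknown";
#endif
}

int main()
{
    // The application logic above this line is identical on every target.
    std::cout << "Running natively on " << platformName() << std::endl;
    return 0;
}
```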

What's more, more of the Embarcadero stable of tools and IDEs will leverage the architecture. So: more tools to build apps once that then run natively on more devices. The compiler architecture is extensible, too, so more tools can take more code to more platforms. Almost rhymes.

Vision to close chasms

The vision to bridge the long-standing chasm between mobile and full client environments -- never mind the Windows-Mac chasm -- came as Embarcadero acquired the CodeGear technology set from Borland back in 2008. Embarcadero said it immediately set out to build C++Builder XE3 then, to allow one code effort for many more targets.
"The old way of supporting multiple platforms was not practical," said Michael Swindell, Senior Vice President of Marketing and Product Management at Embarcadero in San Francisco. That old way included highly redundant and costly development to target different platforms. The old way forced ISVs and enterprises to make guesses about which clients to target, despite an extremely dynamic market and fast-changing users preferences.

"We needed to re-organize for a multi-client world," said Swindell. He said that ISVs and developers can hedge their bets by using C++Builder XE3 now, with the knowledge the same code will be able to quickly tuned and deployed in Q3 of this year on iOS and Android.

The common mantra behind Delphi and C++Builder, as well as any RAD IDE, of course, is to make less code do more work, fast. C++Builder XE3 takes that a big step further by applying Embarcadero's agile benefits to a common architecture supporting the major IDEs to deliver cross-client platform development on all the major targets. Full Delphi support on the new C++Builder XE3 underlying architecture comes this spring, with all the Delphi database connectivity and web services support built in.

And there are some additional synergies that should appeal to the commercial ISVs. The C++Builder XE3 architecture is already "app store ready," making it easy to bring apps to the Apple and Google app stores. But for enterprises, Embarcadero is also developing synergies between its AppWave capabilities and C++Builder XE3 so that enterprises too can gain a streamlined means to deploy the apps for PCs, Macs, and iOS and Android devices from an AppWave app store. Expect that in the fall, said Swindell.

So the net-net on this from my perspective is that Embarcadero has primed the pump for accelerated enterprise mobile development for 2013. And it's given developers with C and C++ skills the means to build mobile apps and deploy them on-demand via app stores and subscription models, even inside enterprises. It also means that apps can be designed with common logic and requirements and then delivered on multiple devices, so workforces can use those apps anywhere, anytime. Very powerful.

Best of mobile to enterprise

In essence, this brings what we have come to like about consumer, entertainment, and web apps to the workplace -- natively, on all relevant platforms -- in a way that's not too complicated, costly, or time-consuming.

I'm not seeing that in any comprehensive way from Microsoft, Apple or Google, nor from any PaaS development offerings in the market.

And so I would expect that PaaS-hungry providers may look to OEM or otherwise license the C++Builder XE3 technology to bring to a cloud deployment model, and to better cross the PC-Mac divide, and to consolidate new apps development for all uses.

The C and C++ IDE tools and C++Builder XE3 technology, incidentally, need not run only on-premises. Embarcadero is exploring the means to make it all cloud-based, and to build tool clients using HTML5. A hybrid future for such multi-device development can't be too far off.


GigaSpaces survey shows need for tools for fast big data, strong interest in big data in cloud

It's no surprise that most enterprises are now taking big data more seriously. But what might raise an eyebrow is how many organizations say they rely on real-time processing of big data to fuel their business, as well as the number of companies who say they're thinking about taking their big data to the cloud.

These findings come from a recent survey conducted by GigaSpaces, which asked 243 IT executives in various industries about their big data perceptions and plans. GigaSpaces, a provider of end-to-end scaling solutions for distributed application environments and an open platform-as-a-service (PaaS) stack for cloud deployment, conducted the survey online during the fall of 2012.

Among the survey findings:
  • Some 80 percent of respondents said that big-data processing is a mission-critical function
  • More than 70 percent said their business requires fast -- real-time -- processing of big data, whether in large volumes, at high velocity, or both
  • Only 20 percent of respondents said they have no plans to move their big data to the cloud, indicating a widespread readiness to consider the option

The first finding shows that enterprises are moving beyond collecting and storing big data and delving deeper. Their businesses require that they process this data in real time as events occur, be they trades on a stock exchange, alerts from security monitors, or location changes from GPS devices.
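
To make that distinction concrete, here is a toy C++ sketch of the real-time pattern respondents describe -- an always-on consumer that reacts to each event as it arrives rather than batching it for later analysis. The event fields and handler are hypothetical; production systems add partitioning, back-pressure, and durable storage.

```cpp
// Toy real-time event loop: each event is handled on arrival instead of
// being collected for a later batch job. Event fields are hypothetical.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

struct Event { std::string source; double value; };

std::queue<Event> inbox;
std::mutex m;
std::condition_variable cv;
bool done = false;

void consume()
{
    std::unique_lock<std::mutex> lock(m);
    for (;;) {
        cv.wait(lock, [] { return done || !inbox.empty(); });
        if (inbox.empty()) break;          // producer finished
        Event e = inbox.front();
        inbox.pop();
        lock.unlock();
        // React immediately: a trade alert, a security alarm, a GPS fix.
        std::cout << "handled " << e.source << " = " << e.value << "\n";
        lock.lock();
    }
}

int main()
{
    std::thread worker(consume);
    for (int i = 0; i < 3; ++i) {
        { std::lock_guard<std::mutex> g(m); inbox.push({"trade", 100.0 + i}); }
        cv.notify_one();
    }
    { std::lock_guard<std::mutex> g(m); done = true; }
    cv.notify_one();
    worker.join();
    return 0;
}
```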

The second finding demonstrates the need for low latency and high performance in processing big data streams, as these functions are becoming mission critical and delays or dropped data can't be tolerated.

Real-time tools

GigaSpaces, which sponsored the survey, also asked survey respondents what tools they're using to process big data in real time, and here's where a gap is revealed: only 12 percent have adopted real-time event processing tools. According to GigaSpaces, this suggests that most enterprises still have not found the right solution that offers the ability to handle massive data while also providing the required speed.

"Most enterprises haven’t yet adopted these real-time event processing tools, they're managing instead with a combination of a NoSQL data store with a Hadoop processing platform," says Tsipi Erann, marketing communications manager at GigaSpaces. "It's clear that enterprises haven’t yet found the right solution that’s dedicated to real-time processing and also fits into their architecture."

As for moving big data to the cloud, survey respondents seem eager to reap the cost-savings and improved agility offered by this model. Only 20 percent of them said they have no plans to move big data applications to the cloud, while 44 percent have concrete plans or have already started this migration.

Among the 34 percent who said they were unsure about cloud deployments, primary concerns cited were scalability and security.

GigaSpaces cross-referenced answers to the question of big data's business importance with answers to the cloud question and came up with this statement: 80 percent of respondents who define their big data applications as mission critical to the business are planning or considering a move to the cloud. The company said it will use findings from this survey to help shape the direction of its offerings.

"We understand the importance of giving customers the right features and will use the input in the creation of such a solution, whether it’s integration with Hadoop or processing or transactional management," says Yaron Parasol, product manager at GigaSpaces.

(BriefingsDirect contributor Cara Garretson provided editorial assistance and research on this post. She can be reached on LinkedIn.)


Wednesday, December 5, 2012

Message Bus bets its cloud-native messaging service will improve the art of email delivery


Message Bus has a pedigreed CEO, an impressive list of customers and partners, and technology that makes its cloud-based service highly scalable and resilient, yet the young company's goal is simple: help customers keep their legitimate email messages out of recipients' spam folders.

With Twitter co-founder Jeremy LaTrasse at the helm, Message Bus is navigating the often dark waters of email delivery so that its customers don't have to. The company's Global Delivery Network, launched in mid-November, aims to be to email and mobile messaging what Amazon Web Services are to cloud computing and Dropbox to cloud storage.

The service is a cloud-native application, meaning that it's not tied to the underlying infrastructure of a single cloud service provider. Therefore Message Bus can scale and move its customers' workloads across different cloud infrastructures as needed (the company says it currently deploys on Joyent, Amazon Web Services and Rackspace cloud services). This approach avoids the scale limitations of working with a single cloud service provider, as well as the possibility of service disruption if a provider experiences an outage.

But it takes more than the right architecture to provide an effective message delivery service. Message Bus has done extensive relationship building with top ISPs including AOL, Microsoft and Google to understand what they expect from a trusted sender and sticks to those guidelines, resulting in a higher likelihood that legitimate emails make it to the inbox.

"More than 90 percent of all mail worldwide ends up in one of those places; if there’s no trust with those ISPs then the message won’t make it into the box," says LaTrasse. "So we had the idea to build best practices into the network, so everyone who sends through our service follows them. We made the relationships happen, and all our customers benefit, as well as their recipients."

Out of control

Currently, one in five legitimate emails is either blocked or routed to the spam folder, says Message Bus, making it difficult for companies relying on email as a primary driver of revenue and brand recognition to get their message across. What's more, the cost and complexity of launching messaging campaigns across multiple channels (email, mobile and social messaging, etc.) is spinning out of control.

Customers of the Global Delivery Network don't need dedicated messaging hardware or personnel; instead they build a virtual SMTP bridge to send their messages across Message Bus' network. This significantly reduces upfront infrastructure costs as well as ongoing staffing, says LaTrasse, and allows customers to focus on the content of the messages, knowing that they'll be delivered in a manner that's effective, secure, and compliant.
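
To illustrate what such a virtual SMTP bridge can look like from the customer's side, here is a hedged C++ sketch that relays one message through a hosted endpoint using libcurl's SMTP support. The host, credentials, and addresses are placeholders; Message Bus' actual endpoint and authentication scheme may differ.

```cpp
// Relay a single message through a hosted SMTP endpoint with libcurl.
// Host, port, user, key, and addresses below are all placeholders.
#include <curl/curl.h>
#include <cstring>

static const char *payload =
    "To: user@example.com\r\n"
    "From: app@example.com\r\n"
    "Subject: Transactional message\r\n"
    "\r\n"
    "Body of the message.\r\n";

// libcurl pulls the message body through this read callback.
static size_t read_cb(char *buf, size_t size, size_t nitems, void *userp)
{
    const char **pos = (const char **)userp;
    size_t room = size * nitems;
    size_t len = strlen(*pos);
    if (len > room) len = room;
    memcpy(buf, *pos, len);
    *pos += len;
    return len;  // returning 0 signals end of the message body
}

int main(void)
{
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    const char *pos = payload;
    struct curl_slist *rcpt = curl_slist_append(NULL, "<user@example.com>");

    curl_easy_setopt(curl, CURLOPT_URL, "smtp://smtp.relay.example:587");
    curl_easy_setopt(curl, CURLOPT_USE_SSL, (long)CURLUSESSL_ALL); // STARTTLS
    curl_easy_setopt(curl, CURLOPT_USERNAME, "api-user");
    curl_easy_setopt(curl, CURLOPT_PASSWORD, "api-key");
    curl_easy_setopt(curl, CURLOPT_MAIL_FROM, "<app@example.com>");
    curl_easy_setopt(curl, CURLOPT_MAIL_RCPT, rcpt);
    curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
    curl_easy_setopt(curl, CURLOPT_READDATA, &pos);
    curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);

    CURLcode res = curl_easy_perform(curl);

    curl_slist_free_all(rcpt);
    curl_easy_cleanup(curl);
    return res == CURLE_OK ? 0 : 1;
}
```
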
At the same time it unveiled the Global Delivery Network, Message Bus launched a free reporting service called Discover, which informs customers of email senders who may be abusing their domain names for illicit or unauthorized purposes. And late in November the company announced an enhancement to its service with the deployment of Opscode's Hosted Chef to automate configuration, environment, and application management across the multiple cloud infrastructures powering the company's service.

Message Bus lists American Greetings, MyFitnessPal, and Telly among its early users.

(BriefingsDirect contributor Cara Garretson provided editorial assistance and research on this post. She can be reached on LinkedIn at http://linkd.in/T6trhH.)


Thursday, November 29, 2012

New strategies now needed to simplify data backup and protection in complex enterprise IT environments

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Quest Software.

The latest BriefingsDirect IT trends discussion targets enterprise backup, why it’s broken, and how to fix it.

Nowadays the backup of enterprise information and associated data protection are fragmented, complex, and inefficient. But new approaches are helping to simplify the data-protection process, keep costs in check, and improve recovery speed and confidence.

Joining us to share insights on how data protection became such a mess -- and how new techniques are being adopted to gain comprehensive and standard control over the data lifecycle -- are John Maxwell, Vice President of Product Management for Data Protection at Quest Software, now part of Dell, and George Crump, Founder and Lead Analyst at Storage Switzerland, an analyst firm focused on the storage market. The chat is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.  [Disclosure: Quest Software is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Why has something seemingly as straightforward as backup become so fragmented and disorganized?

Maxwell: Dana, I think it’s a perfect storm, to use an overused cliché. If you look back 20 years ago, we had heterogeneous environments, but they were much simpler. There were NetWare and UNIX, and there was this new thing called Windows. Virtualization didn’t even really exist. We backed up data to tape, and a lot of data was in terabytes, not petabytes.

Flash forward to 2012, and there’s more heterogeneity than ever. You have stalwart databases like Microsoft SQL Server and Oracle, but then you have new apps being built on MySQL. You now have virtualization, and, in fact, we're at the point this year where we're surpassing the 50 percent mark on the number of servers worldwide that are virtualized.

Now we're even starting to see people running multiple hypervisors, so it’s not even just one virtualization platform anymore, either. So the environment has gotten bigger, much bigger than we ever thought it could or would. We have numerous customers today that have data measured in petabytes, and we have a lot more applications to deal with.

And last, but not least, we now have more data that’s deemed mission critical, and by mission critical, I mean data that has to be recovered in less than an hour. Surveys 10 years ago showed that in a typical IT environment, 10 percent of the data was mission critical. Today, surveys show that it’s 50 percent and more.

Crump: I would dovetail into what he just mentioned about mission criticality. There are definitely more platforms, and that’s a challenge, but the expectation of the user is just higher. The term I use for it is IT is getting "Facebooked."

High expectations

I've had many IT guys say to me, "One of the common responses I get from my users is, 'My Facebook account is never down.'" So there is this really high expectation on availability, returning data, and things of that nature that probably isn’t really fair, but it’s reality.

One of the reasons that more data is getting classified as mission critical is just that the expectation that everything will be around forever is much higher.

The other thing that we forget sometimes is that the backup process, especially a network backup, probably unlike any other, stresses every single component in the infrastructure. You're pulling data off of a local storage device on a server, it’s going through that server CPU and memory, it’s going down a network card, down a network cable, to a switch, to another card, into some sort of storage device, be it disk or tape.

So there are 15 things that happen in a backup and all 15 things have to go flawlessly. If one thing is broken, the backup fails, and, of course, it’s the IT guy’s fault. It’s just a complex environment, and I don’t know of another process that pushes on all aspects of the environment in one fell swoop like backup does.
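
Crump's point is easy to picture in code. Here is a toy C++ sketch of a backup job as a chain of stages in which the first failure aborts the whole run; the stage names follow the hops he describes, and the checks are stand-ins.

```cpp
// A backup job as a chain of stages: every stage must succeed, and the
// first failure aborts the entire run. Stage checks here are stand-ins.
#include <functional>
#include <iostream>
#include <string>
#include <vector>

struct Stage {
    std::string name;
    std::function<bool()> run;  // returns false on failure
};

int main()
{
    std::vector<Stage> job = {
        {"read blocks from local storage", [] { return true; }},
        {"pass through server CPU/memory", [] { return true; }},
        {"send out the network card",      [] { return true; }},
        {"traverse cable and switch",      [] { return true; }},
        {"write to disk or tape target",   [] { return true; }},
    };

    for (const Stage &s : job) {
        if (!s.run()) {
            std::cerr << "backup FAILED at: " << s.name << "\n";
            return 1;  // one broken link fails the whole backup
        }
    }
    std::cout << "backup succeeded: every stage completed\n";
    return 0;
}
```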

Gardner: So the stakes are higher, the expectations are higher, the scale and volume and heterogeneity are all increased. What does this mean, John, for those that are tasked with managing this, or trying to get a handle on it as a process, rather than a technology-by-technology approach?

Maxwell: There are two issues here. One, you expect today's storage administrator, or sysadmin, to be a database administrator (DBA), a VMware administrator, a UNIX sysadmin, and a Windows admin. That’s a lot of responsibility, but that’s the fact.

A lot of people think that they are going to have as deep a level of knowledge on how to recover a Windows server as they would an Oracle database. That's just not the case, and it's the same thing from a product perspective, from a technology perspective.

Is there really such thing as a backup product, the Swiss Army knife, that does the best of everything? Probably not, because being the best of everything means different things to different accounts. It means one thing for the small to medium-size business (SMB), and it could mean something altogether different for the enterprise.

We've now gotten into a situation where we have the typical IT environment using multiple backup products that, in most cases, have nothing in common. They have a lot of hands in the pot trying to manage data protection and restore data, and it has become a tangled mess.

Gardner: Before we dive a little bit deeper into some of these major areas, I'd like to just visit another issue that’s very top of mind for many organizations, and that’s security, compliance, and business continuity types of issues, risk mitigation issues. George Crump, how important is that to consider, when you look at taking more of a comprehensive or a holistic view of this backup and data-protection issue?

Disclosure laws

Crump: It's a really critical issue, and there are two ramifications. Probably the one that strikes fear in the heart of every CEO on the planet is all the disclosure laws that exist now that say that, when you lose a customer’s data, you have to let him know. Unfortunately, probably the only effective way to do that is to let everybody know.

I'm sure everybody listening to this podcast has gotten more than one letter already this year saying their Social Security number has been exposed, things like that. I can think of three or four I've already gotten this year.

So there is the downside of legally having to admit you made a mistake, and then there is the legal requirements of retaining information in case of a lawsuit. The traditional thing was that if I got a discovery motion filed against me, I needed to be able to pull this information back, and that was one motivator. But the bigger motivator is having to disclose that we did lose data.

And there's a new one coming in. We're hearing about big data, analytics, and things like that. All of that is based on being able to access old information in some form, pull it back from something, and be able to analyze it.

That is leading many, many organizations to not delete anything. If you don't delete anything, how do you store it? A disk-only type of solution forever, as an example, is a pretty expensive solution. I know disk has gotten a lot cheaper, but forever, that’s a really long time to keep the lights on, so to speak.

Gardner: Let's look at this a bit more from the problem-solution perspective. We have multiple platforms, we have operating systems, hypervisors, application types, even appliances. What's the solution?

Maxwell: The problem is we need to step back, take inventory of what we've got, and choose the right solution to solve the problem at hand, whether you're an SMB or an enterprise.

But the biggest thing we have to address is, with the amount and complexity of the data, how can we make sysadmins, storage administrators, and DBAs productive, and how can we get them all on the same page? Why do each one of these roles in IT have to use different products?

George and I were talking earlier. One of the things that he brought up was that in a lot of companies, data is getting backed up over and over by the DBA, the VMware administrator, and the storage administrator, which is really inefficient. We have to look at a holistic approach, and that may not be one-size-fits-all. It may be choosing the right solutions, yet providing a centralized means for administration, reporting, monitoring, etc.

Gardner: Is there anything different and specific about backup that makes this even harder to move from that point solution, best-of-breed mentality, into more of a comprehensive process standardization approach?

Demands and requirements

Crump: It really ties into what John said. Every line of business is going to have its own demands and requirements. To expect not even a backup administrator, but an Oracle administrator that’s managing an Oracle database for a line of business, to understand the nuances of that business and how they want to keep things is a lot to ask.

When backup is broken, the default survival mechanism is to throw everything out, buy the latest enterprise solution, put the stake in the ground, and force everybody to centralize on that one item. That works to a degree, but in every project we've been involved with, there are always three or four exceptions. That means it really didn’t work. You didn't really centralize.

Then there are covert operations of backups happening, where people are backing up data and not telling anybody, because they still don't trust the enterprise application. Eventually, something new comes out. The most immediate example is virtualization, which spawned the birth of several different virtualization-specific applications. So bringing all that back in again becomes very difficult.

I agree with John. What you need to do is give the users the tools they want. Users are too sophisticated now for you to say, "This is where we are going to back it up and you've got to live with it." They're just not going to put up with that anymore. It won't work.

So give them the tools that they want. Centralize the process, but not the actual software. I think that's really the way to go.

Gardner: So we recognize that one size fits all probably isn’t going to apply here. We're going to have multiple point solutions. That means integration at some level or multiple levels. That brings us to our next major topic. How do we integrate well without compounding the complexity and the problems set? John?

Maxwell: We've been working on this now for almost two years here at Quest, and now at Dell, and we are launching in November something called NetVault XA. “XA” stands for Extended Architecture. We have a portfolio of very rich products that span the SMBs and the enterprise, with focus on virtual backup, heterogeneous backup, instantaneous snapshots and deep application recovery, and we’re keenly interested in leveraging those technologies for the DBAs and sysadmins in ways that make their lives easier and make sure they are more productive.

NetVault XA solves some really big issues. First of all, it unifies the user experience across products, and by user, I mean the sysadmin, the DBA, and the storage administrator, across products. The initial release of NetVault XA will support both our vRanger and NetVault Backup, as well as our NetVault SmartDisk product, and next year, we'll be adding even more of our products under NetVault XA as well.

So now we've provided a common means of administration. We have one UI. You don’t have to learn something different. Everyone can work on the same product, yet based on your login ID, you will have access to different things, whether it's data or capabilities, such as restoring an Oracle or SQL Server database, or restoring a virtual machine (VM).

That's a common UI. A lot of vendors right now have a lot of solutions, but they look like they're from three, four, or five different companies. We want to provide a singular user experience, but that's just really the icing on the cake with NetVault XA.

If we go down a little deeper into NetVault XA, once it's installed alongside vRanger, NetVault, or both, it's going to self-identify that vRanger or NetVault environment, and it's going to allow you to manage it the way that you have already set it up.

New approach

We're really delivering a new approach here, one we think is going to be unique in the industry. That's the ability to logically group data and applications within lines of business.

You gave an example earlier of Oracle. Oracle is not an application. Oracle is a platform for applications, and sometimes applications span databases, file systems, and multiple servers. You need to be looking at that from a holistic level, meaning what makes up application A, what makes up application B, C, D, etc.?

Then, what are the service levels for those applications? How mission critical are they? Are they in that 50 percent of data that we've seen from surveys, or are they data that could be restored from a week ago and it wouldn't matter? But then, again, it's having one tool that everyone can use. So you now have a whole different user experience and you're taking a whole different approach to data protection.
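
As a rough sketch of the grouping idea -- illustrative C++ only, not NetVault XA's actual data model -- an "application" here spans several components and carries a service level, such as a recovery-time objective, that determines whether it lands in the mission-critical bucket.

```cpp
// Hypothetical grouping of components into "applications" per line of
// business, each with a recovery-time objective (RTO) that drives policy.
#include <chrono>
#include <iostream>
#include <string>
#include <vector>

struct Application {
    std::string name;
    std::vector<std::string> components;  // databases, file systems, VMs
    std::chrono::minutes rto;             // recovery-time objective
};

int main()
{
    std::vector<Application> lineOfBusiness = {
        {"order entry",
         {"db: ORDERS", "fs: /exports/orders", "vm: web01"},
         std::chrono::minutes(45)},            // recover in under an hour
        {"archive",
         {"fs: /exports/archive"},
         std::chrono::minutes(7 * 24 * 60)},   // a week is acceptable
    };

    for (const auto &app : lineOfBusiness) {
        bool missionCritical = app.rto <= std::chrono::minutes(60);
        std::cout << app.name << ": "
                  << (missionCritical ? "mission critical" : "standard")
                  << " (" << app.components.size() << " components, RTO "
                  << app.rto.count() << " min)\n";
    }
    return 0;
}
```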

Gardner: There really seems to be a drilling down into these technologies and surfacing information to such a degree that it strikes me as similar to what IT service management (ITSM) did for managing IT systems at a higher level. We're now bringing that to a discrete portion backup and recovery. Does that sound about right, George, or did I overstate it?

Crump: No, that's dead-on. The benefits of that type of architecture are going to be substantial. Imagine if you are the vRanger programmer, when all this started. Instead of having to write half of the backend, you could just plug into a framework that already existed and then focus most of your attention on the particular application or environment that you are going to protect.

You can be releasing the equivalent of vRanger 6 on vRanger 1, because you wouldn’t have to go write this backend that already existed. Also, if you think about it, you end up with a much more reliable software product, because now you're building on a library class that will have been well tested and proven.

Say you want to implement deduplication in a new version of the product or a new product. Instead of having to rewrite your own deduplication engine, just leverage the engine that's already there.

One common means

Maxwell: By having one common means -- whether you're a DBA, a sysadmin, a VMware administrator, or a storage administrator -- you are all on the same page. You can have people all buying into one way of doing things, so we don't have this data being backed up two or three times.

But the other thing that you get, and this is a big issue now, is protecting multiple sites. When we talk about multiple sites, people sometimes assume we mean multiple data centers. But what about all those remote and branch offices? That right now is a big issue that we see customers running into.

The beauty of NetVault XA is I can now have various solutions implemented, whether it's vRanger running remotely or NetVault in a branch office, and I can be managing it. I can manage all aspects of it to make sure that those backups are running properly, or make sure replication is working properly. It could be halfway around the country or halfway around the world, and this way we have consistency.

Speaking of reporting, as you said earlier, what about a dashboard for management? One of our early users of NetVault XA is a large multinational company with 18 data centers and 250,000 servers. They have had to dedicate people to write service-level reports for their backups. Now, with NetVault XA, they can literally give their IT management, meaning their CIO and their CTOs, login IDs to NetVault XA, and they can see a dashboard that’s been color coded.

It can say, "Well, everything is green, so everything is protected," whether it's the Linux servers, Oracle databases, Exchange email, whatever the case. So by being able to reduce that level of complexity into a single pane of glass -- I know it's a cliché, but it really is -- it's really very powerful for large organizations and small.

Even if you have two or three locations and you're only 500 employees, wouldn't it be nice to have the ability to look at your backups, your replicas, and your snapshots, whether they're in the data center or in branch offices, and -- whether you're a sysadmin, DBA, or storage administrator -- to be using one common interface and one common set of rules, so everyone is basically on the same page?

Dispersed operations

So it's having a means to take an inventory and ensure that the servers are being maintained, that everything is being protected, because next to your employees, your data is the most important asset that you have.

Data is everywhere now. It’s in mobile devices. It certainly could be in cloud-based apps. That's one of the things that we didn’t talk about. At Quest we use seven software-as-a-service (SaaS)-based applications, meaning they handle big parts of our business, whether it's Salesforce.com or our helpdesk systems, or even Office 365. This is mission-critical corporate data that doesn’t run in our own data center. How am I protecting that? Am I even cognizant of it?

The cloud has made things even more interesting, just as virtualization has made it more interesting over the past couple of years. With NetVault XA, we give you that one single pane of glass with which you can report, analyze, and manage all of your data.

Mobile devices

Gardner: Just to be clear John, this console is something you can view as a web interface, and I'm assuming therefore also through mobile devices. I'm going to guess that at some point, there will perhaps be even a more native application for some of the prominent mobile platforms.

Maxwell: It’s funny that you mentioned that. This is an HTML5-based application. So it's very new, very fresh, and very graphical. If you look at the UI, it was designed with tablets and laptops in mind. It's gotten to where you can do controls with your thumbs, assuming you're running this on a tablet.

In-house, and with early support customers, you can log into this remotely via laptops or tablets. We even have some people using it on mobile phones, even though we're not quite there yet -- I'm talking about the form factor of how the screens light up -- but we will definitely be going that way. So a sysadmin or storage administrator can have at their fingertips the status of what’s going on in the data-protection environment.

What's nice is because this is a thin client, a web UI, you can define user IDs not only for the sysadmins and DBAs and storage administrators, but like I said earlier, IT management.

So if your boss, or your boss’ boss, wants to dial in and see the health of things, how much data you’re protecting, how much data is being replicated, what data is being protected up in the cloud, which is on-prem, all of that sort of stuff, they can now have a dashboard approach to seeing it all. That’s going to make everyone more productive, and it's going to give them a better sense that this data is being protected, and they can sleep at night.
If you don’t have a way to manage and see all of your data protection assets, it's really just a lot of talk.

Gardner: Is there anything here going forward that will make having a process approach to a data lifecycle and backup and recovery even more important?

Maxwell: Dana, you hit on something that's really near and dear to my heart, which is data deduplication. We have a very broad strategy. We offer our own software-based dedupe. We support every major hardware-based dedupe appliance out there, and we're now adding support for Dell’s DR Series DR4000 dedupe appliances. But we're still very much committed to tape, and we're building initiatives based on storing data in the cloud and backing up, replicating, failing over, and so forth.
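
For readers who want to see how dedupe earns its savings, here is a minimal C++ sketch of block-level deduplication -- a generic illustration, not Quest's engine. The stream is split into chunks, each unique chunk is stored once, and repeats become references; real engines use cryptographic fingerprints and variable-size chunking rather than this toy's fixed blocks.

```cpp
// Toy block-level dedupe: store each unique chunk once, keep an ordered
// list of chunk ids (the "recipe") that can rebuild the original stream.
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main()
{
    const std::string data = "AAAABBBBAAAABBBBCCCC";  // pretend backup stream
    const size_t block = 4;

    std::map<std::string, size_t> store;  // chunk contents -> chunk id
    std::vector<size_t> recipe;           // ordered ids rebuild the stream

    for (size_t off = 0; off < data.size(); off += block) {
        std::string chunk = data.substr(off, block);
        auto it = store.find(chunk);
        if (it == store.end())
            it = store.emplace(chunk, store.size()).first;  // new unique chunk
        recipe.push_back(it->second);                       // reference only
    }

    std::cout << "logical blocks: " << recipe.size()
              << ", stored blocks: " << store.size() << "\n";
    // Prints "logical blocks: 5, stored blocks: 3" -- the saving dedupe buys.
    return 0;
}
```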

One of the things that we built into NetVault XA that's separate from the policy management and online monitoring is that we now have historical data. This is going to give you the ability to do some capacity management and capacity planning and see what the utilization is.

How much storage are your backups taking? What's the most optimum number of generations? Where are you keeping that data? Is some data being kept too long? Is some data not being kept long enough?

By offering a broad strategy that says we support a plethora of backup targets, whether it's tape, special-purpose backup appliances, software-based dedupe, or even the cloud, we're giving customers flexibility, because they have unique needs and they have different needs, based on service levels or budgets. We want to make them flexible, because, going back to our original discussion, one size doesn’t fit all.

Crump: Just to tie in with what John said, we need flexibility that doesn’t add complexity. Almost everything we've done so far in the environment up to now, has added flexibility, but also, for every ounce of flexibility, it feels like we have added two ounces of complexity, and it's something we just can't afford to deal with. So that's really the key thing.

Looking forward, at least on the horizon, I don't see a big shift, something like virtualization that we need to be overly concerned with. What I do see is the virtual environment becoming more and more challenging, as we stack more and more VMs on it. The amount of I/O and the amount of data protection process that will surround every host is going to continue to increase. So the time is now to really get the bull by the horns and institute a process that will scale with the business long-term.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Quest Software.


Wednesday, November 28, 2012

HP BSM software newly harnesses big-data analysis to better predict, prevent, and respond to IT issues

HP this week announced a new version of its HP Business Service Management (BSM) software to endow IT organizations with big-data analysis capabilities across mobile, hybrid, and cloud IT environments. The goal: to significantly improve the performance and availability of software services.

As organizations have adopted virtualization and cloud technologies, the complexity of effectively monitoring trouble across these systems has skyrocketed. And, with the rise of shared services, IT no longer knows or controls all the technologies supporting the business.

So HP has broadened its BSM solutions to deliver better end-to-end visibility into IT applications and services by exploiting powerful, real-time and historical analytics. With enhanced BSM, IT can anticipate performance and trouble issues before they happen. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

“IT organizations are looking for new ways to deliver predictable service levels," said Ajei Gopal, senior vice president and general manager, Hybrid and Cloud Products, Software at HP. “The new HP Business Service Management software delivers end-to-end operational intelligence to help IT make better decisions and improve service levels in complex, dynamic IT environments.”

Operational analytics

New to HP BSM is HP Operational Analytics (OpsAnalytics), a capability that delivers ongoing intelligence about the health of IT services by automating the correlation and analysis of consolidated data, including reams of machine data, logs, events, topology, and performance information.

OpsAnalytics is enabled through the integration of HP ArcSight Logger, a universal log management solution, with the correlation capabilities of HP Operations Manager i (OMi) and the predictive analytics of HP Service Health Analyzer (SHA). This combination delivers deep visibility and insight into nearly any performance or availability issue so that, says HP, IT operators can (see the sketch after this list):
  • Remediate known problems before they occur with predictive analytics that forecast problems and prioritize issues based on business impact
  • Proactively solve unknown issues by collecting, storing, and analyzing IT operational data to automatically correlate service abnormalities with the problem source
  • Resolve incidents faster with knowledge based on historical analysis of prior similar events that contains search capabilities across logs and events.
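
As a hedged sketch of the correlation idea behind those bullets -- a generic illustration, not HP's algorithm -- events landing on the same host within a short time window can be clustered so that one underlying fault surfaces as a single prioritized issue:

```cpp
// Cluster events by host within a time window so one fault shows up as
// one issue. Window size and event records are illustrative.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct Event { long t; std::string host; std::string msg; };

int main()
{
    std::vector<Event> events = {
        {100, "db01",  "disk latency high"},
        {102, "db01",  "query timeouts"},
        {103, "db01",  "replication lag"},
        {450, "web02", "cache miss spike"},
    };
    std::sort(events.begin(), events.end(),
              [](const Event &a, const Event &b) { return a.t < b.t; });

    const long window = 30;  // seconds; tunable correlation window
    for (size_t i = 0; i < events.size(); ) {
        size_t j = i + 1;
        while (j < events.size() && events[j].host == events[i].host &&
               events[j].t - events[i].t <= window)
            ++j;
        std::cout << "issue on " << events[i].host << " ("
                  << (j - i) << " correlated events), first symptom: "
                  << events[i].msg << "\n";
        i = j;
    }
    return 0;
}
```
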
HP BSM further helps clients maximize IT investments with end-to-end visibility across heterogeneous environments, enabling clients to:
  • Ensure service availability with a 360-degree view of IT performance, gathered by aggregating data from disparate sources into a single dashboard using out-of-the-box connectors to a range of management frameworks, including IBM Tivoli Enterprise Console, IBM Tivoli Monitoring, and Microsoft System Center
  • Resolve and improve performance of applications running in OpenStack and Python cloud environments with diagnostics that pinpoint performance bottlenecks
  • Improve availability of web and mobile applications through greater insight into client side performance issues.
HP also enables virtualization administrators to diagnose and troubleshoot performance bottlenecks in highly virtualized environments with HP Virtualization Performance Viewer (vPV), which helps reduce operational resources by up to 70 percent and decrease time to problem resolution by up to 50 percent, and is available as a free download, said HP.

The free versions of HP Virtualization Performance Viewer (vPV) and HP ArcSight Logger are available to download from www.hp.com/go/vpv and www.hp.com/go/opsanalytics respectively.
