Tuesday, January 7, 2014

Inside story on how HP implemented the TippingPoint intrusion prevention system across its own security infrastructure

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

The high cost of unwanted intrusion and malware across corporate networks is well known. Less talked-about are the successful ways that organizations are thwarting ongoing, adaptive and often-insider-driven security breaches.

Companies are understandably reluctant to readily discuss either their defenses or their mishaps. Yet HP, one of the world's largest companies, is both a provider and a practitioner of enterprise intrusion prevention systems (IPS). So we asked HP to explain how it builds and uses such technologies, and to share some insider tips on best practices.

The next edition of the HP Discover Podcast Series therefore explores the ins and outs of improving enterprise intrusion prevention. We learn how HP and its global cyber-security partners have made the HP Global Network more resilient and safe. We also gain more insight into HP's vision for security and learn how that has been effectively translated into actual implementation.

The inside story comes from Jim O'Shea, Network Security Architect for HP Cyber Security Strategy and Infrastructure Engagement. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Gardner: What are some of the major trends that are driving the need for better intrusion detection and prevention nowadays?

O’Shea: If you look at the past, you had reactive technologies. We had firewalls that blocked and looked at traffic at the port level. Then we evolved to trying to detect things that were malicious in intent by using IPS. But that was still reactionary. It was a nice approach, but we were reacting. If you knew it was bad, why did we let it in in the first place?

The evolution in IPS was toward prevention. If you know it's bad, why do you even want to see it? Why do you want to try to react to it? Just block it early. That’s the trend that we’ve been following.

Gardner: But we can’t just have a black-and-white entry. We want access control, rather than just a firewall. So is there a new thinking, a new vision, that’s been developed over the past several years about these networks and what should or shouldn't be allowed through them?

O’Shea: You’re talking about letting the good in. That's the evolution we're all striving for. Let the good traffic in. Let who you are be a guide. Maybe look at what you have. You can also explore the health of your device. Those are all trends that we’re striving for now.

Gardner: I recall, Jim, that there was a Ponemon Institute report about a year or so ago that really outlined some of the issues here.

Number of attacks

O’Shea: The Ponemon study illustrated the vast number of attacks and the rising costs of intrusions. It highlighted the types of trends we’re trying to head off. Those types of reports are guiding factors in taking a more proactive, automated response. [Learn more about intrusion prevention systems.]

Gardner: I suppose what’s also different nowadays is that we’re not only concerned with outside issues in terms of risk, but also insider attacks.

O’Shea: You’re exactly right. Are you hiring the right people? That’s a big issue. Are they being influenced? Those are all huge issues. Big data can handle some of that and pull that in. Our approach on intrusion detection isn’t to just look at what’s coming from the outside, but also look at all data traversing the network.

When we deployed the TippingPoint solution, we didn’t vary the policies or profiles we deployed based on whether traffic starts on the inside or on the outside. It was an equal deployment.

An insider attack could also be somebody who walks into a facility, gains physical access, and connects to your network. You have a whole rogue-wireless angle, in which people can gain access and then probe and poke around. And if it’s malware traffic, from our perspective the approach we took with the IPS is that inside or outside doesn’t matter. If we can detect it, and we can be in the path, it’s a block.

TippingPoint technology is an appliance-based technology. It’s an inline device. We deploy it inline; it sits in the network, and the traffic flows through it. It looks at the characteristics or the reputation of the traffic. Reputation is the more real-time change in the system -- this network, IP address, or URL is known for malware, and so on. That’s a dynamic update. The static updates are signature-based: the detection of a vulnerability or of a specific exploit aimed at an operating system.
So intrusion prevention means detecting that traffic and blocking it, preventing it from completing its communication to the end node.
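
To make those two filter classes concrete, here is a minimal conceptual sketch in Python. It is not TippingPoint's implementation; the reputation entries, signatures, and packet fields are hypothetical stand-ins for the dynamic and static updates O'Shea describes.

```python
# Conceptual sketch of an inline IPS decision: not TippingPoint's real
# implementation, just the two filter classes described above.

import re

# Dynamic filter: reputation entries are updated in near real time.
REPUTATION_BLOCKLIST = {"198.51.100.23", "203.0.113.77"}  # hypothetical IPs

# Static filter: signatures describe a vulnerability or a specific exploit.
SIGNATURES = [
    ("hypothetical buffer-overflow exploit", re.compile(rb"\x90{32,}")),  # NOP sled
    ("SQL injection probe", re.compile(rb"(?i)' or 1=1")),
]

def inspect(packet: dict) -> str:
    """Return 'block' or 'forward' for a packet flowing through the device."""
    # 1. Reputation check (dynamic): known-bad source? Block early.
    if packet["src_ip"] in REPUTATION_BLOCKLIST:
        return "block"
    # 2. Signature check (static): payload matches a known exploit pattern?
    for name, pattern in SIGNATURES:
        if pattern.search(packet["payload"]):
            return "block"  # prevent the exploit from reaching the end node
    return "forward"

print(inspect({"src_ip": "192.0.2.10", "payload": b"' OR 1=1 --"}))  # block
```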

Bigger picture

All the events get logged into HP ArcSight to create the bigger picture. Are you seeing these types of events occurring in other places? That gives you the bigger-picture correlation.

Network-based anomaly detection is the ability to detect something occurring in the network based on an IP address or on a flow. Taking advantage of reputation, we can insert the IP addresses, detected from flow data, that are doing something anomalous.

It could be that they’re beaconing out or spreading a worm. If they look like they’re causing concern, with a high degree of accuracy, then we can put that into reputation and take advantage of pushing blocks out.

So reputation is a self-deploying feature. You insert an IP address into it and it can self-update. We haven’t taken the automated step yet, although that’s in the plan. Today, it’s a manual process for us, but ideally, through application programming interfaces (APIs), we can automate all that. It works in a lab, but we haven’t deployed it on our production that way.
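
As a rough sketch of what that automation might look like once the APIs are wired up, the snippet below flags hosts whose outbound flows show suspiciously regular beaconing and pushes them to a reputation feed. The REST endpoint, token, and flow format are invented for illustration; this is not the actual TippingPoint management API.

```python
# Hypothetical sketch of closing the loop from flow-based anomaly
# detection to a reputation feed. The REST endpoint and flow schema
# are invented for illustration.

import statistics
import requests  # third-party: pip install requests

REPUTATION_API = "https://ips-mgmt.example.com/api/reputation"  # hypothetical
API_TOKEN = "..."  # placeholder credential

def looks_like_beaconing(timestamps, jitter_threshold=2.0):
    """Regularly spaced outbound connections suggest beaconing malware."""
    if len(timestamps) < 5:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    # Low variance in inter-connection gaps = suspiciously regular.
    return statistics.pstdev(gaps) < jitter_threshold

def report(ip):
    """Insert the IP into the reputation feed; devices then self-update."""
    resp = requests.post(
        REPUTATION_API,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"ip": ip, "tag": "flow-anomaly", "action": "block"},
        timeout=10,
    )
    resp.raise_for_status()

# flows: {source_ip: [epoch seconds of outbound connections]}
flows = {"10.0.0.42": [0, 60, 120, 180, 240, 300]}
for ip, times in flows.items():
    if looks_like_beaconing(times):
        report(ip)
```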

Gardner: Clearly HP is a good example of a large enterprise, one of the largest in the world, with global presence, with a lot of technology, a lot of intellectual property, and therefore a lot to protect. Let’s look at how you actually approached protecting the HP network.

What’s the vision, if you will, for HP's Global Cyber Security, when it comes to these newer approaches? Do you have an overarching vision that then you can implement? How do we begin to think about chunking out the problem in order to then solve it effectively?

O’Shea: You must be able to detect, block, and prevent as an overarching strategy. We also wanted to take advantage of inserting a giant filter inline on all data that’s going into the data center. We wanted to prevent mal traffic, mal-formed traffic, malware -- any traffic with the "mal" intent of reaching the data center.

So why make blocking an application decision and rely on host-level defenses, when we have the opportunity to do it at the network level? It made the network more hygienic, blocking traffic that you don’t want to see.

We wrapped it around the data center, so all traffic going into our data centers goes through that type of filter. [Learn more about intrusion prevention systems.]

Key to deployment

Because this is all an inline technology, and you are going inline in the network, you’re changing flows. It could be mal traffic, yet maybe a researcher is trying to do something. So we need that level of partnership with the network team. They have to see it. They have to understand what it is. It has to be manageable.

When we deployed it, we looked at what could go wrong and we designed around that. What could go wrong? A device could fail. So we have an N+1 installation. If a single device fails, we’re not down, and we’re not blocking traffic. We have to handle the capacity of our network, which is growing as we grow, so it has to be built for now and for the future. It has to be manageable.

It has to be able to be understood by “first responders,” the people that get called first. Everybody blames the network first, and then it's the application afterward. So the network team gets pulled in on many calls, at all types of hours, and they have to be able to get that view.

It was key to get them broad-based training, so that the technology knowledge was there, and to get a process integrated into how you’re going to handle updates and how you’re going to add beyond what TippingPoint recommends. TippingPoint makes recommendations on profiles and new settings. If we take those, do we want to add other things? So we have to have a global cyber-security view and global cyber-security input, and have that all vetted.

The application team had to be on board and aware, so that everybody understands. Finally, because we were going into a very large installed network that was handling a lot of different types of traffic, we brought in TippingPoint Professional Services and had everything looked at, re-looked at, and signed off on, so that what we’re doing is a best practice. We looked at it from multiple angles and took a lot of things into consideration.

Gardner: Is there something about TippingPoint and ArcSight that provides data, views, and analytics in such a way that it's easier for these groups to work together in ways that they hadn’t before?

O’Shea: One of the nice things about the way TippingPoint events work is that you have a choice. You can send them from the individual units themselves or you can proxy them from the management console. Again, the ability to manage was critical to us, so we chose to do it from the console.

We proxy the events. That gives us the ability to have multiple ArcSight instances and also to evolve. ArcSight evolves. When they’re changing, evolving, and growing, and they want to bring up a new collector, we’re able to send very rapidly to the new collector.

ArcSight pulls in firewall logs. You can get proxy events and events from antivirus. You can pull in that whole view and get a bigger picture at the ArcSight console. The TippingPoint view is of what’s happening from the inline TippingPoint and what's traversing it. Then, the ArcSight view adds a lot of depth to that.

Very flexible

So it gives a very broad picture, but from the TippingPoint side, we’re very flexible and able to add and stay in step with ArcSight's growth quickly. It works in concert. That includes sending events on different ports. You’re not restricted to one port. If you want to create a secure port or a unique port for your events going to ArcSight, you have that ability.
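
For readers unfamiliar with the plumbing, ArcSight collectors commonly ingest events as Common Event Format (CEF) text over syslog. The snippet below is a rough illustration of shipping one CEF event to a collector on a non-standard TCP port; the host, port, vendor fields, and framing are made up for the example.

```python
# Illustrative only: emit one CEF-formatted event to an ArcSight-style
# syslog collector on a custom TCP port. Host, port, and field values
# are made up for the example.

import socket
from datetime import datetime, timezone

COLLECTOR_HOST = "arcsight-collector.example.com"  # hypothetical
COLLECTOR_PORT = 6514  # any agreed-upon port; not restricted to 514

def cef_event(signature_id, name, severity, extensions):
    # CEF header: CEF:Version|Vendor|Product|DeviceVersion|SignatureID|Name|Severity|
    ext = " ".join(f"{k}={v}" for k, v in extensions.items())
    return f"CEF:0|ExampleVendor|ExampleIPS|1.0|{signature_id}|{name}|{severity}|{ext}"

def send(event):
    # Simple syslog-style framing: timestamp + hostname + event, newline-terminated.
    stamp = datetime.now(timezone.utc).strftime("%b %d %H:%M:%S")
    line = f"{stamp} ips01 {event}\n"
    with socket.create_connection((COLLECTOR_HOST, COLLECTOR_PORT), timeout=5) as s:
        s.sendall(line.encode("utf-8"))

send(cef_event("1001", "Blocked malware callback", 8,
               {"src": "10.0.0.42", "dst": "203.0.113.77", "act": "block"}))
```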

After the deployment we’ve had some DoS attacks against us, and they have been blocked and deflected. We’ve had some other events that we have been able to block and defend rapidly. [Learn more about intrusion prevention systems.]
If you think back historically to how we dealt with them, those were Whac-A-Mole-type defenses. Something happened, and you reacted. So I guess the metric would be that we’re not as reactionary, but do we have hard metrics to prove that? I don’t have those.

How much volume?

Gardner: We can appreciate the scale of what the systems are capable of. Do we have a number of events detected or that sort of thing, blocks per month, any sense of how much volume we can handle?

O’Shea: We took a month’s sample. I’m trying to recall the exact number, but it was 100 million events in one month that were detected and flagged as mal events and automatically blocked. That includes Internet-facing events, which is why the volume is so high.

The Professional Services teams have been able to deploy in a very large network and have worked with the requirements that a large enterprise has. That includes standard deployment, how things are connected and what the drawings are going to look like, as well as how you’re going to cable it up.

A large enterprise has different standards than a small business would have, and meeting them was part of what Professional Services gave back in being able to deploy it in a large enterprise. It has been a good relationship, and there is always opportunity for improvement, but it certainly has helped.

Current trends

Gardner: Jim, looking to the future a little bit, we know that there’s going to be more and more cloud and hybrid-cloud types of activities. We’re certainly seeing already a huge uptick in mobile device and tablet use on corporate networks. This is also part of the bring-your-own-device (BYOD) trend that we’re seeing.

So should we expect a higher degree of risk, more variables, and more complication, and what does that portend for the use of these types of technologies going forward? How much gain do you get by getting on the IPS bandwagon sooner rather than later?

O’Shea: BYOD is a new twist on things, and it means something different to everybody, because it's an acronym term. But let's take the view of somebody bringing in a product they've bought.

Somebody is always going to get a new device. They’re going to bring it in, try it out, and connect it to the corporate network, if they can. And because those devices are coming from a different environment and aren’t necessarily built to corporate standards, they may bring unwanted guests into the network, in terms of malware.

Now, we have the opportunity, because we are inline, to detect and block that right away. Because we are an integrated ecosystem, they will show up as anomalous events. ArcSight and our Cyber Defense Center will be able to see those events. So you get a bigger picture.

Those events can then be translated into removing that node from the network. We have the opportunity to do that. BYOD not only means bring your own device; it also brings things you don’t know are going to happen. The only way to block that is prevention and anomaly detection, and then trying to bring it all together in a bigger picture.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.
Sponsor: HP. Learn more about intrusion prevention systems.


Thursday, December 12, 2013

Healthcare turns to big data analytics platforms to gain insight and awareness for improved patient outcomes

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

Analytics platforms and new healthcare-specific solutions together are offering far greater insight and intelligence into how healthcare providers are managing patient care, cost, and outcomes.

Based on a number of offerings announced this week at the HP Discover Conference in Barcelona, an ecosystem of solutions is emerging to give hospitals and care providers new data-driven advantages as they seek to transform their organizations.

To learn how, BriefingsDirect sat down with Patrick Kelly, Senior Practice Manager at the Avnet Services Healthcare Practice, and Paul Muller, Chief Software Evangelist at HP, to examine the impact that big-data technologies and solutions are having on the highly dynamic healthcare industry. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: How closely are you seeing an intersection between big data and the need for analytics in healthcare?

Muller: It's undoubtedly a global trend, Dana. One statistic that sticks in my mind is that in 2012 there was an estimated 500 petabytes of digital healthcare data across the globe. That’s expected to reach 25,000 petabytes by the year 2020. So that’s a 50-times increase in the amount of digital healthcare data that we expect to be retaining.

The reason for that is simply that having better data helps us drive better healthcare outcomes. And we can do it in a number of different ways. We move to what we call more evidence-based medicine, rather than subjecting people to a battery of tests, or following a script, if you like.

The tests or activities that are undertaken with each individual are more clearly tailored, based on the symptoms that they’re presenting with, and data helps us make some of those decisions.

Basic medical research

The other element of it is that we’re now starting to bring in more people and engage more people in basic medical research. For example, in the US, the Veterans Administration has a voluntary program that’s using blood samples and health information from military veterans. Over 150,000 have enrolled to help give us a better understanding of healthcare.

We’ve had similar programs in Iceland and other countries where we were using long-term healthcare and statistical data from the population to help us spot and address healthcare challenges before they become real problems.

The other, of course, is how we better manage healthcare data. A lot of our listeners, I’m sure, live in countries where electronic healthcare records (EHR) are a hot topic. Either there is a project under way or you may already have them, but that whole process of establishing them and making sure that those records are interchangeable is absolutely critical.

Then, of course, we have the opportunity of utilizing publicly available data. We’ve all heard of Google being utilized to identify outbreaks of flu in various countries based on the frequency with which people search for flu symptoms.

So, there’s definitely a huge number of opportunities coming from data. The challenge that we’ll find so frequently is that when we talk about big data, it's critical not just to talk about the size of the data we collect, but the variety of data. You’ve got things like structured EHR. You have unstructured clinical notes. If you’ve ever seen a doctor’s scribble, you know what I’m talking about.

You have medical imaging data, genetic data, and epidemiological data. There’s a huge array of data that you need to bring together, in addition to just thinking about its size. Of course, overarching all of this are the regulatory and privacy issues that we have to deal with. It's a rich and fascinating topic.

Gardner: Patrick Kelly, tell us a little bit about what you see as the driving need technically to get a handle on this vast ocean of healthcare data and the huge potential for making good use of it.

Kelly: It really is a problem of how to deal with a deluge of data. Also, there’s a great change being undertaken because of the Affordable Care Act (ACA) legislation, and that’s impacting not only the business model, but also driving the need to switch to electronic medical records.

Capturing data

From an EHR perspective to date, IT is focused on capturing that data. They take what’s on a medical record and transpose it into an electronic format. Unfortunately, where we’ve fallen short in helping the business is in taking the data that’s captured and making it useful and meaningful through analytics -- helping the business gain visibility and be able to pivot and change as pressure to change the business model is brought to bear on the industry.

Gardner: For those of our audience who are not familiar with Avnet, please describe your organization. You’ve been involved with a number of different activities, but healthcare seems to be pretty prominent in the group now. [Learn more about Avnet's Healthcare Analytics Practice.]

Kelly: Avnet has made a pretty significant investment over the last 24 months to bolster the services side of the business. We’ve brought around 2,000 new personnel on board to focus on everything in the ecosystem, from -- as we’re talking about today -- healthcare, all the way to hardware, educational services, and supporting partners like HP. We happen to be HP’s largest enterprise distributor. We also have a number of critical channel partners.

In the last eight months, we came together and brought on board a number of individuals who have deep expertise in healthcare and security. They’re focused on building out a healthcare practice that not only provides services, but is also developing a healthcare analytics platform.

Gardner: Paul Muller, you can’t buy healthcare analytics in a box. This is really a team sport, an ecosystem approach. Tell me a little bit about what Avnet is and how important its role is with HP -- and, of course, there are going to be more players as well.

Muller: The listeners would have heard from the HP Discover announcements over the last couple of days that Avnet and HP have come together around what we call the HAVEn platform. HAVEn, as we might have talked about previously on the show, stands for Hadoop, Autonomy, Vertica, Enterprise Security, with the “n” being any number of apps. [Learn more about the HAVEn platform.]

The "n" or any numbers of apps is really where we work together with our partners to utilize the platform, to build better big-data enabled applications. That’s really the critical capability our partners have.

What Avnet brings to the table is the understanding of the HAVEn technology, combined with deep expertise in the area of healthcare and analytics. Combining that, we've created this fantastic new capability that we’re here to talk about now.

Gardner: What are the top problems that need to be solved in order to get healthcare information and analytics to the right people in a speedy fashion?

Kelly: If we pull back the covers and look at some of the problems or challenges around advancing analytics and modernization into healthcare, it’s really in a couple of areas. One of them is that it's a pretty big cultural change.

Significant load

Right now, we have an overtaxed IT department that’s struggling to bring electronic medical records online and to also deal with a lot of different compliance things around ICD-10 and still meet meaningful use. So, that’s a pretty significant load on those guys.

Now, they’re being asked to look at delivering information to the business side of the world. And right now, there's not a good understanding, from an enterprise-wide view, of how to use analytics well in healthcare.

So part of the challenge is governance and strategy, and looking at an enterprise-wide road map for how you get there. From a technology perspective, there’s a whole problem around industry readiness. There are a lot of legacy systems floating around, ranging from 30-year-old mainframes up to more modern systems. So there’s a great deal of work that has to go into modernizing the systems and then tying them together. That all leads to problems with data logistics and fragmentation, and really just equals cost and complexity.

The traditional approaches that other industries have followed -- enterprise data warehouses and traditional extract, transform, load (ETL) -- are just too costly, too slow, and too difficult for healthcare systems to leverage. Finally, there are a lot of challenges in process and workflow.

Muller: The impact on patient outcomes is pretty dramatic. One statistic that sticks in my head is that hospitalizations in the U.S. are estimated to account for about 30 percent of the trillions of dollars in annual cost of healthcare, with around 20 percent of all hospital admissions occurring within 30 days of a previous discharge.

In other words, we’re potentially letting people go without having completely resolved their issues. Better utilizing big-data technology can have a very real impact, for example, on the healthcare outcomes of your loved ones. Any other thoughts around that, Patrick?

Kelly: Paul, you hit a really critical note around re-admissions, something that, as you mentioned, has a real impact on the outcomes of patients. It's also a cost driver. Reimbursement rates are being reduced because of these failures. Hospitals need to address the shortfalls, either in education or in follow-up care, that end up landing patients back in the ER.

You’re dead-on with re-admissions, and from a big-data perspective, there are two stages to look at. There’s the retrospective look, which is a challenge even though it's not a traditional big-data challenge. There’s still a lot of data and a lot of elements to look into just to identify patients who have been readmitted and track those.

But the more exciting and interesting part of this is the predictive: looking forward and seeing the patient’s conditions, their co-morbidities, how sick they are, what kind of treatment they received, what kind of education they received and the follow-up care, as well as how they behave in the outside world. Then it’s bringing all that together and building a model to determine whether this person is at risk of being readmitted. If so, how do we target care to them to help reduce that risk?
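
As a toy illustration of that kind of predictive model -- not Avnet's actual platform -- the sketch below trains a simple classifier on synthetic discharge records to score 30-day readmission risk. A real model would use far richer clinical features and require rigorous clinical validation.

```python
# Toy sketch of a readmission-risk model: synthetic features and labels,
# not Avnet's actual platform. Requires scikit-learn.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Illustrative features per discharged patient:
# age, comorbidity count, length of stay (days), follow-up visit booked (0/1)
X = np.column_stack([
    rng.integers(20, 95, n),   # age
    rng.integers(0, 6, n),     # comorbidity count
    rng.integers(1, 15, n),    # length of stay
    rng.integers(0, 2, n),     # follow-up scheduled
])
# Synthetic label: readmitted within 30 days (for demonstration only).
risk = 0.03 * X[:, 1] + 0.02 * X[:, 2] - 0.15 * X[:, 3] + 0.001 * X[:, 0]
y = (risk + rng.normal(0, 0.05, n) > 0.15).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score a hypothetical patient: 78 years old, 4 comorbidities,
# 9-day stay, no follow-up visit scheduled.
patient = np.array([[78, 4, 9, 0]])
print(f"30-day readmission risk: {model.predict_proba(patient)[0, 1]:.0%}")
```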

Gardner: We certainly have some technology issues to resolve and some cultural shifts to make, but what are the goals in the medical field, in the provider organizations themselves? I’m thinking of such things as cutting cost, but more than that, things like treatments and experience, and even gaining a holistic view of a patient, regardless of where they are in the spectrum.

Waste in the system

Muller: You kind of hit it there, Dana, with the cutting of cost. I was reading a report today, and it was kind of shocking. There is a tremendous amount of waste in the system, as we know. It said that of the 17.6 percent of the nation's GDP that the US focuses on healthcare, some $600 billion is potentially being misspent. A lot of that is due to unnecessary procedures and tests, as well as operational inefficiency.

From a provider perspective, it's getting a handle on those unnecessary procedures. I’ll give you an example. There’s been an increase in the last decade in elective deliveries, where someone comes in and says that they want to have an early delivery for whatever reason. The impact, unfortunately, is additional time in the neonatal intensive care unit (NICU) for the baby.

It drives up a lot of cost and is dangerous for both the mother and child. So getting a handle on where the waste is within their four walls -- whether it’s operational inefficiency, unnecessary procedures, or tests -- and being able to apply Lean Six Sigma and similar processes is necessary to help reduce that.

Then, you mentioned treatments and how to improve outcomes. Another shocking statistic is that medical errors are the third leading cause of death in the US. In addition to that, employers end up paying almost $40,000 every time someone receives a surgical site infection.

Those medical errors can be everything from a sponge left in a patient, to a mis-dosed medication, to an infection. They all lead to a lot of unnecessary deaths, as well as driving up costs, not only for the hospital but for the insurance payers. These are areas providers will get visibility into, to understand where variation is happening and eliminate it.

Finally, a new aspect is customer experience. Reimbursements are going to be tied to -- and this is new for the medical field -- how I as a patient enjoy, for lack of a better term, my experience at the hospital or with my provider, and how engaged I have become in my own care. Those are critical measures that analytics are going to help provide.

Gardner: Now that we have a sense of this massive challenge, what are organizations like Avnet and providers like HP with HAVEn doing that will help us start to get a handle on this?

Kelly: As difficult as it is to reduce complexity in any of these analytic engagements, it's very costly and time consuming to integrate any new system into a hospital. One of the key things is to be able to reduce that time to value from a system that you introduce into the hospital and use to target very specific analytical challenges.

From Avnet’s perspective, we’re bringing a healthcare platform that we’re developing around the HAVEn stack, leveraging some of those great powerful technologies like Vertica and Hadoop, and using those to try to simplify the integration task at the hospitals.

Standardized inputs

We’re building inputs from HL7, which is a common data format within the hospital, and trying to build standardized inputs from other clinical systems, in order to reduce the heavy lift of integrating a new analytics package into the environment.
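
For context, HL7 v2 messages are pipe-delimited text, with each line a segment (MSH, PID, and so on). The sketch below shows, in a deliberately simplified form, what turning such a message into an analytics-ready record involves; the sample message is synthetic, and real feeds need an integration engine that handles escaping, repetition, and many more segments.

```python
# Simplified HL7 v2 parsing sketch: real feeds have escaping rules,
# segment repetition, and many more segments; an integration engine
# would be used in practice. Sample message is synthetic.

SAMPLE_HL7 = (
    "MSH|^~\\&|EMR|GENHOSP|ANALYTICS|AVNET|20131212080000||ADT^A01|MSG0001|P|2.3\r"
    "PID|1||123456^^^GENHOSP||DOE^JANE||19520301|F\r"
    "PV1|1|I|MED^0301^01"
)

def parse_hl7(message):
    """Split an HL7 v2 message into {segment_id: [fields]}."""
    segments = {}
    for raw in message.strip().split("\r"):
        fields = raw.split("|")
        segments[fields[0]] = fields
    return segments

seg = parse_hl7(SAMPLE_HL7)
pid = seg["PID"]
# PID-3 is the patient identifier, PID-5 the name, PID-7 the date of
# birth, PID-8 the sex (the field number matches the list index here).
record = {
    "patient_id": pid[3].split("^")[0],
    "name": pid[5].replace("^", " "),
    "dob": pid[7],
    "sex": pid[8],
}
print(record)  # e.g. {'patient_id': '123456', 'name': 'DOE JANE', ...}
```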

In addition, we’re looking to build a unified view of the patient’s data. We want to extend that beyond the walls of the hospital and build a unified platform. The idea is to put a number of different tools and modular analytics on top of that to get some very quick wins -- targeted things like we've already talked about, from readmissions all the way to blocking-and-tackling operational work. It will be everything from patient flow to understanding capacity management.

It will bring a platform that accelerates integration and analytics delivery in the organization. In addition, we’re going to wrap that in a number of services, ranging from early assessment to road map and strategy, to help with business integration, all the way through continuing to build and support the product with the health system.

The goal is to accelerate delivery around the analytics, get the tools that they need to get visibility into the business, and empower the providers and give them a complete view of the patient.

About visibility

Kelly: Any first step with this is about visibility. It opens eyes around problematic processes in the organization, which can be very basic things, from scheduling in the operating room and utilization of that time, to patients' length of stay.

A very quick win is to understand why your patients seem to be continually having problems and staying in the bed longer than they should be. It’s being able, while they're filling those beds, to redirect case workers, medical care, and everything else necessary to help them get out of the hospital sooner and improve their outcomes.

A lot of times, we've seen a look of surprise when we've shown that here is a patient who has been in for 10 days for a procedure that should have been only a two-day stay. That’s the first step, though a very basic one: really giving visibility.

As we start attacking some of these problems around hospital-based infections -- helping the provider make sure they are covering all their bases, following best practices, and eliminating the variation between individual physicians and care providers -- you start seeing some real, tangible improvements in outcomes and in saving people's lives.

When you see that for any population -- be it stroke or, as we talked about earlier, re-admissions with heart failure -- and are able to make sure those patients avoid things like pneumonia, you bring visibility.
A challenge for a hospital that has acquired a number of physicians is how to get visibility into those physician practices.

Then, predictive models and optimizing how the providers and caregivers are working are really key. There are some quick wins. Traditionally, we built these master repositories that we then built reports on top of -- a year and a half to deliver any value. We’re looking instead to focus on very specific use cases and to tackle them very quickly, in a 90- to 120-day period.

Massive opportunity

Muller: The opportunity for HP and our partners is to help put the right data at the fingertips of the people with the potential to generate life-saving or lifestyle-improving insights. That could be developing a new drug, improving the inpatient experience, or helping us identify longer-term issues like genetic or other sorts of congenital diseases.

From our perspective, it’s about providing the underlying platform technology, HAVEn, as the big-data platform. Avnet, part of the great partner ecosystem that we've developed, is a wonderful example of an organization that’s taken the powerful platform and very quickly turned it into something that can help not only save money but, as we just talked about, save lives, which I think is fantastic.

Gardner: We know that mobile devices are becoming more and more common, not only in patient environments, but in the hospitals and the care-provider organizations. We know the cloud and hybrid cloud services are becoming available and can distribute this data and integrate it across so many more types of processes.

It seems to me that you not only get a benefit from getting to a big-data analysis capability now, but it puts you in a position to be ready when we have more types of data -- more speed, more end points, and, therefore, more requirements for what your infrastructure, whether on premises or in a cloud, can do. Tell me a little bit about what you think the Avnet and HP solution does to set you up for these future trends.

Kelly: Technology today is just not where it needs to be, especially in healthcare. An EKG spits out 1,000 data points per second. Without the right technology, there is no way you can actually deal with that.

If we look to a future where providers do less manual monitoring -- fewer vitals collections, fewer physicals -- and all of that data is coming from your mobile device and from intelligent machines, there really needs to be an infrastructure in place to deal with it.

I spent a lot of time working with Vertica even before Avnet. Vertica, Hadoop, and leveraging Autonomy in the area of unstructured data are the technologies that are going to allow the scalability and growth necessary to leverage the data, make it an asset rather than a challenge, and allow us to transform healthcare.

The key to that is unlocking this tremendous trove of data. In this industry, as you've said, it’s literally life and death, versus purely a financial incentive.

Targeting big data

Muller: This is an important point that we can’t lose sight of as well. As I said when you and I hosted the previous show, big data is also a big target.

One of the things that every healthcare professional, regulator, and member of the public needs to be mindful of is the large accumulation of sensitive personally identifiable information (PII).

It's not just a governance issue; it's a question of morals and making sure that we are doing the right thing by the people who are entrusting us not just with their physical care, but with how they present in society. Medical information can be sensitive when available not just to criminals, but even to prospective employers, family members, and others.

The other thing we need to be mindful of is that we've got to not just collect the big data; we've got to secure it. We've got to be really mindful of who’s accessing what, when they are accessing it, whether they are accessing it appropriately, and whether they have done something like taking a copy or moving it elsewhere that could indicate malicious intent.

It's also critical we think about big data in the context of health from a 360-degree perspective.

Kelly: That’s a great point. And to step back a little bit on that, one of the things that brings me a little comfort is that there are some very clear guidelines, in the way of HIPAA, around how this data is managed, and we look at it from the standpoint of baking security into everything, from encryption to auditability.

But it’s also about training the staff working in these environments and making sure that all of that training is put in place to ensure the safety of that data. One of the things that always leaves me scratching my head is that I can go down the street to the grocery store and buy a bunch of stuff, and by the time I get to the register, they seem to know more about me than the hospital does when I go to the hospital.

That’s one of the shocking things that makes you say you can’t wait until big data gets here. And I take a little comfort, too, because there are at least laws in place to try to corral that data and make sure everyone is using it correctly.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Wednesday, December 11, 2013

In remaking itself, HP delivers the IT means for struggling enterprises to remake themselves

BARCELONA — HP, taking a new leap in its marathon to remake itself, has further assembled, refined and delivered the IT infrastructure, big data and cloud means by which other large enterprises can effectively remake themselves.

This week at the HP Discover 2013 conference, HP -- despite the gulf of 70 years, but originating in the same Silicon Valley byways -- has found a kindred spirit in … Facebook. The social media juggernaut is often pointed to with both envy and amazement at its new-found and secretive feats of IT data center scale, reach, efficiency and adaptability. It’s a mantle of technological respect that HP itself long held.

So for Facebook’s CIO, Tim Campos, to get on stage in Europe and declare that "A partner like HP Vertica thinks like we do” and is a “key part” of Facebook’s big data capabilities is one of the best endorsements, err … “likes,” that any modern IT infrastructure vendor could hope for. With Facebook’s data growing by 500 terabytes a day, this is quite a coup for HP’s analytics platform, which is part of its HAVEn initiative.

I fully expected HP to shout it all day from the verdant ancient hilltops with echoes through the crooked 12th century streets and across the packed soccer stadium in this beautiful Mediterranean port city: “Facebook runs on HP Vertica."

However odd it is that the very newest of California IT gold rush denizens rubs off its glow on the very oldest, HP has nonetheless quickly made itself a key supplier of some of the most important technologies of the present corporate era: cloud computing and big data processing.

And while the punditry often fixates on the vital signs -- or lack thereof -- in the Windows PC business, HP is rightfully and successfully chasing the bigger long-term vendor opportunity: the all-purpose software-defined yet hardware-optimized data center.

News you can use

Announcements here at Discover show how HP is advancing these core technologies that will prove indispensable in helping enterprises and service providers alike to master their data centers, exploit big data, expand mobile, and prepare for cloud adoption.

Among the innovations and updates announced at the conference were HP ConvergedSystem, the next-generation CloudSystem, new Cloud Service Automation, a Hybrid Cloud Management platform, and Propel for acquiring and using new applications. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

HP Vertica also scored a major goal with the announcement of an innovative collaboration with Conservation International (CI) -- a leading non-governmental organization dedicated to protecting nature for people -- to dramatically improve the accuracy and speed of analyzing data collected in environmental science.

The initiative, called HP Earth Insights, uses the Vertica platform to deliver near-real-time analytics and is already yielding new information that indicates a decline in a significant percentage of species monitored. The project serves as an early warning system for conservation efforts, enabling proactive responses to environmental threats.

HP Earth Insights applies big data technology to the ecological research being conducted across 16 tropical forests around the world by CI, the Smithsonian Institution, and the Wildlife Conservation Society, as part of the Tropical Ecology Assessment and Monitoring (TEAM) Network. Data and analysis from HP Earth Insights will be shared with protected area managers to develop policies regarding hunting and other causes of species loss in these ecosystems.

Converged system

The new HP ConvergedSystem products deliver a total systems experience that simplifies IT, enabling clients to go from order to operations in as few as 20 days. With quick deployment, intuitive management, and system-level support, IT organizations can shift their focus from systems integration to delivering the applications that power their business. 

HP ConvergedSystem, a new product line engineered from the ground up, was built using HP Converged Infrastructure’s servers, storage, networking, software and services.

HP ConvergedSystem products also come with a unified support model from HP Proactive Care, providing clients with a single point of accountability for all system components, including partner software. HP also offers consulting capabilities to plan, design and integrate HP ConvergedSystem offerings into broader cloud, big-data, and virtualization solutions, while mapping physical and virtual workloads onto clients’ new HP ConvergedSystem.

Hybrid cloud

As enterprises embrace new delivery models, one of the biggest decisions chief information officers (CIOs) need to make on their cloud journey is determining where applications or workloads should live -- on traditional IT or in the cloud. Often, applications will continue to live across multiple environments, and hybrid delivery becomes an imperative. Solutions announced this week at HP Discover in Barcelona build on this strategy, including the introduction of the next-generation HP CloudSystem, HP’s flagship offering for building and managing private clouds.

This includes a new consumer-inspired user interface, simplified management tools, and an improved deployment process that enable customers to set up and deploy a complete private cloud environment in just hours, compared to weeks for other private cloud solutions. As the foundation of a hybrid cloud solution, HP CloudSystem bursts to multiple public cloud platforms, including three new ones: Microsoft Windows Azure, and platforms from Arsys, a European-based cloud computing provider, and SFR, a French telecommunications company.

HP CloudSystem integrates OpenStack-based HP Cloud OS technology,  providing customers a hardened, tested OpenStack distribution that is easy to install and manage. The next-generation CloudSystem also incorporates the new HP Hybrid Cloud Management Platform, a comprehensive management solution that enables enterprises and service providers to deliver secure cloud services across public or private clouds, as well as traditional IT.

The Hybrid Cloud Management platform integrates HP Cloud Service Automation (CSA) version 4.0 and includes native support for both HP CloudOS with OpenStack and the open-source standard TOSCA (Topology and Orchestration Specification for Cloud Applications), enabling easier application portability and management of hybrid and heterogeneous IT environments.

HP is also adding a new hybrid-design capability to its global professional services capabilities. HP Hybrid Cloud Design Professional Services offer a highly modular design approach to help organizations architect a cloud solution that aligns with their technical, organizational, and business needs. The new consulting services help customers evolve their cloud strategy and vision into a solution based on delivering business outcomes and ready for implementation.

Also expanding is the Virtual Private Cloud (VPC) offering, which helps customers take advantage of the economics of a public cloud with the security and control of a private cloud solution. The new HP VPC Portfolio provides a range of VPC solutions, from standardized self-service to a customized, fully managed service model.

All HP VPC solutions deliver the security and control of a private cloud in a flexible, cost-effective, multitenant cloud environment. The latest version of HP Managed VPC now allows customers to choose among virtual or physical configurations, multiple tiers of storage, hypervisors, and network connectivity types. 

As part of its overall hybrid delivery, HP offers HP Flexible Capacity (FC), an on-premises solution of enterprise-grade infrastructure with cloud economics, including pay-for-use and instant scalability of server, storage, networking and software capacity. This allows for treating cloud costs as operating expenses, rather than capital expenses. It also supports existing customer third-party equipment for a true heterogeneous environment.  HP CloudSystem also can burst to HP FC infrastructure for customers needing the on-demand capacity without the data ever leaving the premises.

Self-service capability

HP is also offering HP Propel, a cloud-based service solution that enables IT organizations to deliver self-service capabilities to end users, with an eye toward improved service delivery, quicker time to value, and lower administration costs.

Available on both desktop and mobile platforms, the free version of HP Propel includes a standard service catalog; the HP Propel Knowledge Management solution, which accelerates user self-service with immediate access to needed information; and IT news feeds delivered via RSS.

The premium version extends those capabilities with an enhanced catalog; advanced authentication methods, such as single sign-on; and access to the extended Propel Knowledge Management solution. Clients also can integrate their on-premises service management solutions through the HP-hosted Propel Service Exchange.

Propel is built on an open and extensible service exchange, with the ability to add catalogs and services as clients’ demands evolve. To further simplify configuration, administration and maintenance of the solution, HP and its worldwide partners provide comprehensive and strategic assessment and transformation services.

Propel will be available in the Americas and Europe, the Middle East and Africa in January and in Asia Pacific and Japan in March. Additional information is available at www.hp.com/go/propel.

HP further announced Converged Storage innovations that restore business productivity at record speed, reduce All Flash Array costs significantly while increasing performance and quality-of-service (QoS) capabilities, and expand agility by enabling cloud storage application mobility.

Additions to the HP Converged Storage portfolio include the next generation of HP StoreOnce Backup and HP StoreAll Archive systems to reduce risk as well as enhancements to HP 3PAR StoreServ Storage with cost-reduced flash technology, performance improvements, and software QoS enhancements to meet the needs of IT-as-a-service environments.

In other news, HP announced a new lineup of servers designed to save space and reduce cost, and, more importantly, to cut energy usage. The Moonshot servers are small servers that can be packed into dense arrays to aid with heavy-workload computing.

The servers are designed and tailored for specific workloads to deliver optimum performance. These low-power servers share management, power, cooling, networking, and storage. This architecture is key to achieving 8x efficiency at scale and enabling a 3x faster innovation cycle. The power-saving design addresses the problem caused by power consumption in cloud operations.


Monday, December 9, 2013

Enterprise mobile management demands a rethinking of work, play and productivity, says Dell's Tom Kendra

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Dell Software.

The next BriefingsDirect innovator interview targets how the recent and rapid evolution of mobile and client management requirements has caused considerable complexity and confusion.

We’ll examine how incomplete solutions and a lack of a clear pan-client strategy have hampered the move to broader mobile support at enterprises and mid-market companies alike. This state of muddled direction has put IT in a bind, while frustrating users who are eager to gain greater productivity and flexibility in their work habits, and device choice.

To share his insights on how to better prepare for a mobile-enablement future that quickly complements other IT imperatives such as cloud, big data, and even more efficient data centers, we’re joined by Tom Kendra, Vice President and General Manager, Systems Management at Dell Software. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Dell is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Kendra: There is an enormous amount of conversation now in this mobility area and it’s moving very, very rapidly. This is an evolving space. There are a lot of moving parts, and hopefully, in the next few minutes, we’ll be able to dive into some of those.

Gardner: People have been dealing with a fast-moving client environment for decades. Things have always changed rapidly with the client. We went through the Web transition and client-server. We’ve seen all kinds of different ways of getting apps to devices. What’s different about the mobile and BYOD challenges today?

Speed and agility

Kendra: Our industry is characterized by speed and agility. Right now, the big drivers causing the acceleration can be put into three categories: the amount and type of data that’s available, all the different ways and devices for accessing this data, as well as the evolving preferences and policies for dictating who, what, and how data is shared.

For example, there are training videos, and charts and graphs versus just text, along with the ability to combine these assets and deliver them in a way that allows a front-line salesperson, a service desk staffer or anyone else in the corporate ecosystem to satisfy customer requests much more efficiently and rapidly.

The second area is the number of devices we need to support. You touched on this earlier. In yesterday’s world -- and yesterday was a very short time ago -- mobility was all around the PC. Then, it was around a corporate-issued device, most likely a business phone. Now, all of a sudden, there are many, many, many more devices that corporations are issuing as well as devices people are bringing into their work environment at a rapid pace.

We’ve moved from laptops to smartphones that were corporate-issued to tablets. Soon, we’ll get more and more wearables in the environment and machine-to-machine communications will become more prevalent. All of these essentially create unprecedented opportunities, yet also complicate the problem.

The third area that’s driving change at a much higher velocity is the ever-evolving attitude about work and work-life balance. And, along with that ... privacy. Employees want to use what they’re comfortable using at work and they want to make sure their information and privacy rights are understood and protected. These three items are really driving the acceleration.

Gardner: And the response to this complexity so far, Tom, has been some suite, some mobile device management (MDM) approaches, trying to have multiple paths to these devices and supporting multiple types of infrastructure behind that. Why have these not yet reached a point where enterprises are comfortable? Why have we not yet solved the problem of how to do this well?

Kendra: When you think about all the different requirements, you realize there are many ways to achieve the objectives. You might postulate that, in certain industries, there are regulatory requirements that somewhat dictate a solution. So a lot of organizations in those industries move down one path. In industries where you don’t have quite the same regulatory environment, you might have more flexibility to choose yet another path.

The range of available options is wide, and many organizations have experimented with numerous approaches. Now, we’ve gotten to the point where we have the unique opportunity -- today and over the next couple of years -- to think about how we consolidate these approaches into a more integrated, holistic mobility solution that elevates data security and mobile workforce productivity.

None of them are inherently good or bad. They all serve a purpose. We have to ask, “How do I preserve the uniqueness of what those different approaches offer, while bringing together the similarities?”

More efficient

How can you take advantage of similarities, such as the definition of roles, or which roles within the organization have access to what types of data? The commonalities may be contextual, in the sense that I’m going to provide this kind of data access if you are in these kinds of locations on these kinds of devices. Those things we could probably pull together and manage in a more efficient way.
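
A contextual rule of that shape -- this role, in these locations, on these devices, gets this class of data -- can be expressed as a simple policy check. The sketch below is a generic illustration with made-up roles, locations, and device classes, not Dell's product logic.

```python
# Generic illustration of a context-aware access policy (role, location,
# device); not Dell's actual implementation.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    role: str        # e.g. "clinician", "sales"
    location: str    # e.g. "on_site", "remote"
    device: str      # e.g. "managed_laptop", "byod_tablet"
    data_class: str  # e.g. "public", "internal", "regulated"

# Each rule: (role, allowed locations, allowed devices, max data class).
SENSITIVITY = ["public", "internal", "regulated"]
POLICIES = [
    ("clinician", {"on_site"}, {"managed_laptop", "byod_tablet"}, "regulated"),
    ("clinician", {"remote"}, {"managed_laptop"}, "internal"),
    ("sales", {"on_site", "remote"}, {"managed_laptop", "byod_tablet"}, "internal"),
]

def allowed(req: AccessRequest) -> bool:
    for role, locations, devices, max_class in POLICIES:
        if (req.role == role and req.location in locations
                and req.device in devices
                and SENSITIVITY.index(req.data_class) <= SENSITIVITY.index(max_class)):
            return True
    return False

print(allowed(AccessRequest("clinician", "remote", "byod_tablet", "regulated")))  # False
print(allowed(AccessRequest("clinician", "on_site", "byod_tablet", "regulated")))  # True
```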

But we still want to give companies the flexibility to determine what it means to support different form factors, which means you need to understand the characteristics of a wearable device versus a smartphone or an iPad.

I also need to understand the different use cases that are most prevalent in my organization. If I’m a factory worker, for example, it may be better to have a wearable in the future, rather than a tablet. In the medical field, however, tablets are probably preferred over wearables because of the need to enter, modify and view electronic medical records. So there are different tradeoffs, and we want to be able to support all of them.

Gardner: Looking again at the historical perspective, in the past when IT was faced with complexity -- too many moving parts, too many variables -- they could walk in and say, “Here’s the solution. This is the box we’ve put around it. You have to use it this way. That may cause you some frustration, but it will solve the bigger problem.” And they could get away with that.

Today, that’s really no longer the case. There’s shadow IT. There’s consumerization of IT. There are people using cloud services of their own volition without even going through the lines of business. It goes right down to the individual user. How does IT now find a way to get some control and get the needed enterprise requirements met, while recognizing that its ability to dictate terms is less than it used to be?

Kendra: You’re bringing up a very big issue. Companies today are getting a lot of pressure from individuals bringing in their own technology. One of the case studies you and I have been following for many months is Green Clinic Health System, a physician-owned community healthcare organization in Louisiana. As you know, Jason Thomas, the CIO and IT Director, has been very open about discussing their progress -- and the many challenges -- encountered on their BYOD journey. 

As part of Green Clinic’s goal to ensure excellent patient care, the 50 physicians started bringing in different technologies, including tablets and smartphones, and then asked IT to support them. This is a great example of what happens when major organizational stakeholders -- Green Clinic’s physicians, in this case -- make technology selections to deliver better service. With Green Clinic, this meant giving doctors and clinicians anytime, anywhere access to highly sensitive patient information on any Internet-connected device without compromising security or HIPAA compliance requirements. 

In other kinds of businesses, similar selection processes are underway as line-of-business owners are coming forward to request that different employees or organizational groups have access to information from a multitude of devices. Now, IT has to figure out how to put the security in place to make sure corporate information is protected while still providing the flexibility for users to do their jobs using preferred devices.

Shadow IT often emerges in scenarios where IT puts too many restrictions on device choice, which leads line-of-business owners and their constituents to seek workarounds. As we all know, this can open the door to all sorts of security risks. When we think about the Green Clinic example, you can see that Jason Thomas strives to be as flexible as possible in supporting preferred devices while taking all the necessary precautions to protect patient privacy and comply with HIPAA regulations.

Similar shift

Gardner: When we think about how IT needs to approach this differently -- perhaps embracing and extending what's going on, while also being mindful of those important compliance risk and governance issues -- we’re seeing a similar shift from the IT vendors.

I think there’s such a large opportunity in the market for mobile, for the modern data center, for the management of the data and the apps out to these devices, that we are seeing vendor models shifting, and we’re seeing acquisitions happening. What's different this time from the vendor perspective?

Kendra: The industry has to move from a position of providing a series of point-solutions to guiding and leading with a strategy for pulling all these things together. Again, it comes down to giving companies a plan for the future that keeps pace with their emerging requirements, accommodates existing skill sets and grows with them as mobility becomes more ingrained in their ways of doing business. That’s the game -- and that’s the hard part.

The types of solutions Dell is bringing to the market embrace what’s needed today while being flexible enough to accommodate future applications and evolving data access needs.
The goal is to leverage customers’ existing investments in their current infrastructures and find ways to build and expand on those with foundational elements that can scale easily as needs dictate. You can imagine a scenario in which an IT shop is not going to have the resources, especially in the mid-market, to embrace multiple separate ways of managing, securing, and granting access.

Long-term affair

Gardner: That’s why I think this is easily going to be a three- to five-year affair. Perhaps it will be longer, because we’re not just talking about plopping in a mobile device management capability. We’re really talking about rethinking processes, business models, productivity, and how you acquire working skills. We’re no longer just doing word processing instead of using typewriters. We’re not just repaving cow paths. We’re charting something quite new.

There is that interrelationship between the technology capabilities and the work. I think that’s something that hasn’t been thought out. Companies were perhaps thinking, “We'll just add mobile devices onto the roster of things that we support.” But that’s probably not enough. How does the vision from that aspect work, when you try to do both a technology shift and a business transformation?

Kendra: You used the term “plop in an MDM solution.” It's important to understand that the efforts and the initiatives that have taken place have all been really valuable. We’ve learned a lot. The issue, as you say, is how to evolve this into a strategy, and why.

Equally important is having an understanding of the business transformation that takes place when you put all these elements together -- it’s much more far-reaching than simply “plopping” in a point solution for a particular aspect.

In yesterday's world, I might have had the right or ability to wipe entire devices. Let’s look at the corporate-issued device scenario. The company owns the device and therefore owns the data that resides or is accessed on that device.  Wiping the device would be entirely within my domain or purview. But in a BYOD environment, I’m not going to be able to wipe a device. So, I have to think about things much differently than I did before.

As companies evolve their own mobility strategies, it’s important to leverage what they’ve learned while remaining focused on enhancing their users’ experiences, not sacrificing them. In fact, some of the research we’ve done suggests there is a very high reconsideration rate among companies weighing their current mobility solutions.

They’ve tried various approaches and point solutions. Some worked out, but others proved lacking, leaving gaps in usability, user adoption, and manageability. Our goal is to address and close those gaps.

Gardner: Let's get to what needs to happen. It seems to me that containerization has come to the fore as a way of accessing different types of applications, acquiring those applications perhaps on the fly rather than rolling them out to the entire workforce over time. Tell us a little bit more about how you see this working better, moving toward a more supported, agile, business-friendly and user-productive vision for the future of mobility.

Kendra: Giving users the ability to acquire applications on the fly is hugely important as users, based on their roles, need to have access to applications and data, and they need to have it served up in a very easy, user-friendly manner.

The crucial considerations here are role-based, potentially even location-based. Do I really want to allow the same kinds of access to information if I’m in a coffee house in China as I do if I am in my own office? Does data need to be resident on the device once I’m offline? Those are the kinds of considerations we need to think about.
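
As a hedged illustration of that kind of contextual decision, here is a minimal Python sketch in which network trust and device management status together determine both the access level and whether data may remain resident on the device offline. The trust tiers and rules are hypothetical:

    TRUSTED_NETWORKS = {"corp-hq", "corp-vpn"}  # hypothetical trust tiers

    def access_context(network, managed_device):
        """Attenuate access based on where and on what the user connects."""
        if network in TRUSTED_NETWORKS and managed_device:
            return {"level": "full", "offline_cache": True}
        if managed_device:
            # Managed device on an untrusted network: view only, and no
            # data left resident on the device once the session ends.
            return {"level": "view-only", "offline_cache": False}
        return {"level": "blocked", "offline_cache": False}

    print(access_context("corp-hq", managed_device=True))    # full access
    print(access_context("cafe-wifi", managed_device=True))  # view only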

Seamless experience

The need to ensure a seamless offline experience is where the issue of containerization arises. There are capabilities that enable users to view and access information in a secure manner when they’re connected to an Internet-enabled device.

But what happens when those same users are offline? Secure container-based workspaces allow me to take documents, data or other corporate information from that online experience and have it accessible whether I’m on a plane, in a tunnel or outside a wi-fi area.

The container provides a protected place to store, view, manage and use that data. If I need to wipe it later on, I can just wipe the information stored in the container, not the entire device, which likely will have personal information and other unrelated data. With the secure digital workspace, it’s easy to restrict how corporate information is used, and policies can be readily established to govern which data can go outside the container or be used by other applications.
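
Here is a minimal sketch of the container idea, assuming the third-party Python cryptography package (pip install cryptography). Corporate data is encrypted at rest inside the container, and a selective wipe works as a cryptographic erase -- destroying only the container's key leaves personal data on the device untouched. This illustrates the concept only, not Dell's implementation:

    from cryptography.fernet import Fernet

    class SecureContainer:
        """A toy container: corporate data encrypted at rest, selectively wipeable."""

        def __init__(self):
            self._key = Fernet.generate_key()  # held only by the workspace app
            self._records = {}                 # name -> ciphertext

        def put(self, name, data):
            self._records[name] = Fernet(self._key).encrypt(data)

        def get(self, name):
            if self._key is None:
                raise PermissionError("container has been wiped")
            return Fernet(self._key).decrypt(self._records[name])

        def wipe(self):
            # Cryptographic erase: discard the key and the ciphertext is
            # unrecoverable. Personal data outside the container is untouched.
            self._key = None
            self._records.clear()

    box = SecureContainer()
    box.put("patient_notes", b"sensitive record for offline use")
    print(box.get("patient_notes"))  # readable while the container is intact
    box.wipe()                       # a remote wipe touches only corporate data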

The industry is clearly moving in this direction, and it’s critical that we make it work across corporate applications.

Gardner: If I hear you correctly, Tom, it sounds as if we’re going to be able to bring down the right container, for the right device, at the right time, for the right process and/or data or application activity. That’s putting more onus on the data center, but that’s probably a good thing. That gives IT the control that they want and need.

It also seems to me that, when you have that flexibility on the device and you can manage sessions and roles and permissions, this can be a cost and productivity benefit to the operators of that data center. They can start to do better data management, dedupe, reduce their storage costs, and do backup and recovery with more of a holistic, agile or strategic approach. They can also meter out the resources they need to support these workloads with much greater efficiency, predict those workloads, and then react to them very swiftly.

We’ve talked so far about how difficult and tough this is. It sounds as if, when you crack this nut properly, not only do you get the benefit of the user experience and the mobility factor, but you can also do quite a bit of good IT blocking and tackling on the backend. Am I reading that correctly, or am I overstating it?

Kendra: I think you’re absolutely on the money. Take us as individuals. You may have a corporate-issued laptop. You might have a corporate-issued phone. You also may have an iPad, a Dell tablet, or another type of tablet at home. For me, it’s important to know what Tom Kendra has access to across all of those devices in a very simple manner.

I don’t want to set up a different approach based on each individual device. I want to set up a way of viewing my data, based on my role, permissions and work needs. Heretofore, it's been largely device-centric and management-centric, as opposed to user productivity role-centric.

Holistic manner

The Dell position -- and where we see the industry going -- is consolidating much of the management and security around those devices in a holistic manner, so I can focus on what the individual needs. In doing so, it’s much easier to serve up the appropriate data access in a fairly seamless manner. This approach rings true with many of our customers, who want to spend more resources on driving their businesses and facilitating increased user productivity, and fewer on managing a myriad of systems.
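
A minimal sketch of that user-centric model: policy is resolved once from a person's role and then applied uniformly to every device that person has enrolled, rather than being configured per device. All names here are hypothetical:

    # One policy per role, resolved from the user, not from each device
    # (hypothetical names throughout).
    ROLE_POLICY = {
        "sales": {"email": True, "crm": True, "require_pin": True},
    }

    # Every device a person has enrolled, corporate or personal.
    USER_DEVICES = {
        "tkendra": ["corp-laptop", "corp-phone", "personal-tablet"],
    }

    def apply_policy(user, role):
        """Push one user-level policy to all of the user's enrolled devices."""
        policy = ROLE_POLICY[role]
        return {device: policy for device in USER_DEVICES[user]}

    for device, policy in apply_policy("tkendra", "sales").items():
        print(device, policy)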

Gardner: By bringing the point of management -- the point of power, the point of control and enablement -- back into the data center, you’re also able to link up to your legacy assets much more easily than if you had to somehow retrofit those legacy assets out to a specific device platform or a device's format.

Kendra: You’re hitting on the importance of flexibility. Earlier, we said the user experience is a major driver, along with ensuring flexibility for both the employee and IT. Reducing risk exposure is another crucial driver, and by taking a more holistic approach to mobility enablement, we can address policy enforcement based on roles across all those devices. Not only does this lower exposure to risk, it elevates data security, since you’re addressing it from the user point of view instead of trying to sync up three or four different devices with multiple user profiles.

Gardner: And if I am thinking at that data center level, it will give me choices on where and how I create that data center, where I locate it, how I produce it, and how I host it. It opens up a lot more opportunity for utilizing public cloud services, or a combination that best suits my needs and that can shift and adapt over time.

Kendra: It really does come down to freedom of choice, doesn’t it? The freedom to use whatever device in whichever data center combination makes the most sense for the business is really what everyone is striving for. Many of Dell’s customers are moving toward environments where they are using both on-premise and off-premise compute resources. They think about applications as, “I can serve them up from inside my company or I can serve them up from outside my company.”

The issue comes down to the fact that I want to integrate wherever possible. I want to serve up the data and the applications when needed and how needed, and I want to make sure that I have the appropriate management and security controls over those things.

Gardner: Okay, I think I have the vision much more clearly now. I expect we’re going to be hearing more from Dell Software on ways to execute toward that vision. But before we move on to some examples of how this works in practice, why Dell? What is it about Dell now that you think puts you all in a position to deliver the means to accomplish this vision?

Kendra: Dell has relationships with millions of customers around the world. We’re a very trusted brand, and companies are interested in what Dell has to say. People are interested in where Dell is going. If you think about the PC market, for example, Dell has about an 11.9 percent worldwide market share. There are hundreds and hundreds of millions of PCs used in the world today. I believe there were approximately 82 million PCs sold during the third quarter of 2013.

The point here is that we have a natural entrée into this discussion and the discussion goes like this: Dell has been a trusted supplier of hardware and we’ve played an important role in helping you drive your business, increase productivity and enable your people to do more, which has produced some amazing business results. As you move into thinking about the management of additional capabilities around mobile, Dell has hardware and software that you should consider.

World-class technologies

Once we’re in the conversation, we can highlight Dell’s world-class technologies, including end-user computing, servers, storage, networking, security, data protection, software, and services.

As a trusted brand with world-class technologies and proven solutions, Dell is ideally suited to help bring together the devices and underlying security, encryption, and management technologies required to deliver a unified mobile enablement solution. We can pull it all together and deliver it to the mid-market probably better than anyone else.

So the Dell advantages are numerous. In our announcements over the next few months, you’ll see how we’re bringing these capabilities together and making it easier for our customers to acquire and use them at a lower cost and faster time to value.

Gardner: One of the things that I'd like to do, Tom, is not just to tell how things are, but to show. Do we have some examples of organizations -- you already mentioned one with the Green Clinic -- that have bitten the bullet and recognized the strategic approach, the flexibility on the client, leveraging containerization, retaining control and governance, risk, and compliance requirements through IT, but giving those end-users the power they want? What's it like when this actually works?

Kendra: When it actually works, it's a beautiful thing. Let’s start there. We work with customers around the world and, as you can imagine, given people's desire for their own privacy, a lot of them don't want their names used. But we’re working with a major North American bank that has the problems that we have been discussing.

They have 20,000-plus corporate-owned smartphones, growing to some 35,000 in the next year. They have more than a thousand iPads in place, growing rapidly. They have a desktop virtualization (VDI) solution, but the VDI solution, as we spoke about earlier, really doesn't support the offline experience that they need.

They are trying to leverage an 850-person IT department that has worldwide responsibilities, all the things that we spoke about earlier. And they use technology from companies that haven’t evolved as quickly as they should have. So they're wondering whether those companies are going to be around in the future.

This is the classic case of, “I have a lot of technology deployed. I need to move to a container solution to support both online and offline experiences, and my IT budget is being squeezed.” So how do you do this? It goes back to the things we talked about.

First, I need to leverage what I have. Second, I need to pick solutions that can support multiple environments rather than a point solution for each environment. Third, I need to think about the future, and in this case, that entails a rapid explosion of mobile devices.

I need to mobilize rapidly without compromising security or the user experience. The concept of an integrated suite of policy and management capabilities is going to be extremely important to my organization going forward.

Mobile wave

Gardner: Dell is approaching this enterprise mobility management market with an aggressive perspective, recognizing a big opportunity that it is uniquely positioned to pursue. There’s not too much emphasis on the client alone and not just emphasis on the data center. It really needs to be a bridging type of value-add these days. Can you tease us a little bit about some upcoming news? What should we expect next?

Kendra: The solutions we announced in April essentially laid out our vision of Dell’s evolving mobility strategies. We talked about the need to consolidate mobility management systems and streamline enablement. We focused on the importance of leveraging world-class security, including secure remote access and encryption. And the market has responded well to Dell's point of view.

As we move forward, we have the opportunity to get much more prescriptive in describing our unified approach that consolidates the capabilities organizations need to ensure secure control over their corporate data while still ensuring an excellent user experience.

You’ll see more from us detailing how those integrated solutions come together to deliver fast time to value. You'll also see different delivery vehicles, giving our customers the flexibility to choose from on-premise, software-as-a-service (SaaS)-based, or cloud-based approaches. You'll see additional device support, and you'll see containerization.

Leverage advantages

We plan to leverage our advantages -- our best-in-class capabilities around security, encryption, and device management, and this common-functionality approach -- in upcoming announcements.

As we take the analyst community through our end-to-end mobile/BYOD enablement plans, we’ve gotten high marks for our approach and direction. Our discussions involving Dell’s broad OS support, embedded security, unified management and proven customer relationship all have been well received.

Our next step is to make sure that, as we announce and deliver in the coming months, customers absolutely understand what we have and where we're going. We think they're going to be very excited about it. We think we're in the sweet spot of the mid-market and the upper mid-market in terms of the solutions they need to achieve their mobile enablement objectives.

We also believe we can provide a unique point-of-view and compelling technology roadmaps for those very large customers who may have a longer journey in their deployments or rollout.

We're very excited about what we're doing. The specifics of what we're doing play out in early December, January, and beyond. You'll see a rolling thunder of announcements from Dell, much like we did in April. We’ll lay out the solutions. We’ll talk about how these products come together and we’ll deliver.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Dell Software.
