Friday, September 20, 2013

Are you ready for the convergence of new disruptive technologies?

The following guest post comes courtesy of Dr. Chris Harding, Director of Interoperability and SOA at The Open Group.

By Chris Harding

The convergence of technical phenomena such as cloud, mobile and social computing, big data analysis, and the Internet of things that is being addressed by The Open Group’s Open Platform 3.0 Forum will transform the way that you use information technology. Are you ready? Take our survey at https://www.surveymonkey.com/s/convergent_tech

What the technology can do

Mobile and social computing are leading the way. Recently, the launch of new iPhone models and the announcement of the Twitter stock flotation were headline news, reflecting the importance that these technologies now have for business. For example, banks use mobile text messaging to alert customers to security issues. Retailers use social media to understand their markets and communicate with potential customers.

Other technologies are close behind. In Formula One motor racing, sensors monitor vehicle operation and feed real-time information to the support teams, leading to improved design, greater safety, and lower costs. This approach could soon become routine for cars on the public roads too. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Many exciting new applications are being discussed. Stores could use sensors to capture customer behavior while browsing the goods on display, and give them targeted information and advice via their mobile devices. Medical professionals could monitor hospital patients and receive alerts of significant changes. Researchers could use shared cloud services and big data analysis to detect patterns in this information, and develop treatments, including for complex or uncommon conditions that are hard to understand using traditional methods. The potential is massive, and we are only just beginning to see it.

What the analysts say

Market analysts agree on the importance of the new technologies.

Gartner uses the term “Nexus of Forces” to describe the convergence and mutual reinforcement of social, mobile, cloud and information patterns that drive new business scenarios. Although these forces are innovative and disruptive on their own, Gartner says, together they are revolutionizing business and society, disrupting old business models and creating new leaders.

IDC predicts that a combination of social, cloud, mobile, and big data technologies will drive around 90% of all the growth in the IT market through 2020, and uses the term “Third Platform” to describe this combination.

The Open Group will identify the standards that will make Gartner’s Nexus of Forces and IDC’s Third Platform commercial realities. This will be the definition of Open Platform 3.0.

Disrupting enterprise use of IT

The new technologies are bringing new opportunities, but their use raises problems. In particular, end users find that working through IT departments in the traditional way is not satisfactory. The delays are too great for rapid, innovative development. They want to use the new technologies directly – “hands on.”

Increasingly, business departments are buying technology directly, bypassing their IT departments. Traditionally, the bulk of an enterprise’s IT budget was spent by the IT department, and went on maintenance. A significant proportion is now spent by the business departments, on new technology.

Business and IT are not different worlds any more. Business analysts are increasingly using technical tools, and even doing application development, using exposed APIs. For example, marketing folk do search engine optimization, use business information tools, and analyze traffic on Twitter. Such operations require less IT skill than formerly because the new systems are easy to use. Also, users are becoming more IT-savvy. This is a revolution in business use of IT, comparable to the use of spreadsheets in the 1980s.
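As a toy illustration of the kind of hands-on analysis mentioned above, a marketing analyst might tally brand mentions across a batch of tweets pulled from a search API. Everything here (the brand names, the sample tweets, the `brand_mentions` helper) is invented for the sketch, not part of the original post:

```python
from collections import Counter

def brand_mentions(tweets, brands):
    """Count how often each brand name appears across a batch of tweets.

    tweets: list of tweet texts (e.g. fetched from a social media API).
    brands: iterable of brand names to look for (case-insensitive).
    """
    counts = Counter()
    lowered = [b.lower() for b in brands]
    for text in tweets:
        t = text.lower()
        for brand in lowered:
            if brand in t:
                counts[brand] += 1
    return counts

# Example batch, as an analyst might pull from a search endpoint:
sample = [
    "Loving the new Acme phone!",
    "acme support was slow today",
    "Switching from Acme to Globex",
]
print(brand_mentions(sample, ["Acme", "Globex"]))
```

In practice the `sample` list would come from a social media API call, with the usual authentication and rate limits, but the analysis itself needs no traditional IT involvement.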

Also, business departments are hiring traditional application developers, who would once have only been found in IT departments.

Are you ready?

These disruptive new technologies are changing, not just the IT architecture, but also the business architecture of the enterprises that use them. This is a sea change that affects us all.

The introduction of the PC had a dramatic impact on the way enterprises used IT, taking much of the technology out of the computer room and into the office. The new revolution is taking it out of the office and into the pocket. Cell phones and tablets give you windows into the world, not just your personal collection of applications and information. Through those windows you can see your friends, your best route home, what your customers like, how well your production processes are working, or whatever else you need to conduct your life and business.

This will change the way you work. You must learn how to tailor and combine the information and services available to you, to meet your personal objectives. If your role is to provide or help to provide IT services, you must learn how to support users working in this new way.

To negotiate this change successfully, and take advantage of it, each of us must understand what is happening, and how ready we are to deal with it.

The Open Group is conducting a survey of people’s reactions to the convergence of cloud and other new technologies. Take the survey to record your state of readiness and get early sight of the results, so that you can see how you compare with everyone else.

To take the survey, visit https://www.surveymonkey.com/s/convergent_tech



Thursday, September 19, 2013

How MZI HealthCare identifies big data patient productivity gems using HP Vertica

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

The next edition of the HP Discover Performance Podcast Series details how a healthcare solutions provider leverages big-data capabilities.

Learn how MZI Healthcare has deployed the HP Vertica Analytics Platform to help their customers better understand population healthcare trends and identify how well healthcare processes are working.

To discover more about how high-performance and cost-effective big-data processing forms a foundational element to improving overall healthcare quality and efficiency, join Greg Gootee, Product Manager at MZI Healthcare, based in Orlando. The discussion, which took place at the recent HP Vertica Big Data Conference in Boston, is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]
 
Here are some excerpts:
Gardner: How important is big data turning out to be for how healthcare is being administered?

Gootee: Change in healthcare is really dramatic, maybe more dramatic than in any other industry. Other industries have been able to spread that change over time, but in healthcare it's being rapidly accelerated.

In the past, data had been stored in multiple systems and multiple areas on given patients. It's been difficult for providers and organizations to make informed decisions about that patient and their healthcare. So we see a lot of change in being able to bring that data together and understand it better.

Gardner: Tell us about MZI, what you do, who your customers are, and where you're going to be taking this big data ability in the future.

Gootee: MZI Healthcare has predominantly been working on the payer side. We have a product that's been on the market for over 25 years, helping with benefit administration for payers, different independent physician associations (IPAs), and third-party administrators (TPAs).

Our customers have always had a very tough time bringing in data from different sources. A little over two years ago, MZI decided to look at how we could leverage that data to help our customers better understand their risk and their patients, and ultimately change the outcomes for those patients.

Predictive analysis

Gardner: I think that's how the newer regulatory environment is lining up in terms of compensation. This is about outcomes, rather than procedures. Tell us about your requirements for big data in order to start doing more of that predictive analysis.

Gootee: If you think about how data has been stored in the past for patients across their continuum of care, as they went from facility to facility and physician to physician, it's been spread far apart. It's been difficult even to understand how treatments are affecting that patient.

I've talked a lot about my aunt in previous interviews. Last year, she went into a coma, not because the doctors weren't doing the right thing, but because they were unable to understand what the other doctors were doing.

She went to many specialists and took medication from each one of them to help with her given problem, but there was an interaction between the medications. They didn't even know if she’d come out of the coma.

These things happen every day. Doctors make informed decisions from their experience and the data that they have. So it's critical that they can actually see all the information that's available to them.

When we look at healthcare and how it's changing, for example the Affordable Care Act, one of the main focuses is obviously cost. We all know that healthcare is growing at a rate that's just unsustainable, and while that's the main focus, it's different this time.

We've done that before. In the Clinton Administration we had a kind of HMO and it really made a dramatic difference on cost. It was working, but it didn't give people a choice. There was no basis on outcomes, and the quality of care wasn't there.

This time around, that's probably the major difference. Not only are we trying to reduce cost, but we are trying to increase the care that's given to those patients. That's really vital to making the healthcare system a better system throughout the United States.

Gardner: Given the size of the data and its disparate nature, more and more human data will be brought to bear. What were your technical requirements, and what was the journey that you took in finding the right infrastructure?

Gootee: We had a couple of requirements that were critical. When we work with small and medium-sized businesses (SMBs), they really don't have the funds to put in a large system themselves. So our goal was to do something similar to what Apple has done with the iPhone. We wanted to take multiple things, put them into one area, and reduce the price point for our customers.

One of the critical things that we wanted to look at was overall price point. That included how we manage those systems and, when we looked at Vertica, one of the things that we found very appealing was that the management of that system is minimal.

High-end analytics

The other critical thing was speed, being able to deliver high-end analytics at the point of care, instead of two or three months later, and Vertica really produced. In fact, we did a proof of concept with them. It was almost unbelievable some of the queries that ran and the speed at which that data came back to us.

You hear things like that and see it throughout the conference: no matter what volume you may have, it's very good. Those were some of our requirements, and we were able to put that in the cloud. We run in the Amazon cloud, and we were able to deliver that content to the people who need it, at the right time, at a really low price point.

Gardner: Let me understand also the requirement for concurrency. If you have this posted on Amazon Web Services, you're then opening this up to many different organizations and many different queriers. Is there an issue for the volume of queries happening simultaneously, or concurrency? Has that been something you've been able to work through?

Gootee: That's another value-add that we get. The ability to expand and scale the Vertica system, along with the scalability that we get with Amazon, allows us to deliver that information. No matter what type of queries we're getting, we can expand automatically. We can grow with that need, and it really makes a large difference in how we can be competitive in the marketplace.
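Gootee doesn't detail the mechanics, but the fan-out he describes can be sketched with a worker pool whose size, in an elastic deployment, would track the capacity currently provisioned. The `execute` placeholder stands in for a real analytics query; all names here are assumptions, not MZI's actual design:

```python
from concurrent.futures import ThreadPoolExecutor

def run_queries(queries, max_workers=4):
    """Fan a batch of independent queries out across a worker pool.

    In an elastic cloud deployment, max_workers would be tied to the
    number of nodes currently provisioned, and could grow with demand.
    """
    def execute(q):
        # Placeholder for a real analytics call (e.g. a SQL query).
        return f"result of {q}"

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map preserves input order even though execution is concurrent
        return list(pool.map(execute, queries))

print(run_queries(["q1", "q2", "q3"]))
```

Raising `max_workers` (or, in the cloud, adding nodes) increases how many concurrent queries can be absorbed without queuing.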

Gardner: I suppose another dynamic to this on the economic side is the predictability of your cost.

Gootee: If you look at traditional ways that we've delivered software or content before, you always over-buy, because you don’t know what demand is going to be. Then, at some point, you don't have enough resources to deliver. Cloud services take some of that unknown away. It lets you scale as you need it and scale back if you don't need it.

So it's the flexibility for us. We're not a large company, and what's exciting about this is that these technologies help us do the same thing that the big guys do. It really lets our small company compete in a larger marketplace.

Gardner: Going to the population health equation and the types of data and information, is this something that's of interest to you? How important is this ability to get at all the information in all the different formats as you move forward?

Gootee: That's very critical for us. The way we interact in America and around the world has changed a lot. The HP HAVEn platform provides us with some opportunities to improve on what we have with healthcare's big security concerns, and the issue of the mobility of data. Getting it anywhere is critical to us, as well as better understanding how that data is changing.

We've heard from a lot of companies here that really are driving that user experience. More and more companies are going to be competing on how they can deliver things to a user in the way that they like it. That's critical to us, and that [HP] platform really gives us the ability to do that.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Wednesday, September 18, 2013

Synthetic APIs approach improves fragmented data acquisition for Thomson Reuters’ content sharing platform

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Kapow Software, a Kofax company.

The next BriefingsDirect innovator interview examines the improved data use benefits at Thomson Reuters in London.

As part of a discussion series on how innovative companies are dodging data complexity through the use of Synthetic APIs, learn here how inventive companies across many different industries and regions of the globe are able to get the best information delivered to those who can act on it, with speed and at massive scale.

Here to explain how improved information integration and delivery can be made into business success, we're joined by Pedro Saraiva, product manager for Content Shared Platforms and Rapid Sourcing at Thomson Reuters. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Kapow Software is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: You first launched Thomson Reuters’ content-sharing platform over four years ago, after joining the company in 1996. The platform now enables agile delivery of automated content-acquisition solutions across a range of content areas. What are you delivering, and to whom?

Saraiva: It's actually very simple. We're a business that requires a lot of information, a lot of data, because our business is information -- intelligence information -- and we need to do that in a cost-efficient manner. Part of that requires us to have the best technology. When we started four years ago, one of the most obvious patterns that we found was a lot of fragmentation in our content-acquisition processes: in where they were based, in who was doing them, and, more importantly, in what processes they were following or not following.

The opportunity that we immediately saw was to consolidate it all, not just around the central capability, but into an optimal capability, with real experts around it making it work and effectively creating a platform as a service (PaaS) for our internal experts in each content area to perform their tasks just as usual, but faster, better, more reliably, and more consistently.

Fundamentally, we are a platform for web-content acquisition. And that is part of our content-shared platform because it's all part of a bigger picture, where we take content from so many sources and many different kinds of sources, and not just web.

Content management

I don't know the exact percentage, but I would guess that about half of what we do is content management, rather than site technology, per se. And a lot of those content management tasks are highly specialized because that's the only way we're going to add value. We're going to understand the content, where it comes from, what it means, and we are going to present it and structure it in the best possible way for our customers.

So, the needs of our internal groups and internal content teams are huge, very demanding, and very specialized. But they all have certain things in common. We found many of them were using Excel macros or some other technologies to perform their activities.

We tried to capture what was common, in spite of all that diversity, to leverage the best possible value from the technology that we have, and from our know-how, expertise, and best practices around how to source content and how to comply with the required rules. By producing consistent, high-quality data, we could claim to our customers that they could trust our content, because we know exactly what happened to it from beginning to end.

Gardner: Thomson Reuters is a large company. Tell us how large, and tell us some numbers around the number of different units within the company that you are providing this data to.

Saraiva: We have about 50,000 employees worldwide in the majority of countries. For example, our news operations have reporters on the ground throughout the world.

We have all languages represented, both internally and in terms of our customers, and the content that we provide to our customers. We're a truly diverse organization.

We have a huge number of individual groups organized around the types of customers that we serve. Are they global? Are they regional? Are they local? Are they large organizations? Are they small organizations? Are they hedge funds? Are they fund managers? Are they investment banks? Are they analysts? We have a variety of customers that we serve within each of our customer organizations around the world.

And that degree of specialty that I mentioned earlier, at some point, has to take shape. It takes shape in the vast number of different teams we have specializing in one kind of content. It may be, perhaps, just a language, French or Chinese. It may be fundamentals, versus real-time data. We have to have the expertise and the centers of excellence for each of those areas, so that we really understand the content.

Gardner: You had massive redundancy in how people would go about this task of getting information from the web. It probably was costly. When you decided that you wanted to create a platform and have a centralized approach to doing this, what were the decisions that you made around technology? What were some of the hurdles that you had to overcome?

Saraiva:  We were looking for a platform that we would be able to support and manage in a cost-effective manner. We were looking for something that we could trust and rely on. We were looking for something that our users could make sense of and actually be productive with. So, that was relatively simple.

The biggest challenge, in my opinion, from the start, was that it's very hard to take a big organization with an inherently fragmented set of operating units and try to change it by introducing a single, central capability. It sounds great on paper, but when you start trying to persuade your users that there's value to them in migrating their current processes, they'll be concerned that the change is not in their interest.

Demonstrating value

And there is a degree of psychology at work: not only in facing the reluctance that all businesses have to face, but also in influencing it positively and demonstrating that the value to our end users was far in excess of the threat that they perceived.

I can think of examples that are truly amazing, in my opinion. One is about the agility that we've gained through the introduction of technology such as this one, and not just the user of that technology, but the optimal use of it. Some time ago, before RSA was used in some departments, we had important customers who had an urgent, desperate need for a piece of information that we happened not to have, for whatever reason. It happens all the time.
We tried to politely explain that it might take us a while, because it would have to go through a development team that traditionally builds C++ components. They were a small team and they were very busy. They had other priorities. Ultimately, that little request, for us, was a small part of everything we were trying to do. For that customer, it was the most important thing.

The conversation to explain why it was going to take so long, and why we were not giving them the importance that they deserved, was a difficult one to have. We wanted to be better than that. Today, you can build a robot quickly. You can do it and plug it into the architecture that we have, so that the customer can very quickly see it appearing almost in real time in their product. That's an amazing change.

Gardner: What was the story behind your adoption of this?

Saraiva: We spent some time looking at the technologies available. We spoke with a number of other customers and other people we knew. We did our own research, including a little bit of the shotgun kind of research that you tend to do on the Internet, trying to find what's available. Very quickly, we had a short list of five technologies or so.

All of them promised to be great, but ultimately they had to pass the acid test: evaluation by our technical-operations experts. Is this something that we are able to run? And evaluation against the capabilities we were expecting, which were quite demanding, because we had a variety of users that we needed to cater to.

But ultimately, most importantly, we needed the confidence that we could get our job done. If we are going to invest in a given technology, we want to know that it can be used to solve a given kind of problem without too much fuss, complexity, or delay, because if that doesn't happen, you have a problem. You have only partially achieved the promise, and you will forever be chasing alternatives to fill that gap.

Kapow absolutely gives us that kind of confidence. Our developers, who at first had a little bit of skepticism about the ability of a tool to be so amazing, tried it. After the first robot, typically, their reaction was "Wow." They love it, because they know they can do their job. And that's what we all want. We want to be able to do our jobs. Our customers want to use our products to do their jobs. We're all in the same kind of game. We just need to be very, very good at what we do. Kapow gave us that.

Critically important

With Kapow, it was a straightforward process. We just click, follow the process that really mirrors a complex workflow in the flow chart that we designed, and the job is done.
In terms of the rapid development of the solutions, it was at least a reduction from several months to weeks. And this is typical. You have cases where it's much faster. You have cases where it's slower, because there are complex, high-risk automation processes that we need to take some time to test. But the development process is shortened dramatically.

Gardner: We were recently at the Kapow User Summit. We've been hearing about newer versions, the Kapow platform 9.2. Is there anything in particular that you've heard here so far that has piqued your interest? Something you might be able to apply to some of these problems right away?

Saraiva: A lot of what we've been doing and focusing on over the last four years was around a pattern whereby we have data flowing into the company, being processed and transformed. We're adding our value, and it's flowing out to our customers. There is, however, another type of web sourcing and acquisition that we're now beginning to work with which is more interactive. It's more about the unpredictable, unplanned need for information on demand.
The main advantage of a cloud-based service running Kapow would be in freeing us from the hassle of having to manage our own infrastructure.

There, interestingly, we have the problem of integrating the button that produces that fetch for data into the end-user workflows. That was something that was not possible with previous versions of Kapow or not straightforward. We would have to build our own interfaces, our own queues, and our own API to interface with the robo-server.

Now, with Kapplets, it all looks very straightforward, because we can easily see that we could have an optimized workflow solution or tool for some of our users that embeds a Kapplet, allowing a user to perform research on demand, perhaps on a customer, perhaps on a company, for the kind of data that we wouldn't traditionally be acquiring on a constant, fixed basis.
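Kapow's robot tooling is visual rather than code-based, but the underlying "synthetic API" idea, wrapping screen scraping behind an ordinary function call, can be sketched with stdlib Python. The HTML snippet, the class names, and the `fetch_products` function are all invented for illustration:

```python
from html.parser import HTMLParser

class PriceScraper(HTMLParser):
    """Pull (name, price) pairs out of a product-listing page.

    A toy stand-in for a Kapow-style 'robot': callers get structured
    records through a function call, not raw markup.
    """
    def __init__(self):
        super().__init__()
        self._field = None
        self.rows = []

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if cls in ("name", "price"):
            self._field = cls

    def handle_data(self, data):
        if self._field == "name":
            self.rows.append([data, None])
        elif self._field == "price":
            self.rows[-1][1] = data
        self._field = None

def fetch_products(html):
    """The 'synthetic API': structured records from unstructured markup."""
    scraper = PriceScraper()
    scraper.feed(html)
    return [tuple(r) for r in scraper.rows]

page = '<div><span class="name">Widget</span><span class="price">$9</span></div>'
print(fetch_products(page))
```

A real robot would add fetching, pagination, and error handling; the point is that downstream consumers see an API, never the web page behind it.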
 
Gardner: Any advice that you might offer to others who are grappling with similar issues around multiple data sources, not being able to use APIs, needing a synthetic API approach?

Saraiva: I suppose the most important message I would want to share is about confidence in technology. When I started this, I had worked for years in technology, many of those years in web technology, some complex web technology. And yet, when I started thinking about web content acquisition, I didn't really think it could be done very well.

I thought this is going to be a challenge, which is partly the reason why I was interested in it. And I've been amazed at what is possible with technologies such as Kapow. So, my message would be don't worry that technology such as Kapow will not be able to do the job for you. Don't fear that you will be better off using your own bespoke C++ based solution. Go for it, because it really works. Go for it and make the most of it, because you will need it with so much data, especially on the Internet. You have to have that.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Kapow Software, a Kofax company.


Tuesday, September 17, 2013

When real-time is no longer good enough, the predictive business emerges

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SAP Cloud.

The next BriefingsDirect thought leadership discussion defines a momentous shift in business strategy. Join an SAP Cloud executive as we explore the impact that big data, cloud computing, and mobility are having in tandem on how businesses must act -- and react -- across their markets.

Explore how the agility goal of real-time responses is no longer good enough. What’s apparent across more and more business ecosystems is that businesses must do even better, becoming so data-driven that they extend their knowledge and ability to react well into the future. In other words, we're now all entering the era of the predictive business.

To learn more about how heightened competition amid a data revolution requires businesses and IT leaders to adjust their thinking to anticipate the next, and the next, and the next moves on their respective chess boards, join Tim Minahan, the Chief Marketing Officer for SAP Cloud, and moderator Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: SAP Cloud is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: It’s hard to believe that the pace of business agility continues to accelerate. Tim, what’s driving this time-crunch? What are some of the changes afoot that require this need for -- and also enabling the capabilities to deliver on -- this notion of predictive business? We're in some sort of a rapid cycle of cause and effect, and it’s rather complicated.

Minahan: This is certainly not your father’s business environment. Big is no longer a guarantee of success. If you just look at the past 10 years, 40 percent of the Fortune 500 was replaced. So the business techniques and principles that worked 10, five, or even three years ago are no longer relevant. In fact, they may be a detriment to your business.

Just ask companies like Tower Records, Borders Bookstore, or any of the dozens more goliaths that were unable or unwilling to adapt to this new empowered customer or to adapt new business models that threatened long-held market structures and beliefs.

The world, as you just said, is changing so unbelievably fast that the only constant is change. And to survive, businesses must constantly innovate and adapt. Just think about it. The customer today is now more connected and more empowered and more demanding.

You have one billion people in social networks that are talking about your brand. In fact, I was just reading a recent study that showed Fortune 100 companies were mentioned on social channels like Facebook, Twitter, and LinkedIn a total of 10.5 million times in one month. These comments are really shaping your brand image. They're influencing your customer’s views and buying decisions, and really empowering that next competitor.

But the consumer, as you know, is also mobile. There are more than 15 billion mobile devices, which is scary. There are twice as many smartphones and tablets in use as there are people on the planet. It’s changing how we share information, how we shop, and the levels of service that customers expect today.

It’s also created, as you stated, a heck of a lot of data. More data was created in the last 18 months than had been created since the dawn of mankind. That’s a frightening fact, and the amount of data on your company, on your consumer preferences, on buying trends, and on you will double again in the next 18 months.

Changing consumer

The consumer is also changing. We're seeing an emerging middle class of five billion consumers sprouting up in emerging markets around the world. Guess what? They're all unwired and connected in a mobile environment.

What's challenging for your business is that you have a whole new class of millennials entering the workforce. In fact, by next year, nearly half of the workforce will have been born after 1980 -- making me feel old. These workers just grew up with the web. They are constantly mobile.

These are workers that shun traditional business structures of command-and-control. They feel that information should be free. They want to collaborate with each other, with their peers and partners, and even competitors. And this is uncomfortable for many businesses.

In this always-on, always-changing world, as you said, real time just isn’t enough anymore. Knowing in real time that your manufacturing plant went down and you won’t be able to make the holiday shipping season -- it’s just knowing that far too late. Or knowing that your top customer just defected to your chief competitor in real time is knowing that far too late. Even learning that your new SVP of sales, who looks so great on paper, is an awful fit with your corporate culture or your go-to-market strategy is just knowing that far too late.

But to your point, what disrupts can also be the new advantage. So technology, cloud, social, big data, and mobile are all changing the face of business. The need is to exploit them and not to be disrupted by them.

Gardner: How does a predictive business create a whole greater than the sum of the parts when we think about this total shift going on?

Minahan: I want to be clear here that the predictive business isn't just about advanced analytics. It’s not just about big data. That’s certainly a part of it, but just knowing something is going to happen, just knowing about a market opportunity or a pending risk just isn’t enough.

You have to have that capacity and insight to assess a myriad of scenarios to detect the right course of action, and then have the agility in your business processes, your organizational structures, and your systems to be able to adapt to capitalize on these changes.

Too often, we get enamored with the technology side of the story, but the biggest change that’s going to occur in business is going to be the culture change. There's  the need to adapt to this new millennial workforce and this new empowered customer and the need to reach this new emerging middle-class around the world.

In today’s fast-paced business world, companies really need to be able to predict the future with confidence, assess the right response, and then have the agility organizationally and systems-wise to quickly adapt their business processes to capitalize on these market dynamics and stay ahead of the competition.

They need to be able to harness the insights of disruptive technologies of our day, technologies like social, business networks, mobility, and cloud to become this predictive business.

Not enough

Gardner: Tim, you and I have been talking for several years now about the impact of cloud. We were also trying to be predictive ourselves and to extrapolate and figure out where this is going. I think it turns out that it’s been even more impactful than we thought.

Minahan: The original discussion was all about total cost of ownership (TCO). It was all about the cost benefits of the cloud. While the cloud certainly offers a cost advantage, the real benefit the cloud brings to business is in two flavors -- innovation and agility.

You're seeing rapid innovation cycles -- albeit incremental innovation updates -- several times per year, which are much more digestible for a company. They can see something coming, request an innovation update, and have their technology partner adapt and deliver new functionality several times a year that’s immediately available to everyone.

Then there's now the agility at the business level to configure new business processes without costly IT or consulting engagements. With some of the more advanced cloud platforms, they can even create their own process extensions to meet the unique needs of their industry and their business.

You're already seeing examples of the predictive business in action across industries today. Leading companies are turning that combination of insight, the big data analytics, and these agile computing models and organizational structures into entirely new business models and competitive advantage.

Strategic marketing

Let’s just look at some of these examples. Take Cisco, whose strategic marketing organization not only mines historical data around what prompted people to buy, what they have bought, and what their profiles were, but marries that with real-time social media mentions to ferret out customers who reveal a high propensity and readiness to buy.

They then arm their sales team, pushing these signals out to the sales force and recommending the right offer that would likely convert that customer. That had a massive impact: they saw a sales uplift of more than $4 billion by bringing all of those activities together.
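
Minahan's description amounts to a simple propensity-scoring pipeline. A minimal sketch, assuming made-up prospect names, intent keywords, and blend weights (nothing here reflects Cisco's actual system):

```python
# Hedged sketch: blend a historical buyer-profile score with
# real-time social-mention signals to rank prospects by readiness
# to buy. All names, keywords, and weights are illustrative.
def readiness(profile_score, mentions, intent_keywords=("pricing", "demo", "vs")):
    """profile_score: 0-1 fit from historical purchase data.
    mentions: recent social posts by or about the prospect."""
    intent_hits = sum(
        any(kw in post.lower() for kw in intent_keywords) for post in mentions
    )
    # Weighted blend; the 0.6/0.4 split is an assumption, not derived from data.
    return 0.6 * profile_score + 0.4 * min(intent_hits / 3, 1.0)

# Hypothetical prospects scored from invented data.
prospects = {
    "acme": readiness(0.9, ["Looking at pricing for new routers", "Great demo today"]),
    "globex": readiness(0.7, ["We love our current vendor"]),
}
# Push the highest-readiness prospects to the sales force first.
ranked = sorted(prospects, key=prospects.get, reverse=True)
```

A real system would feed the ranked list into the CRM alongside the recommended offer, but the core idea is the same: historical fit plus real-time intent signals, combined into one actionable score.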

It’s not just in the high-tech sector. I know we talk about that a lot, but we see it in other industries like healthcare. Mount Sinai Hospital in New York examined historical treatment approaches, survival rates, and stay durations to determine the right treatments to optimize care and patient throughput.

It constantly runs and adapts simulations to optimize its patients' first 8-12 hours in the hospital. With improved utilization based on those insights and the ability to adapt how they're handling their patients, the hospital not only improved patient health and survival rates, but also achieved the financial effect of adding hundreds of new beds without physically adding one.

In fact, if you look at it, the whole medical industry is built on predictive business models using the symptoms of millions of patients to diagnose new patients and to determine the right courses of action.

Closer to home for you, Dana, there is also an example of the predictive business. I don’t know if you've read Nate Silver's phenomenal book, "The Signal and the Noise," but he talks about going beyond Moneyball, and how the Boston Red Sox were using predictive systems that have really changed how baseball drafts rookie players.

The difference between Moneyball and rookies is that rookies don’t have a record in the pros. There's no basis from which to determine what their on-base percentage will be or how they will perform. But this predictive model goes beyond standard statistics here and looks at similar attributes of other professional players to determine who are the right candidates that they should be recruiting and projecting what their performance might be based on a composite of other players that have like-attributes.

Their first example of this on the Red Sox was with Dustin Pedroia, who no one wanted to recruit. They said he was too short, too slow, and not the right candidate to play second base. But using this new model, the Red Sox modeled him against previous players and found out some of the best second basemen in the world actually have similar attributes.

So they took him early in the draft. In 2007, his first full season, he took the Rookie of the Year title and helped the Red Sox win the World Series for only the second time since 1918. He went on to win the MVP the following year, and he’s been a top all-star performer ever since.

So all around us, businesses are beginning to adapt and take advantage of these predictive business models.

Change in thinking

Gardner: It's curious that when you do take a data-driven approach, you have to give up some of the older approaches around intuition, gut instinct, or some of the metrics that used to be important. That really requires you to change your thinking and, rather than go to the highest paid person’s opinion when you need to make a decision, it's really now becoming more of a science.

So what do you get Tim when you do this correctly? What do businesses get when they become more data-driven, when they adjust their culture, take advantage of some of the new tools, and recognize the shift, the consumer behavior? How impactful can this be?

Minahan: It can be tremendously impactful. We truly believe that you get a whole new world of business. You get a business model and organizational and systems infrastructure that has the ability to adapt to all the massive transformation and the rapid changes that we discussed earlier. We believe the predictive business will transform every function within the enterprise and across the value chain.

Just think of sales and marketing. Sales and marketing professionals will now be empowered to engage customers like never before by tapping into social activity, buying activity on business networks, and geo-location insights to identify prospects and develop optimal offers and engage and influence prospective customers right at the point of purchase.

I think of pushing offers and coupons to the mobile devices of prospective buyers based on their social fingerprint and their actual physical location. Or think of service organizations. We talk about this Internet of Things, and we haven’t even scratched the surface on it, but they can massively drive customer satisfaction and loyalty to new levels by predicting and proactively resolving potential product or service disruptions even before they happen.

Think about your device being able to send a signal and demonstrate a propensity to break down in the future. It may be possible to send a firmware update to fix it without your even knowing.

That’s the power that we’ve already seen with this type of thing in the supply chain. Procurement, logistics and supply chain teams are now being alerted to potential future risks in their sub-tier supply chains and being guided to alternative suppliers based on optimal resolutions and community-generated ratings and buying patterns of like buyers on a business network. We've talked about that in the past.

We really believe that the future of business is the predictive business. The predictive business is not going to be an option going forward. It's not a luxury. It will be what's required not only to win, but eventually, to survive. Your customers are demanding it, your employees are requiring it, and your livelihood is going to depend on it.

The need to adapt

Gardner: Given there is so much complexity, so many moving parts, to take into account, how can larger organizations start to evolve to be predictive?

Minahan: Number one is that you can't have the fear of change. You need to set that aside. At the outset of this discussion, we talked about changes all around us, whether it's externally, with the new empowered consumer who is more informed and connected than ever before, or internally with a new millennial workforce that’s eager to look at new organizational structures and processes and collaborate more, not just with other employees but their peers, and even competitors, in new ways.

That's number one, and probably the hardest thing. On top of that, this isn't just a single technology role. You need to be able to embrace a lot of the new technologies out there. When we look at one of the attributes of an enabling platform for the predictive business, it really comes down to a few key areas.

You need the convenience and the agility of the cloud to improve IT resources and use basically everything as a service -- apps, infrastructure, and platform. You can dial up the capabilities, processing power, or resources that you need, and quickly configure and adapt your business processes at the business level, without massive IT or consulting engagements. Then, you have to have the agility to use some of these new-age cloud platforms to create your own differentiated business processes and applications.

The second thing is that it's critically important to gather new insights and productivity, not just from social networks but from business networks -- rich new data sources ranging from real-time market and customer sentiment, through social listening and analytics, to the countless bits and histories of transactional and relationship data available on robust business networks.

Then, you have to manage all of this. You also need to advance your analytical capabilities. You need the power and speed of big data and in-memory analytics platforms, and to exploit new architectures like Hadoop and others, to enable companies to aggregate, correlate, and assess the countless bits of information that are available today and doubling every 18 months.

You have to assess multiple scenarios and determine the best course of action faster than ever before. Then, ultimately, one of the major transformational shifts, which is also a big opportunity, is that you need to be able to assess and deliver all of this information with ease to mobile devices.

This is true whether it's your employees who can engage in a process and get insights where they are in the field or whether it's your customer you need to reach, either across the street or halfway around the globe. So the whole here is greater than the sum of the parts. Big data alone is not enough. Cloud alone is not enough. You need all of these enabling technologies working together and leveraging each other. The next-generation business architecture must marry all of these capabilities to really drive this predictive business.

Next generation

Gardner: So clearly at SAP Cloud you've given this a lot of thought. I think you appreciate the large dimension of this, but also the daunting complexity faced in many companies. I hope in our next discussion, Tim, we can talk a little bit about some of your ideas on the next generation of business services platform and agility capability that gets you into that predictive mode. Maybe you could just give us a quick sense now of the direction and role that an organization like SAP Cloud would play?

Minahan: SAP, as you know, has had a history of helping business continually innovate and drive this next wave of productivity and unlock new value and advantage for the business. The company is certainly building to be this enabling platform and partner for this next wave of business. It's making the right moves both organically and otherwise to enable the predictive business.

If you think about the foundation we just went through and then marry it up against, where SAP is invested and innovated, it's now the leading cloud provider for businesses. More business professionals are using cloud solutions from SAP than from any other vendor.

It's leapt far ahead in the world of analytics and performance with the next generation in-memory platform in HANA. It's the leader in mobile business solutions and social business collaboration with Jam, and as we discussed right here on your show, it now owns the world’s largest and most global business network with the acquisition of Ariba.

That’s more than 1.2 million connected companies transacting over half a trillion dollars worth of commerce, with a new company joining every two minutes to engage, connect, and get more informed to better collaborate. We're very, very excited about the promise of the predictive business and SAP's ability to deliver and innovate on the platform to enable it.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SAP Cloud.


Thursday, September 12, 2013

Thought leader interview: HP's global CISO Brett Wahlin on the future of security and risk

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP. Follow the HP Protect 2013 activities next week, Sept. 16-19.

Join HP’s Chief Information Security Officer (CISO) to learn about how some of the very largest global enterprises like HP are exploring all of their options for doing business safely and continuously.

Brett Wahlin, Vice President and Global CISO at HP, is the next thought leadership guest interview on the HP Discover Performance Podcast Series.

At HP for approximately eight months, Wahlin previously put the security in place after the infamous PlayStation breach while he was the chief security officer (CSO) at Sony Network Entertainment. Prior to that, he was the CSO at McAfee, after a stint as CSO at Los Alamos Laboratory. Years ago, Wahlin got his start doing counterintelligence for the US Army during the Cold War.

Wahlin is interviewed by Paul Muller, Chief Software Evangelist at HP Software, and Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: There's been a lot of discussion about security and a lot of discussion about big data. I'm curious as to how these are actually related.

Wahlin: Big data is quite an interesting development for us in the field of security. If we look back on how we used to do security, trying to determine where our enemies were coming from, what their capacities were, what their targets were, and how we're gathering intelligence to be able to determine how best to protect the company, our resources were quite limited.

Wahlin
We've found that through the use of big data, we're now able to start gathering reams of information that were never available to us in the past. We tend to look at this almost in a modern-warfare type of perspective.

If you're a battlefield commander looking at how to deploy defenses, how would you deploy them, and what would be the targets that your enemies are looking for? You typically then look at gathering intelligence. This intelligence comes through multiple sources, whether electronic or human signals, and you begin to process the intelligence that's gathered, looking for insights into your enemy.

Moving defenses

This could be the enemy’s capabilities, motivation, resourcing, or targets. Then, by that analysis of that intelligence, you can go through a process of moving your defenses, understanding where the targets may be, and adjusting your troops on the ground.

Big data has now given us the ability to collect more intelligence from more sources at a much more rapid pace. As we go through this, we're looking at understanding these types of questions that we would ask as if we were looking at direct adversaries.

We're looking at what these capabilities are, where people are attacking from, why they're attacking us, and what targets they're looking for within our company. We can gather that data much more rapidly through the use of big data and apply these types of analytics.

We begin to ask different questions of the data and, based on the type of questions we're asking, we can come up with some rather interesting information that we never could get in the past. This then takes us to a position where that advanced analytics allows us to almost predict where an enemy might hit.

That’s in the future, I believe. Security is going from the use of prevention, where I'm tackling a known bad thing, to the point where I can use big data to analyze what's happening in real time and then predict where I may be attacked, by whom, and at what targets. That gives me the ability to move the defenses around in such a way that I can protect the high-value items, based on the intelligence that I see coming in through the analytics that we get out of big data.

Muller
Muller: Brett, you talk a lot about the idea of getting in front of the problem. Can you talk a little bit about your point of view on how security, from your perspective as a practitioner, has evolved over the last 10-15 years?

Wahlin: Certainly. That’s a great question. Years ago, we used to be about trying to prevent the known bad from happening. The questions we would ask would always be around, can it happen to us, and if it does, can we respond to it? What we have to look at now is the fact that the question should change. It should be not, "Can it happen to us," but "When is it going to happen to us?" And not, "Can we respond to it," but "How can we survive it?"

If we look at that type of a mind-shift change, that takes us back to the old ways of doing security, where you try to prevent, detect, and respond. Basically, you prevented the known bad things from happening.

This went back to the days of -- pick your favorite attack from years ago. One that I remember is very telling. It was Code Red, and we weren’t prepared for it. It hit us. We knew what the signature looked like and we were able to stop it, once we identified what it was. That whole preventive mechanism, back in the day, was pretty much what people did for security.

Fast forward several years, and you get into that new era of security threats highlighted by attacks like Aurora. Suddenly, we had acronyms flying all over, such as APT -- advanced persistent threat -- and advanced malware. Now, we have attacks that you can't prevent, because you don’t know them. You can't see them. They're zero-days. They're undiscovered malware that’s in your system already.

Detect and respond

That changed the way we approached security. We went from a big focus on prevention -- which becomes a hygiene function -- to a detect-and-respond view, where we're looking for anomalies. We're looking for the unknown. We're beefing up the ability to quickly respond to those when we find them.

The evolution, as we move forward, is to add a fourth dimension to this. We prevent, detect, respond, and predict. We use elements like big data not only to get situational awareness, where we connect the dots within our environment, but also to take it one step further and predict where the next attack might land. As we evolve in this particular area, getting to the point where we can understand and predict will become a key capability that security departments must have in the future.

Gardner: A reminder to our audience, don't forget to follow the HP Protect 2013 conference activities next week, Sept. 16-19.

As I hear you talking about getting more data, being proactive, and knowing yourself as an organization, Brett, it sounds quite similar to what we have been hearing for many years from the management side: know yourself to be able to better maintain performance standards, and therefore be able to quickly remediate when something goes wrong.

Are we seeing a confluence between good IT management practices and good security practices, and should we still differentiate between the two?

Wahlin: As we move into the good management of IT, the good management of knowing yourself, there's a hygiene element that appears within the correlation end of the security industry. One of the elements that we look at, of course, is how to add all this additional complexity and additional capability into security and yet still continue to drive value to the business and drive costs out. So we look for areas of efficiencies and again we will draw many similarities.

As you understand the management of your environments and know yourself, you begin to apply known standards that you really use from a governance perspective. This is where you take care of your hygiene, instead of looking at very elaborate risk equations. You'll have your typical "risk equals threat times vulnerability times impact," and what are my probabilities.
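
The classic equation Wahlin cites is simple enough to sketch. A minimal illustration, with hypothetical assets and scores (the 0-1 scales and the asset names are assumptions for the example, not part of any standard):

```python
# Hedged sketch of "risk = threat x vulnerability x impact",
# used to prioritize where controls are applied first.
# Asset names and factor values are purely illustrative.
def risk_score(threat, vulnerability, impact):
    """Each factor on a 0-1 scale; a higher product means higher priority."""
    return threat * vulnerability * impact

# Hypothetical assets scored by a security team.
assets = {
    "customer-db":   risk_score(threat=0.8, vulnerability=0.6, impact=1.0),
    "intranet-wiki": risk_score(threat=0.5, vulnerability=0.7, impact=0.2),
    "build-server":  risk_score(threat=0.4, vulnerability=0.5, impact=0.6),
}
# Not all controls are applied equally: address the highest-risk assets first.
priority = sorted(assets, key=assets.get, reverse=True)
```

This mirrors the point in the surrounding discussion: controls from a baseline standard get applied in order of risk to the company, not uniformly.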

Known standards

It gets very confusing. So we're trying to cut cost out of that by saying that there are known standards out there. Let's just use them. You can use ISO 27001, NIST 800-53, or even something like PCI DSS. Pick your standard, and that then becomes the baseline of controls that you want to apply. This is knowing yourself.

With these controls, you apply them based on risk to the company. Not all controls are applied equally, nor should they be. As you apply the control based on risk, there is evaluation assessment. Now, I have a known baseline that I can measure myself against.

As you begin to build that known baseline, you understand how well you're doing from a hygiene perspective. These are all the things that you should be doing that give you a chance to understand what your problem areas are.

As you begin to understand those metrics, you can understand where you might have early-warning indicators that would tell you that you might need to pay attention to certain types of threats, risks, or areas within the company.
There are two types of organizations -- those that have been hacked and those that don't know they've been hacked.

There are a lot of similarities as you look at IT infrastructures, server maintenance, and understanding those metrics for early warnings or early indicators of problems. We're trying to do the same with security, where we make it very repeatable. We can make it standards-based and then extend it across the company, of course always based on risk.

Muller: There is one more element to that, Dana, such as the evolution of IT management through, say, a framework like ITIL, where you very deliberately break down the barriers between silos across IT.

Similarly, I increasingly find with security that collaboration across organizations -- the whole notion of general threat intelligence -- forms one of the greatest sources of potential intelligence about an imminent threat. That can come from operational data, or a lot of operational logs, and then sharing that situational awareness between the operations teams is powerful.

At least this works in the experience that I have seen with many of our clients as they improve security outcomes through a heightened sense of what's actually going on, across the infrastructure with customers or users.

One of the greatest challenges we have in moving through the evolution Brett described is that many executives still have the point of view that "I have a little green light on my desktop, and that tells me I don’t have any viruses today, so I can assume that my organization is safe." That is about as sophisticated a view of security as some executives have.

Increased awareness

Then, of course, you have an increasing level of awareness that that is a false sense of security, particularly in the financial services industry and increasingly in many governments, certainly national governments. Just because you haven't heard about a breach today doesn’t mean that one isn't either being attempted or, in fact, succeeding.

One of the great challenges we have is just raising that executive awareness that a constant level of vigilance is critical. The other place where we're slowly making progress is that it's not necessarily a bad thing to share negative experiences.

Wahlin: Absolutely. We look at the inevitability of the fact that networks are penetrated, and they're penetrated on a daily basis. There's a difference between having unwanted individuals within your network and having the data actually exfiltrated and having a reportable breach.

As we understand what that looks like and how the adversaries are actually getting into our environment, that type of intelligence sharing typically will happen amongst peers. But the need for the ability to actually share and do so without repercussions is an interesting concept. Most companies won't do it, because they still have that preconceived notion that having somebody in your environment is binary -- either my green light is on, and it's not happening, or I've got the red light on, and I've got a problem.

In fact, there are multiple shades of gray happening in there, and the ability to share activities that, while they may not be detrimental, are indicators that you have an issue going on and need to be paying attention to it, is key when we actually start pooling intelligence.

I've seen these logs. I've seen this type of activity. Is that really an issue I need to pay attention to or is that just an automated probe that’s testing our defenses? If we look at our environment, the size of HP and how many systems we have across the globe, you can imagine that we see that type of activity on a second-by-second basis.

We have to understand which ones of these we need to pay attention to and have the ability to not only correlate amongst ourselves at the company, but correlate across an industry.

HP may be attacked. Other high-tech companies may also be attacked. We'll get supply-chain attacks. We look at various types of politically motivated attacks. Why are they hitting us? So again, it's back to the situational awareness. Knowing the adversary and knowing their motivations, that data can be shared. Right now, it's usually in an ad-hoc way, peer-to-peer, but definitely there's room for some formalized information sharing.

Information sharing

Muller: Especially when you consider the level of information sharing that goes on in the cybercrime world. They run the equivalent of a Facebook almost. There is a huge amount of information sharing that goes on in that community. It's quite well structured. It's quite well organized. It hasn’t necessarily always been that well organized on the defense side of the equation. I think what you're saying is that there's opportunity for improvement.

Wahlin: Yes, and as we look at that opportunity, the counterintelligence person in me always has to stand up and say, "Let's make sure that we're sharing it and we understand our operational security, so that we're sharing that in a way that we're not giving away our secrets to our adversaries." So while there is an opportunity, we also have to be careful with how we share it.

Muller: You, of course, wind up in the situation where you could be amplifying bad information as well. If you were paranoid enough, you could assume that the adversary is actually deliberately planting some sort of distraction at one corner of the organization in order to get to everybody focused on that, while they quietly sneak in through the backdoor.

Wahlin: Correct.

Gardner: Brett, returning to this notion of actionable intelligence and the role of big data as an important tool, where do you go for the data? Is it strictly the systems, the systems log information? Is there an operational side to that that you tap more than the equipment, more than the behaviors? What are the sources of data that you want to analyze in order to be better at security?

Wahlin: The sources that we use are evolving. We have our traditional sources, and within HP, there is an internal project that is now going into alpha. It's called Project HAVEn and that’s really a combination of ArcSight, Vertica, and Autonomy, integrating with Hadoop. As we build that out and figure out what our capabilities are to put all this data into a large collection and being able to ask the questions and get actionable results out of this, we begin to then analyze our sources.

The sources are obvious when we look at it from a historical operations and security perspective. We have all the log files at the perimeter. We have application logs and network infrastructure logs, such as DNS, Active Directory, and other types of LDAP logs.

Then you begin to say, what else can we throw in here? That’s pretty much covered in a traditional ArcSight type of implementation. But what happens if I start throwing in things such as badge access or in-and-out card swipes? How about phone logs? Most companies are running IP phones, which will have logs. So what if I throw that into the equation?

What if I go outside to social media and begin to throw things such as Twitter or Facebook feeds into this equation? What if I start pulling in public searches for government-type databases, law enforcement databases, and start adding these? What results might I get based on all that data commingling?
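The kind of cross-source correlation Wahlin speculates about here can be sketched very simply. The following Python toy, with invented events and field names rather than any real ArcSight or HAVEn schema, flags a user who badges into a building and also shows up on the VPN minutes later:

```python
from datetime import datetime, timedelta

# Hypothetical toy events -- sources and field names are illustrative only.
badge_swipes = [
    {"user": "alice", "time": datetime(2013, 9, 16, 8, 55), "door": "HQ-lobby"},
    {"user": "bob",   "time": datetime(2013, 9, 16, 9, 10), "door": "HQ-lobby"},
]
vpn_logins = [
    {"user": "alice", "time": datetime(2013, 9, 16, 9, 5), "src": "198.51.100.7"},
]

def conflicting_sessions(badges, vpns, window=timedelta(hours=1)):
    """Flag users who badged into a building and also started a remote VPN
    session within the same window -- a simple cross-source correlation."""
    flagged = []
    for v in vpns:
        for b in badges:
            if b["user"] == v["user"] and abs(b["time"] - v["time"]) <= window:
                flagged.append((v["user"], b["door"], v["src"]))
    return flagged

print(conflicting_sessions(badge_swipes, vpn_logins))
# alice badged into HQ yet also appears on the VPN minutes later
```

Commingling just two sources already yields a question no single log could answer; each additional source multiplies the correlations available.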

We're not quite sure at this point. We've added many of these sources as we start to look and ask questions and see from which areas we're able to pull the interesting correlations amongst different types of data to give us that situational awareness.

There's still much to be done here, much to be discovered, as we understand the types of questions that we should be asking. As we look at this data and the sources, we also look at how to create that actionable intelligence.

Disparate sources

The analysts that we typically use in a security operations center are very used to ArcSight: I ingest the logs and I see correlations; they're timeline-driven. Now we begin to ask questions of multiple types of data sources that are very disparate in their information, and that takes a different type of analyst.

Not only do we have different types of sources, but we have to have different types of skill sets to ask the right questions of those sources. This will continue to evolve. We may or may not find value as we add sources. We don’t want to add a source just for the heck of it, but we also want to understand that we can get very creative with the data as it comes together.

Muller: There are actually two things that I think are important to follow up on here. The first is that, as is true of every analytics conversation I'm having today, everyone talks about the term "data scientist." I prefer the term "data artist," because there's a certain artistry to working out what information feeds I want to bring in.

The other element is that, once we've got that information, one of the challenges is that we don't want to add to the overhead or the burden of processing that information. So it's being able to increasingly apply intelligence to, as Brett talked about, the mechanistic patterns that you can determine with traditional security information and event management (SIEM) solutions. Those solutions are rather mechanistic; in other words, you apply a set of logical rules to them.

Increasingly, when you're looking at behavioral activities, rules may not be quite as robust as looking at techniques such as information clustering, where you look for hotspots of what seem like unrelated activities at first, but turn out later to be related.
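Muller's contrast between rule-driven correlation and hotspot clustering can be illustrated in a few lines. A minimal, illustrative Python sketch that clusters only on time proximity (a real implementation would cluster on many more features than timestamps):

```python
def cluster_events(timestamps, gap=300):
    """Group event timestamps (in seconds) into clusters whenever the gap
    between consecutive events exceeds `gap` -- a minimal stand-in for the
    hotspot clustering of seemingly unrelated activities."""
    clusters, current = [], []
    for t in sorted(timestamps):
        if current and t - current[-1] > gap:
            clusters.append(current)
            current = []
        current.append(t)
    if current:
        clusters.append(current)
    return clusters

# Several unrelated-looking events land within minutes of each other:
events = [100, 220, 340, 7000, 7100, 20000]
hotspots = [c for c in cluster_events(events) if len(c) >= 2]
print(hotspots)  # [[100, 220, 340], [7000, 7100]]
```

No rule was written for these events; the hotspots emerge from the data itself, which is the point of the clustering approach.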

There's a whole body of science in the area of crime investigation that we've applied to cybercrime, using some of those techniques, Autonomy for example, to uncover fraud in the financial services market. The automation behind those techniques is increasingly being applied to the big-data problem that security is starting to deal with.

Gardner: You were describing this opportunity to bring so much different information together, but you also might have unintended consequences. Have you plumbed that at all?

Wahlin: Yes. As we further evaluate these data sources, the insight from using big data, not only for security but also from more of a business intelligence (BI) perspective, has been well-documented. Our focus has really been on trying to determine the patterns and characteristics of usage.

Developing patterns

While we look at it from a purely security mindset, where we try to develop patterns, it takes on a counterintelligence way of understanding how people go, where people go, and what they do. As people try to be unique, they tend to fall into patterns that are individual and specific to themselves. Those patterns may play out over weeks or months, but they're there.
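This pattern-of-life idea, that individuals fall into patterns specific to themselves, can be illustrated with a tiny anomaly check against a person's own history. A hypothetical Python sketch (the data and the threshold are invented for illustration):

```python
from statistics import mean, stdev

# Hypothetical daily building-arrival times (hours past midnight) for one
# badge holder over two weeks -- illustrative data, not real logs.
arrivals = [8.9, 9.0, 9.1, 8.8, 9.0, 9.2, 8.9, 9.1, 9.0, 8.8]

def is_anomalous(history, new_value, k=3.0):
    """A crude pattern-of-life check: flag an event more than k standard
    deviations away from this individual's own historical mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(new_value - mu) > k * sigma

print(is_anomalous(arrivals, 9.05))  # False: within this person's pattern
print(is_anomalous(arrivals, 3.0))   # True: a 3 a.m. arrival stands out
```

The baseline here is the individual, not the population, which is what makes the patterns "individual and specific to themselves."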

Right now, a lot of times, we'll be asked as a security organization to provide badge swipes as people go in and out of buildings. Can we take that even further and begin to understand where efficiencies would come from, based on the behaviors and characteristics of workforces? Can we divide that by business unit or geography to try to determine the best use of limited resources across companies? This data could be used in those areas.

The unintended consequence that you brought up, as we look at this and begin to come up with patterns of individuals, is that it begins to reveal a lot about how people interact with systems -- what systems they go to, how often they do things -- and that can be used in a negative way. So there are privacy implications that come right to the forefront as we begin to identify folks.
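One common mitigation for the privacy implications raised here is to pseudonymize identifiers before events ever enter the analytics store. A minimal Python sketch using a keyed HMAC; the key name and identifiers are illustrative, not part of any HP system described in this discussion:

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me-regularly"  # hypothetical key, kept out of the data lake

def pseudonymize(identifier, key=SECRET_KEY):
    """Replace a direct identifier with a keyed HMAC token, so analysts can
    still correlate one person's events across sources without seeing who
    the person actually is."""
    return hmac.new(key, identifier.encode(), hashlib.sha256).hexdigest()[:16]

alice_events = [pseudonymize("alice@example.com") for _ in range(3)]
print(len(set(alice_events)) == 1)  # True: one person maps to one stable token
print(pseudonymize("alice@example.com") != pseudonymize("bob@example.com"))
```

Because the mapping is keyed rather than a plain hash, re-identification requires the key, and rotating the key breaks long-term linkability.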

That will be an interesting discussion going forward, as the data comes out and patterns start to unfold, becoming uniquely identifiable to cities, buildings, and individuals. What do we do with those unintended consequences?

It's almost going to be a two-step: we make a couple of steps forward in progress and technology, then we have to deal with these issues, and that might take us a step back. This area is definitely evolving, and these unintended consequences could be very detrimental if not addressed early.

We don't want to completely shut down these types of activities based on privacy concerns or other legalities, when we could potentially solve those problems systematically as we move forward with investigating the use of these technologies.

Muller: The question we always need to bear in mind here, as Brett says, is: what are the potential unintended consequences? How can we get in front of those potential misuses early? How can we be vigilant about those misuses and put good governance in place ahead of time?

There are three approaches. One is to bury your head in the sand and pretend it will never happen. The second is to avoid adopting a technology at all for fear of those unintended consequences. The third is to be aware of them, constantly look for breaches of policy and of good governance, and be able to correct for those if and when they do occur.

Closed-loop cycle

Gardner: What is HP doing that will set the stage and perhaps help others learn how to get started in terms of better security and better leveraging of big data as a tool for better security?

Wahlin: As HP progresses on the predictive security front, we're one of, I believe, two companies that are actually trying to understand how best to use HAVEn as we begin the analytics to determine the appropriate usage of the data at our fingertips. That takes a predictive capability that HP will be building.

We've created something called the Cyber Intelligence Center. The whole intent of that is to develop the methodologies around how the big data is used, the plumbing, and then the sources from which we actually create the big data and how we move logs into it. That's very different from what we're doing today with traditional ArcSight loggers and ESMs. There are a lot of mechanics that we have to build for that.
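The "plumbing" described here, moving disparate logs into one big-data store, starts with normalizing each source into a common event schema. A minimal Python sketch; the raw log formats and field names below are invented for illustration, not ArcSight or HAVEn schemas:

```python
import json

def normalize_dns(line):
    """Map one raw DNS log line into a common event schema -- the kind of
    plumbing step needed before disparate logs can land in one store."""
    ts, client, query = line.split()
    return {"source": "dns", "time": ts, "actor": client, "detail": query}

def normalize_badge(line):
    """Map one raw badge-swipe line into the same common schema."""
    ts, user, door = line.split()
    return {"source": "badge", "time": ts, "actor": user, "detail": door}

raw = [
    ("dns",   "1379314800 10.0.0.5 evil.example.net"),
    ("badge", "1379314860 alice HQ-lobby"),
]
normalizers = {"dns": normalize_dns, "badge": normalize_badge}
events = [normalizers[kind](line) for kind, line in raw]
print(json.dumps(events, indent=2))
```

Once everything shares one schema, the same analytic questions can be asked of a DNS query and a door swipe, which is what makes the cross-source correlation possible downstream.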

Then, as we move out of that, we begin to look at the actual actionable intelligence creation to use the analytics. What questions should we ask? Then, when we get the answer, is it something we need to do something about? The lagging piece of this would be the actual creation of agile security. In some places, we even call it mobile security, and it's different than mobility. It's security that can actually move.

If you look at war analogies, back in the day you had columns of men with rifles, and they weren't that mobile. Then you got mechanized infantry, and other technologies came online, airplanes and such, and warfare became much more mobile. What's the equivalent of that in the cybersecurity world, and how do we create it?

Right now, it's quite difficult to move a firewall around. You don’t just unplug or re-VLAN a network. It's very difficult. You bring down applications. So what is the impact of understanding what's coming at you, maybe tomorrow, maybe next week? Can we actually make a infrastructure such that it can be reconfigured to not only to defend against that attack, but perhaps even introduce some adversarial confusion.

I've done my reconnaissance, and it looks like this. I come at it tomorrow, and it looks completely different. That will set back the adversary's kill chain quite a bit, because most of the kill chain is actually spent figuring out where am I, what do I have, where are the assets located, and doing reconnaissance through the network.
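This "adversarial confusion" idea, an infrastructure that looks different each day, resembles what is now called moving-target defense. A toy Python sketch, entirely hypothetical and not any HP mechanism, derives a service's port for each time epoch from a shared secret, so defenders can predict where the service is while an adversary's stale reconnaissance cannot:

```python
import hashlib

def port_for_epoch(service, epoch, secret=b"shared-secret"):
    """Derive this epoch's port for a service from a shared secret -- a toy
    moving-target scheme: legitimate clients recompute the port, but a scan
    from a previous epoch points at the wrong place."""
    digest = hashlib.sha256(secret + service.encode() + str(epoch).encode())
    return 20000 + int(digest.hexdigest(), 16) % 20000  # ports 20000-39999

today = port_for_epoch("app-frontend", epoch=100)
tomorrow = port_for_epoch("app-frontend", epoch=101)
print(today, tomorrow)  # the service "moves" between epochs
print(today == port_for_epoch("app-frontend", epoch=100))  # True: deterministic
```

Real moving-target defenses reshuffle far more than ports (addresses, VLANs, even application topology), but the principle is the same: make yesterday's reconnaissance worthless.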

So there are a lot of interesting things that we can do as we come to this next step in the evolution of security. At HP, we're trying to develop that at scale. Being the large company that we are, we get the opportunity to see an enormous amount of data that we wouldn't see if we were another company.

Numerous networks

Gardner: Paul, it almost sounds as if security is an accelerant to becoming a better, more data-driven organization, which will pay dividends in many ways.

Muller: I completely agree with you. Information security, and the arms race it quite literally is, acts as a forcing function for many organizations. It would be hard to say this without a sense of chagrin, but the great part is that there are technologies, ArcSight Logger for example, being developed as a result of this arms race.

Those technologies can now be applied to business problems, gathering real-time operational technology data, such as seismic events, Twitter feeds, and so forth, and incorporating those back in for business and public-good purposes. Just as the space race threw up a whole bunch of technologies like Teflon or silicone adhesives that we use today, the security arms race is generating some great byproducts that are being used by enterprises to create value, and that's a positive thing.

Wahlin: The analogy of the space race is perfect as you look at security maturation within an environment. A lot of the things that we're doing, whether it's understanding the environment, creating the operational metrics around it, or pushing to get in front of the adversaries and create an environment that is extremely agile, are going to throw off a lot of technology innovations.

It's also going to throw off some challenges to the IT industry and how things are put together. That's going to force typically sloppy operations ("I'll just throw this together, I won't complete an acquisition properly, I don't document, I don't understand my environment") to clean up as we go through those processes.

The confusion and the complexity within an environment are directly opposed to creating a sense of security. As we create more secure environments, environments that are capable of detecting anomalies within them, you have to put the hygienic pieces in place. You have to create the technologies that will allow you to leapfrog the adversaries. That's definitely going to be a driver for business efficiencies, as well as for technology and innovation, as it comes along.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP. Follow the HP Protect 2013 activities next week, Sept. 16-19.
