Friday, December 6, 2013

As big data pushes enterprises into seeking more data types, standard and automated integrations far outweigh coded connections, says panel

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Scribe Software.

Creating big-data capabilities and becoming a data-driven organization rank near the top of surveys of the most pressing business imperatives as we approach 2014.

These business-intelligence (BI) trends require better access and automation across data flows from a variety of sources and formats, and from many business applications.

The next BriefingsDirect panel discussion then focuses on ways that enterprises are effectively harvesting data in all its forms, and creating integration that fosters better use of big data throughout the business process lifecycle.

Here now to share their insights into using data strategically by exploiting all of the data from all of the applications across business ecosystems, we're joined by Jon Petrucelli, Senior Director of the Hitachi Solutions Dynamics CRM and Marketing Practice, based in Austin, Texas; Rick Percuoco, Senior Vice President of Research and Development at Trillium Software in Bedford, Mass.; and Betsy Bilhorn, Vice President of Product Management at Scribe Software in Manchester, N.H.

The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Scribe Software is a sponsor of BriefingsDirect podcasts.]

Here are some edited excerpts:
Gardner: Big-data analytics platforms have become much more capable, but we still come back to the same problem of getting to the data, putting it in a format that can be used, directing it, managing that flow, automating it, and then, of course, dealing with the compliance, governance, risk, and security issues.

Is that the correct read on this, that we've been able to move quite well in terms of the analytics engine capability, but we're still struggling with getting the fuel to that engine?

Bilhorn: I would absolutely agree with that. When we talk about big data, big analytics, and all of that, it has moved much faster than our ability to capture those data sources. Some of the systems that we want to get the data from were never built to be open. So there is a lot of work just to get the data out of them.

The other thing a lot of people like to talk about is the application programming interface (API) economy: "We will have an API, and you can get at all this great stuff through web services." But what we've seen, in building a platform ourselves and having that connectivity, is that not all of those APIs are created equal.

The vendors who are supplying this data, or these data services, are kind of shooting themselves in the foot and making it difficult for the customer to consume them, because the APIs are poorly written and very hard to understand, or they simply don’t have the performance to even get the data out of the system.

On top of that, you have other vendors with certain terms of service, where they cut off the service or may charge you for it. So while they talk about how great it is that they can do all these analytics, when it comes to getting the data in there, there are just so many showstoppers on a number of fronts. It's very, very challenging.
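To make Bilhorn's point about uneven APIs concrete, here is a minimal sketch of a client that backs off and retries when a vendor throttles it. The endpoint is hypothetical, and real services document their own status codes and quotas; this only illustrates the pattern.

```python
import time
import requests  # widely used third-party HTTP client

API_URL = "https://api.example-vendor.com/v1/contacts"  # hypothetical endpoint

def fetch_with_backoff(url, max_retries=5):
    """Fetch JSON, backing off exponentially when the vendor throttles us."""
    delay = 1.0
    for _ in range(max_retries):
        resp = requests.get(url, timeout=30)
        if resp.status_code == 429:  # HTTP 429: rate-limited by terms of service
            # Honor the server's Retry-After hint when one is provided.
            delay = float(resp.headers.get("Retry-After", delay))
            time.sleep(delay)
            delay *= 2  # exponential backoff between attempts
            continue
        resp.raise_for_status()
        return resp.json()
    raise RuntimeError("gave up after %d rate-limited attempts" % max_retries)

# Usage: records = fetch_with_backoff(API_URL)
```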

Gardner: Customer relationship management (CRM), I imagine, paved the way, where we're trying to get a single view of the customer across many different types of data and activities. But now, we're pushing the envelope to a single view of the patient across multiple healthcare organizations, or a single view of a process that has a cloud part, an on-premises part, and an ecosystem supply-chain part.

It seems as if we've moved into more complexity here. Jon Petrucelli, how are the systems keeping up with these complex demands, these expanding concentric circles of data inclusion, if you will?

Petrucelli: That's a huge challenge. We see integration as critical to achieving high levels of adoption and return on investment (ROI). Adoption by the users, and ultimately ROI for the business, depends on it, because integration is like gas in a sports car. Without the gas, it's not going to go.

What we do for a lot of customers is intentionally build integration using Scribe, because we know that if we can take them down from five different interfaces to one, they get a 360-degree view of the customer that's calling them or that they're about to call on.

We want to give them one user experience, one user interface, to keep users productive -- especially sales reps in the CRM world and customer service reps. You don't want them tabbing between a bunch of different systems. So we bring them into one interface, and with a platform like Microsoft CRM, they can use their interface of choice.

They can move from a desktop, to a laptop, to a tablet, to a mobile device, and they're seeing one version of the truth, because they're all looking through windows into the same realm. And what's tunneled into that realm comes through pipes that are Scribe.

They're really going to like that. Their adoption is going to be higher and their productivity is going to be higher. If you can raise the productivity of the users, you can raise the top line of the company when you're talking about a sales organization. So integration is the key to driving high levels of adoption, ROI, and productivity.

We used to do custom software integration. With a lot of our customers we see a lot of custom .NET code, or other codesets such as Java, that do the integration. They used to do that, and we still see some bigger organizations that are stuck on that stuff. That's a way to paint yourself into a corner and make yourself captive to some developer.

Percuoco: You do have to watch out for custom APIs. Trillium has a connectivity business as does Scribe.

As long as you stick with industry-standard handshaking methods, like XML or JSON or web services and RESTful APIs, then usually you can integrate packages fairly smoothly. You really need to make sure that you're using industry-standard hand-offs for a lot of the integration methods. You have four or five different ways to do that, but it’s pretty much the same four or five.
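To illustrate Percuoco's point: when a system exposes plain JSON over HTTPS, a consumer needs nothing proprietary. A minimal sketch using only the Python standard library, against a hypothetical endpoint:

```python
import json
import urllib.request

# Hypothetical standards-based endpoint: plain HTTPS returning JSON.
URL = "https://integration.example.com/api/orders?since=2013-12-01"

with urllib.request.urlopen(URL, timeout=30) as resp:
    orders = json.load(resp)  # industry-standard format, no vendor SDK needed

for order in orders:
    # Systems agree on the hand-off format and field names,
    # not on each other's implementation details.
    print(order["id"], order["status"])
```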

Petrucelli: We highly recommend that people move away from that and go to a platform-based middleware application like Scribe. Scribe is our preferred platform middleware, because that makes it much more sustainable and changeable as you move forward. Inevitably, in integration, someone is going to want to change something later on.

When you have a custom-code integration, someone has to actually crack open that code, take it offline to make a change, and then redeploy the updated code -- and it's often all just pure spaghetti code.

With a platform like Scribe, it's very easy to pick up, and industry-standard training is available online. You're not held hostage anymore. It's a graphical user interface (GUI) -- literally drag-and-drop mappings and interlock points -- and that's a really nice capability in their Scribe Online service. Even children can do an integration. It's like the puzzle-piece teaching technique that was developed at Harvard or MIT: if it doesn't work, the puzzle pieces don't fit.

They've done a really amazing job of making integration for the rest of us, not just for developers. We highly recommend that people take a look at that, because it brings the power back to the business and takes it away from a single developer, a small development shop, or an outsourced developer.
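The contrast with hand-coded spaghetti is that middleware treats mappings as data rather than code. A minimal sketch of that idea (the field names are invented for illustration; the real tools express this through a GUI):

```python
# A declarative field mapping: to change the integration, edit the
# mapping, not the program.
TICKETING_TO_CRM = {
    "cust_name":  "fullname",
    "seat_pref":  "seatingpreference",
    "email_addr": "emailaddress1",
}

def transform(record, mapping):
    """Rename source fields to target fields; unmapped fields are dropped."""
    return {dst: record[src] for src, dst in mapping.items() if src in record}

source = {"cust_name": "Jon Petrucelli", "seat_pref": "courtside",
          "email_addr": "jon@example.com", "internal_flag": 1}
print(transform(source, TICKETING_TO_CRM))
# {'fullname': 'Jon Petrucelli', 'seatingpreference': 'courtside',
#  'emailaddress1': 'jon@example.com'}
```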

Gardner: What else has been holding businesses back from gaining access to the most relevant data?

Bilhorn: One is the explosion in the different types and kinds of data. Then, you start mixing that with legacy systems that have always been somewhat difficult to get to. Bringing those all together and making sense of that are the two biggest ones. Those have been around for a long, long time.

That problem is getting exponentially harder, given the variety of those data sources, and then all the different ways to get into those. It’s just trying to put all that together. It just gets worse and worse. When most people look at it today, it almost seems somewhat insurmountable. Where do you even start?

Legacy systems

Petrucelli: We work with a lot of large enterprise, global-type customers. To build on what Betsy said, they have a lot of legacy systems. There's a lot of data captured inside those legacy systems, and those systems were not designed with an open architecture or with sharing their data with other systems in mind.

When you're dealing with modern systems, it's definitely getting easier. When you deal with middleware software like Scribe, especially with Scribe Online, it gets much easier. But the biggest thing that we encounter in the field with these larger companies is a lack of understanding of modern middleware and integration, and a lack of understanding of what the business needs. Does it really need real-time integration?

Some of our customers definitely have a good understanding of what the business wants and what their customers want, but usually the evaluator, decision-maker, or architect doesn’t have a strong background in data integration.

It's really a people issue. It's an educational issue of helping them understand that this isn't as hard as they think it is. Let's scope it down. Let's understand what the business really needs. Usually, that becomes something a lot more realistic, pragmatic, and easier to do than they originally anticipated going into the project.

In the last 5 to 10 years, we've seen data integration get much easier to do, and a lot of people just don't understand that yet. There's a lack of understanding and education around data integration and how to exploit this big-data proliferation that's happening. A lot of users don't quite understand how to do that. It's the people side of it, and that's the biggest challenge for us.

Gardner: Rick Percuoco at Trillium, tell us what you are seeing when it comes to the impetus for doing data integration. Perhaps in the past, folks saw this as too daunting and complex or involved skill sets that they didn't have. But it seems now that we have a rationale for wanting to have a much better handle on as much data as possible. What's driving the need for this?

Percuoco: Certain companies, by their nature, deal with volume data. Telecom providers or credit-card companies are being forced into building these large data repositories, because their current business needs support that anyway.

So they’re really at the forefront of most of these. What we have are large data-migration projects. There are disparate sources within the companies, siloed bits of information that they want to put into one big-data repository.

Mostly, it's used from an analytics or BI standpoint, because now you have the capability of using big-data SQL engines to link and join across disparate sources. You can ask questions and mine information that you never could before.

The practice of extract, transform, load (ETL) will definitely be affected by the large data volumes, as you can't move the data like you used to in the past. Also, governance is becoming a stronger force within companies, because as you load many sources of data into one repository, it's easier to have some kind of governance capability around it.

Higher scales

Trillium Software has always been a data-quality company. We have a fairly mature and diverse platform for data that you push through. For analytics, for risk and compliance, or for anything where you use your data to calculate risk ratios or build the models by which you run your business, the quality of your data is very, very important.

If you're using data that comes in from multiple channels to make decisions in your business, then obviously data quality -- making that data as accurate as it can be by matching it against structured sources -- makes a huge difference in whether you'll be making the right decisions or not.

With the advent of big data and the volume of more and varied unstructured data, the problem of data quality is on steroids now. You have a quality issue with your data. If anybody who works in any company is really honest with themselves and with the company, they see that the integrity of the data is a huge issue.

As the sources of data become more varied and they come from unstructured data sources like social media, the quality of the data is even more at risk and in question. There needs to be some kind of platform that can filter out the chatter in social media and the things that aren't important from a business aspect.

Gardner: Betsy Bilhorn, tell us about Scribe Software and how what Trillium and Hitachi Solutions are doing helps data management.

Bilhorn: We look at ourselves as the proverbial PVC pipe, so to speak, to bring data around to various applications and the business processes and analytics. Where folks like Hitachi leverage our platform is in being able to make that process as easy and as painless as possible.

We want people to get value out of their data, increase the pace of their business, and increase the value that they’re getting out of their business. That shouldn’t be a multi-year project. It shouldn’t be something that you’re tearing your hair out over and running screaming off a bridge.

As easy as possible

Our goal here at Scribe is to make that data integration and to get that data where it needs to go, to the right person, at the right time, as easily and simply as possible for companies like Hitachi and their clients.

Working with Trillium, one of the great things about that partnership is that it addresses the problem of garbage in/garbage out. Trillium provides the platform by which not only can you get your data where you need it to go, but you can also have it cleansed and deduplicated. You can have better-quality data as it moves around your business. When you look at those three aspects together, that's where Scribe sits in the middle.

Gardner: Let's talk about some examples of how organizations are using these approaches, tools, methods, and technologies to improve their business and their data value. I know that you can't always name these organizations, but let's hear a few examples of named or unnamed organizations that are doing this well, doing this correctly, and what it gets for them.

Petrucelli: One that pops to mind, because I just was recently dealing with them, is the Oklahoma City Thunder NBA basketball team. I know that they’re not a humongous enterprise account, but sometimes it's hard for people to understand what's going on inside an enterprise account.

Most people follow and are aware of sports. They have an understanding of buying a ticket, being a season ticket holder, and what those concepts are. So it's a very universal language.

The Thunder had a problem where they were using a ticketing system that would sell the tickets, but they had very little CRM capability. All the ticketing was done to the industry standard for ticketing, and that was great, but there was no way to track, for example, somebody's preferences. You'd have this record of Jon Petrucelli, who buys season tickets and comes to certain games. But that's it; that's all you'd have.

They couldn't track who my favorite player was, how many kids I have, whether I'm married, where I live, what my blog is, or what my Facebook profile is. People are very passionate about their sports team. They want to really be associated with them, and they want to be connected with those people. And the sports teams really want to do that, too.

So we had a great project, an award-winning project -- it has won a Gartner award and Microsoft awards. We helped the Oklahoma City Thunder leverage this great amount of rich interaction data, transactional data, the ticketing data about every seat fans sat in and every purchase they made.

Rich information

That’s a cool record and that might be one line in the database. Around that record, we’re now able to wrap all the rich information from the internet. And that customer, that season ticket holder, wants to share information, so they can have a much more personalized experience.

Without Scribe and without integration, we couldn't have done that. With them, we could easily deploy Microsoft CRM and integrate it into the ticketing system, so all this data was in one spot for the users. It was a true win-win-win, because not only did the Oklahoma City Thunder have a much more productive experience, but their season-ticket account managers could now call on someone and see their preferences. They could see everything they needed to track about them and see all of their ticketing history in one place.

And they could see if they’re attending, if they are not attending, everything about what's going on with that very high-value customer. So that’s a win for them. They can deliver personalized service. On the other end of it, you have the customer, the season ticket holder and they’re paying a lot of money. For some of them, it’s a lifelong dream to have these tickets or their family has passed them down. So this is a strong relationship.

Especially in this day and age, people expect a personalized touch and a personalized experience, and with integration, we were able to deliver that. With Scribe and the integration with the ticketing system, putting that all in Microsoft CRM makes it real time, accessible, and insightful.

It's not just data anymore. It's real-time insights coming out of the system. They could deliver a much better user experience or customer experience, and they have been benchmarked against the best customer organizations in the world. The Oklahoma City Thunder are now rated as having the top professional sports fan experience. Of all professional sports, they have the top fan experience -- and it's directly attributable to the CRM platform and the data being driven into it through integration.

Percuoco: I've seen a couple of pretty interesting use cases. One of them is with one of our technical partnerships. They have a data platform also, where they use behavioral account-churn models. It's very interesting in that they take multiple feeds of different data -- social media data, call-center data, data that was entered into a blog from a website. As Jon said, they create a one-customer view of all of those disparate sources of data, including social media, and then they map behavioral churn models for different vertical industries.

In other words, before someone churns their account or gets rid of their account within a particular industry -- like insurance, for example -- what steps do they go through before they churn their account? Do they send an e-mail to someone? Do they call the call center? Do they send social media messages? Then, through statistical analysis, they build these behavioral churn models.

They put data through these models of transactional data, and when certain accounts or transactional data fall out at certain parts, they match that against the strategic client list and then decide what to do at the different phases of the account churn model.

I've heard of companies, large companies, saving as much as $100 million in account churn by basically understanding what the clients are doing through these behavioral churn models.
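A toy sketch of the idea behind such a model: score accounts on recent behavior and flag the ones that look like past churners. The features, data, and threshold here are invented; a real model would be trained on historical accounts with known outcomes.

```python
# Toy behavioral churn scorer; assumes scikit-learn is installed.
from sklearn.linear_model import LogisticRegression

# Features per account: [support calls, complaint emails, negative social posts]
history = [
    [0, 0, 0], [1, 0, 0], [0, 1, 0],   # accounts that stayed
    [4, 2, 3], [5, 1, 2], [3, 3, 1],   # accounts that churned
]
churned = [0, 0, 0, 1, 1, 1]

model = LogisticRegression().fit(history, churned)

# Score a live account showing early warning behavior.
risk = model.predict_proba([[3, 1, 2]])[0][1]
if risk > 0.5:
    print("flag for retention outreach, churn risk %.0f%%" % (risk * 100))
```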

Sentiment analysis

Probably the most prevalent use case that I've seen with our clients is sentiment analysis. Most people are looking at social media data, seeing what people are saying about them on social media channels, and then using all kinds of creative techniques to try to match those social media personas to client lists within the company, to see who is saying what about them.

Sentiment analysis is probably the biggest use case that I've seen, but the account churn with the behavioral models was very, very interesting, and the platform was very complex. On top, it had a predictive analytics engine with about 80 different modeling graphs, and it also had some data visualization tools. So it was very easy to create charts and graphs, and it was actually pretty impressive.
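For readers who want to see the core idea, here is a deliberately tiny lexicon-based sentiment scorer. Production systems use trained models, far richer lexicons, and the hard persona-matching step Percuoco mentions; the handles and posts below are made up.

```python
# Toy lexicon-based sentiment scorer.
POSITIVE = {"great", "love", "awesome", "fast"}
NEGATIVE = {"terrible", "hate", "slow", "broken"}

def sentiment(text):
    """Positive minus negative word hits; >0 positive, <0 negative."""
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

posts = [
    ("@fan_one", "I love the new service, support was fast"),
    ("@fan_two", "App is broken again, this is terrible"),
]
for persona, text in posts:
    # Matching the persona back to a client record is the hard part;
    # here we just report a score per handle.
    print(persona, sentiment(text))
```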

Gardner: Betsy, do you have any examples that also illustrate what we're talking about when it comes to innovation and value around data gathering, analytics, and business innovation?

Bilhorn: I'm going to put a little bit of a twist on that. We have a recent customer, one of the top LED lighting franchisors in the United States, and they had a bit of a different problem. They have about 150 franchises out there, and they were all disconnected.

So, in the central office, I can't see what my individual franchises are doing and I can't do any kind of forecasting or business reporting to be able to look at the health of all my franchises all over the country. That was the problem.

The second problem was that they had standardized on the NetSuite platform and wanted all of their franchises to use it. But for the individual franchise owner, NetSuite was a little too heavy, and they said overwhelmingly that they wanted QuickBooks.

This customer came to us and said, “We have a problem here. We can't find anybody to integrate QuickBooks to our central CRM system and we can't report. We’re just completely flying blind here. What can you do for us?”

Via integration, we were able to satisfy that customer requirement. Their franchises can use QuickBooks, which is easy for them, and with all of that information synchronized back from the franchises into the central CRM, they were able to do all kinds of analytics, reporting, and dashboarding on the health of the whole business.

The other side benefit, which also makes them very competitive, is that they're able to add franchises very, very quickly. They can have an entire IT system up and running in 30 minutes, and it's all integrated. So the franchisee is ready to go. They have everything there. They can use a system that's easy for them to use, and this company has them up and is getting their data right away.

Consistency and quality

So that's a little bit different. It's not big data or social, but it's a problem that a lot of businesses face: how do I even get these systems connected so I can run my business? This rapid, repeatable model for this particular business is pretty new. In the past, we've seen a lot of people try to wire things up with custom code, or everything is ad hoc. They're able to stand up full IT systems in 30 minutes, every single time, over and over again, with a high level of consistency and quality.

Gardner: Well we have to begin to wrap it up, but I wanted to take a gauge of where we are on this. It seems to me that we’re just scratching the surface. It’s the opening innings, if you will.

Will we start getting these data visualizations down to mobile devices, or have people inputting more information about themselves, their devices, or the internet of things? Let's start with you, Jon. Where are we on the trajectory of where this can go?

Petrucelli: We're working on some projects right now with geolocation, geofencing, and geosensing, where, when a user on a mobile device comes within range of a certain store, it will serve that user up special offers to try to pull them into the store -- provided they have downloaded the app and opted in. It's the same as if you were walking by a store and somebody said, "Hey, Jon." The system knows who I am, knows my preferences, and, when I come within range, it knows my location.
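A minimal sketch of the proximity check behind such an offer, using the haversine great-circle distance. The coordinates, radius, and opt-in flag are made up for illustration:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

STORE = (35.4676, -97.5164)  # hypothetical store location
GEOFENCE_RADIUS_M = 200

def on_location_update(user_lat, user_lon, opted_in):
    # Only opted-in app users inside the geofence get the offer.
    if opted_in and haversine_m(user_lat, user_lon, *STORE) <= GEOFENCE_RADIUS_M:
        return "send personalized offer"
    return "do nothing"

print(on_location_update(35.4680, -97.5160, opted_in=True))
```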

This could be somebody who has an affinity card with a certain retailer, or a sports team in a venue, where the organization knows who's at the venue and what their preferences are, and it puts exactly the right offer in front of the right person, at the right time, in the right context, and with the right personalization.

We see some organizations moving to that level of integration. With all of the available technology, with the electronic wallets, now with Google Glass, and with smart watches, there is a lot of space to go. I don’t know if it's really relevant to this, but there is a lot of space now.

We're more on the business-app side of it, and I don't see that going away. Integration is really the key to drive high levels of adoption, which drives high levels of productivity, which can drive top-line gains and ultimately a better ROI for the company. That's how we really look at integration.


It's very, very important to be able to deliver that information, at least in a dashboard format or a summary format, on all the mobile devices.

Gardner: What is Scribe Software's vision, and what are the next big challenges that you will be taking your technology to?

Bilhorn: Ideally, what I would like to see, and what I'm hoping for, is that with mobile and the consumerization of IT, business apps begin to act more like consumer apps, with more standard APIs, forcing better plug and play. This would be great for business. What we're trying to do, in the absence of that, is create that plug-and-play environment to, as Jon said, make it so easy a child can do it.

Seamless integration

Our vision for the future is really flattening that out, but also being able to provide a seamless integration experience between these disparate systems, where at some point you wouldn't even have to buy middleware as an individual business or a consumer.

The cloud vendors and legacy vendors could embed integration and then truly offer plug and play, so that the individual user could do integration on their own. That's where we would really like to get to. That's the vision and where the platform is going for Scribe.

Thursday, December 5, 2013

Service virtualization solves bottlenecks amid complex billing process for German telco

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

The next edition of the HP Discover Podcast Series details how German telco EWE TEL has solved performance complexity across an extended enterprise billing process by using service virtualization.

In doing so, EWE has significantly improved application performance and quality for their end users, while also gaining predictive insights into the composite application services' behavior. The use case will be featured next week at the HP Discover conference in Barcelona.

To learn more about how EWE is leveraging service virtualization technologies and techniques for composite applications, we recently sat down with Bernd Schindelasch, Leader for Quality Management and Testing at EWE TEL, based in Oldenburg, Germany. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Tell us about EWE TEL, what it does, and what you do there.

Schindelasch
Schindelasch: EWE is a telecommunications company. We operate the network for EWE and we provide a large range of telecommunications services. So we invest a lot of money into infrastructure and we supply the region with high-speed Internet access. EWE TEL was founded in 1996, is a fully owned subsidiary of EWE, and has about 1,400 employees.

Gardner: Your software and IT systems are obviously so important. This is how you interact with your end-users. So these applications must be kept performing.

Schindelasch: Yes, indeed. Our IT systems are very important for us to fulfill our customers' needs. We have about 40 applications involved in serving a customer, from the customer self-service application, to the activation component, to the billing system. It's quite a complex infrastructure, and it's all based on our IT systems.

We have a special situation here. Because the telecommunications business is very specialized, we need very customized IT solutions. Often, the effort to customize standard software is so high that we decided to develop a lot of our applications on our own.

Developed in house

Nearly half of our applications are developed in house -- for example, the customer self-service portal I just mentioned, our customer care system, and our Activation Manager.

We had to find a way to test it. So we created a team to test all those systems we developed on our own. We recruited personnel from the operating departments and added IT staff, and we started to certify them all as testers. We created a whole new team with a common foundation, and that made it very easy for us to agree on roles, tasks, processes, and so on, concerning our tests. 

Gardner: Tell me about the problem that led you to discover service virtualization as a solution.

Schindelasch: When we created this new team, we faced the problem of testing the systems end to end. When you have 40 applications and have to test an end-to-end process over all of those applications, all the contributing applications have to be available and have to have a certain level of quality to be useful.

What we encountered was that the order interface of another service provider was often unavailable and responses from that system were faulty. So we hadn’t been able to test our processes end to end.

We once tried to do a load test and, because of the bottleneck at that other interface, we experienced its failure and weren't able to test our own systems. That's the reason we needed a solution to bypass this problem with the other interface. That was the initial impetus.

Gardner: Why weren’t traditional testing or scripting technologies able to help you in this regard?

Schindelasch: We tried it. We developed diverse simulations based on traditional mockup scripts. These are very useful for developers doing unit testing, but they weren't configurable enough for testers to create the right situations for positive and negative tests.

Additionally, it was a big effort to create these mockups, and sometimes the effort to create the mockup would have been bigger than the real development effort. That was the problem we had.

Complex and costly

Gardner: So any simulations you were approaching were going to be very complex and very costly. It didn't really seem to make sense. So what did you do then?

Schindelasch: We constantly analyzed the market and searched for products that might be able to help us with our problem. In 2012, we found such solutions and finally made a proof of concept (POC) with HP Service Virtualization.

We found that it supported all the protocols we needed, with a rule set to predict the responses. During the POC, we found that the benefits applied to both developers and testers. Even our architects found it to be a good solution. So in the end, we decided to purchase the software this year.

We implemented service virtualization in a pilot project, and we even virtualized the order interface we talked about. We had to integrate service virtualization as a proxy between our customer care system and the order system. The actual steps you have to take vary by the protocols used, but you have to put it in between them and let the system work as a proxy. Then you have the ability to let it learn.

It sits in the middle, between your systems, and records all messages and their responses. Afterward, you can just replay these message-response pairs, or you can improve the rules manually. For example, you can add data tables, so you can configure the system to work with the actual test data you're using for your test cases, to support positive and negative tests.
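A minimal sketch of that record-and-replay idea -- an in-memory stand-in for illustration, not HP's product. The request keys, data table, and latency knob are invented; the latency knob anticipates the performance model discussed later, where a virtual service can answer slower or faster:

```python
import time

class VirtualService:
    """Toy record/replay stub for an unavailable partner interface."""

    def __init__(self, latency_s=0.0):
        self.recorded = {}          # learned request -> response pairs
        self.data_table = {}        # manual rules, e.g. per-test-case data
        self.latency_s = latency_s  # performance model: answer slower or faster

    def learn(self, request, response):
        # "Learning" mode: the proxy records real traffic passing through.
        self.recorded[request] = response

    def respond(self, request):
        time.sleep(self.latency_s)
        # Manual rules win over recordings, so testers can force
        # positive and negative cases with their own test data.
        if request in self.data_table:
            return self.data_table[request]
        return self.recorded.get(request, {"status": "ERROR", "code": 404})

svc = VirtualService(latency_s=0.0)  # fast mode, e.g. for continuous integration
svc.learn("order/1001", {"status": "OK"})
svc.data_table["order/9999"] = {"status": "REJECTED"}  # negative test case
print(svc.respond("order/1001"), svc.respond("order/9999"))
```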

Gardner: For those folks that aren’t familiar with HP Service Virtualization for composite applications, how has this developed in terms of its speed and its cost? What are some of the attributes of it that appeal to you?

Schindelasch: Our main objective was to find a way to do our end-to-end testing to optimize it, but we were able to gain more benefits by using service virtualization. We’ve reduced the effort to create simulations by 80 percent, which is a huge amount, and have been able to virtualize services that were still under development.

So we have been able to uncouple the tests of the self service application from a new technical feasibility check. Therefore, we’ve been able to test earlier in our processes. That reduced our efforts and cost in development and testing and it’s the basis for further test automation at low testing cost.

In the end, we’ve improved quality. It’s even better for our customers, because we’re able to deliver fast and have a better time to market for new products. 

Future attributes

Gardner: What would you like to see next?

Schindelasch: One important thing is that development is shifting to agile more and more. Therefore, the people using the software have changed. So we have to have better integration with development tools.

From a virtualization perspective, there will be new protocols, more complex rules to address every situation you can think of without complicated scripting or anything like that. I think that’s what’s coming in the future.

Gardner: And, Bernd, has the use of HP Service Virtualization allowed you to proceed toward more agile development and to start to benefit from DevOps -- a tighter association and integration between development, deployment, and operations?

Schindelasch: We already brought it together with our development teams. I think it's very crucial for development and testing to cooperate, because there wouldn't be a real benefit in virtualizing a service after development has already mocked it up in an old-fashioned way.

We brought them together. We had the training for a lot of developers. They started to see the benefits and started to use service virtualization the way the testers already did.

We’re working together more closely and earlier in the process. What’s coming in the future is that the developers will start to use service virtualization for their continuous integration, because service virtualization has the potential to change the performance model, so you can let your application answer slower or faster.

If you put it into fast mode, you can use it in continuous integration. That's a really big benefit for the developers, because their continuous integration will be faster, and therefore they will be able to deploy faster. So for our development, it's a real benefit.

Lessons learned

Gardner: Could you offer some insights to those who are considering the use of service virtualization with composite applications now that you have been doing it? Are there any lessons learned? Are there any suggestions that you would make for others as they begin to explore new service virtualization?

Schindelasch: One thing I've already mentioned is that it's important for development and testing to work together. To gain maximum benefit from HP Service Virtualization, you have to design your future solutions: which services do you want to virtualize, which protocols will you use, and where are the best places to intercept? Do I want to replace real systems or virtualize the whole environment? In which way do I want to use the performance model, and so on?

It’s very important to really understand what your needs are before you start using the tools and just virtualize everything. It’s easy to virtualize, but there is no real benefit if you virtualize a lot of things you didn’t really want. As always, it’s important to think first, design your future solutions, and then start to do it.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Wednesday, December 4, 2013

Identity and access management as a service gets boost with SailPoint's IdentityNow cloud

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SailPoint Technologies.

Business trends like bring your own device (BYOD) are forcing organizations to safely allow access to all kinds of applications and resources anytime, anywhere, and from any device.

According to research firm MarketsandMarkets, the demand for improved identity and access management (IAM) technology is estimated to grow from more than $5 billion this year to over $10 billion in 2018.

The explosive growth -- a doubling of the market in five years -- will also fuel the move to more pervasive use of identity and access management as a service (IDaaS). The cloud variety of IAM will be driven by the need for pervasive access and management over other cloud, mobile, and BYOD activities, as well as by the consumerization of IT and broader security concerns.

To explore the why and how of IDaaS, BriefingsDirect recently sat down with Paul Trulove, Vice President of Product Marketing at SailPoint Technologies in Austin, Texas, to explore the changing needs for -- and heightened value around -- improved IAM.

We also discover how new IDaaS offerings are helping companies far better protect and secure their information assets. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: SailPoint is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: The word "control" comes up so often when I talk to people about security and IT management issues, and companies seem to feel that they are losing control, especially with such trends as BYOD. How do companies regain that control, or do we need to think about this differently?

Trulove: The reality in today's market is that a certain level of control will always be required. But as we look at the rapid adoption of new corporate enterprise resources -- things like cloud-based applications, or mobile devices where you can access corporate information anywhere in the world, at any time, on any device -- we have to put a base level of controls in place that allows organizations to protect the most sensitive assets, while also providing ready access to the data, so that organizations can move at the pace the business is demanding today.

Gardner: The expectations of users have changed; they're used to having more freedom. How do we balance that -- letting them get the best of their opportunity and productivity benefits, while keeping the enterprise's risk as low as possible?

Trulove: Each organization has to find the right balance for its particular business -- one that meets internal demands, external regulatory requirements, and the expectations of its customer base. While the productivity aspect can't be ignored, taking a blind approach -- allowing an individual end user to migrate structured data out of something like SAP or another enterprise resource planning (ERP) system up to a personal Box.com account -- is something most organizations are just not going to allow.

Each organization has to step back, redefine the different types of policies they're trying to put in place, and then install the right kinds of controls to mitigate the risk of inappropriate access to critical enterprise resources and data, while also allowing the end user a little more control and a little more freedom to do the things that make them most productive.

Uptake in SaaS

Gardner: We've seen a significant uptake in SaaS, certainly at the level of the number of apps, communications, and email, but it seems as if some of the infrastructure services around IAM are lagging. Is there a maturity issue here, or is it just the natural way that markets evolve? Why have the applications moved fast, while we're only now embarking on IDaaS?

Trulove: We're seeing a common trend in IT if you look back over time, where a lot of the front-end business applications were the first to move to a new paradigm. Things like ERP and service resource management (SRM)-type applications have all migrated fairly quickly.

Over the last decade, we've seen a lot of the sales management applications, like Salesforce and NetSuite, come on in full force. Now, things like Workday and even some of the workforce management applications are becoming very popular. However, the infrastructure has generally lagged for a variety of reasons.

In the IAM space, this is a critical aspect of enterprise security and risk management as it relates to guarding the critical assets of the organization. Security practitioners are going to look at new technology very thoroughly before they begin to move things like IAM out to a new delivery paradigm such as SaaS.

The other thing is that organizations right now are still fundamentally protecting internal applications. So there's less of a need to move your infrastructure out into the cloud until you begin to change the overall delivery paradigm for your internal application.

What we're seeing in the market, and definitely from a customer perspective, is that as customers implement more and more of their software out in the cloud, that's a good time for them to begin to explore IDaaS.

Look at some of the statistics being thrown around. In some cases, we've seen that 80 percent of new software purchases are being pushed to a SaaS model. Those kinds of companies are much more likely to embrace moving infrastructure to support that large cloud investment with fewer applications to be managed back in the data center.

Gardner: The notion of mobile-first applications now has picked up in just the last two or three years. I have to imagine that's another accelerant to looking at IAM differently when you get to the devices. How does the mobile side of things impact this?

Trulove: Mobile plays a huge part in organizations looking at IDaaS, and the reason is that you're moving the device that interacts with the identity management service outside the bounds of the firewall and the network. Having a point of presence in the cloud gives you a very easy way to deliver all of the content out to devices operating outside the traditional bounds of the IT organization, which generally meant the PCs, laptops, and so on that are on the network itself.

Moving to IDaaS

Gardner: I'd like to get into what hurdles organizations need to overcome to move in to IDaaS, but let's define this a little better for folks that might not be that familiar with it. How does SailPoint define IDaaS? What are we really talking about?

Trulove: SailPoint looks at IDaaS as a set of capabilities across compliance and governance, access request and provisioning, password management, single sign-on (SSO), and Web access management that allow for an organization to do fundamentally the same types of business processes and activities that they do with an internal IAM systems, but delivered from the cloud.

We also believe that it's critical, when you talk about IDaaS, to talk not only about the cloud applications being managed by that service but, just as importantly, about the internal applications behind the firewall that still have to be part of that IAM program.

Gardner: So, this is not just green field. You have to work with what's already in place, and it has to work pretty much right the first time.

Trulove: Yes, it does. We really caution organizations against looking at cloud applications in a siloed manner from all the things that they're traditionally managing in the data center. Bringing up a secondary IAM system to only focus on your cloud apps, while leaving everything that is legacy in place, is a very dangerous situation. You lose visibility, transparency, and that global perspective that most organizations have struggled to get with the current IAM approaches across all of those areas that I talked about.

Gardner: So, we recognize that these large trends are forcing a change, users want their freedom, more mobile devices, more different services from different places, and security being as important if not more than ever. What is holding organizations back from moving towards IDaaS, given that it can help accommodate this very complex set of requirements?

Trulove: It can. The number one area, and it's really made up of several different things, is the data security, data privacy, and data export concerns. Obviously, the level at which each of those interplay with one another, in terms of creating concern within a particular organization, has a lot to do with where the company is physically located. So, we see a little bit less of the data export concerns with companies here in the US, but it's a much bigger concern for companies in Europe and Asia in particular.

Data security and privacy are the two that are very common and are probably at the top of every IT security professional’s list of reasons why they're not looking at IDaaS.

Gardner: It would seem that just three or four years ago, when we were talking about the advent of cloud services, quite a few people thought that cloud was less secure. But I’ve certainly been mindful of increased and improved security as a result of cloud, particularly when the cloud organization is much more comprehensive in how they view security.

They're able to implement patches with regularity. In fact, many of them have just better processes than individual enterprises ever could. So, is that the case here as well? Are we dealing with perceptions? Is there a case to be made for IDaaS being, in fact, a much better solution overall?

IAM as secure

Trulove: Much like organizations have come to recognize the other categories of SaaS as being secure, the same thing is happening within the context of IAM. Even a lot of the cloud storage services, like Box.com, are now signing up large organizations that have significant data security and privacy concerns. But, they're able to do that in a way and provide the service in a way where that assurance is in place that they have control over the environment.

And so, I think the same thing will happen with identity. It's one of the areas where SailPoint is very focused on delivering capabilities and assurances to customers looking at IDaaS, so that they feel comfortable with the kinds of information they put in it and the types of IAM components they operate, and get over that fear of the unknown.

One of the biggest benefits of moving from a traditional IAM approach to something that is delivered as IDaaS is the rapid time to value. It's also one of the biggest changes that the organization has to be prepared to make, much like they would have as they move from a Siebel- to a Salesforce-type model back in the day.
IAM delivered as a service needs to be much more about configuration, versus that customized solution where you attempt to map the product and technology directly back to existing business processes.

One of the biggest changes from a business perspective is that the business has to be ready to make investments in business process management, and the changes that go along with that, so that they can accommodate the reality of something that's being delivered as a service, versus completely tailoring a solution to every aspect of their business.

The benefit that they get out of that is a much lower total cost of ownership (TCO), especially around the deployment aspects of IDaaS.

Gardner: It's interesting that you mentioned business process and business process management. It seems to me that by elevating to the cloud for a number of services and then having the access and management controls follow that path, you’re able to get a great deal of flexibility and agility in how you define who it is you’re working with, for how long, for when.

It seems to me that you can use policies and create rules that can be extended far beyond your organization’s boundaries, defining workgroups, defining access to assets, creating and spinning up virtualized companies, and then shutting them down when you need. So, is there a new level of consideration about a boundaryless organization here as well?

Trulove: There is. One of the things that is going to be very interesting is the opportunity to essentially bring up multiple IDaaS environments for different constituents. As an organization, I may have two or three fundamentally distinct user bases for my IAM services.

Separate systems

I may have an internal population made up of employees and contractors that essentially work for the organization and need access to a certain set of systems. So I may bring up a particular environment to manage those employees, with specific policies, workflows, and controls. Then, I may bring up a separate system that allows business partners or individual customers to have access to very different environments, within the context of either cloud or on-prem IT resources.

The advantage is that I can deploy these services uniquely across those. I can vary the services that are deployed. Maybe I provide only SSO and basic provisioning services for my external user populations. But for those internal employees, I not only do that, but I add access certifications, and segregation of duties (SOD) policy management. I need to have much better controls over my internal accounts, because they really do guard the keys to the kingdom in terms of data and application access.

Gardner: We began this conversation talking about balance. It certainly seems to me that that level of ability, agility, and defining new types of business benefits far outweighs some of the issues around risk and security that organizations are bound to have to solve one way or the other. So, it strikes me as a very compelling and interesting set of benefits to pursue.

You've delivered the SailPoint IdentityNow suite. You have a series of capabilities, and there are more to come. As you were defining and building out this set of services, what were some of the major requirements that you had, that you needed to check off before you brought this to market?

Trulove: The number one capability that we really talk to a lot of customers about is an integrated set of IAM services that spans everything from compliance and governance, to access request, provisioning, and password management, all the way to access management and SSO.

One of the things that we found as a critical driver for the success of these types of initiatives within organizations is that they don't become siloed, and that as you implement a single service, you get to take advantage of a lot of the work that you've done as you bring on the second, third, or fourth services.

The other big thing is that it needs to be ready immediately. Unlike a traditional IAM solution, where you might have deployment environments to buy and implement software to purchase and deploy and configure, customers really expect IDaaS to be ready for them to start implementing the day that they buy.

It's a quick time-to-value, where the organization deploying it can start immediately. They can get value out of it, not necessarily on day one, but within weeks, as opposed to months. Those things were very critical in deploying the service.

The third thing is that it's ready for enterprise-level requirements. It needs to meet the use cases that a large enterprise would have across those different capabilities, but also, just as important, it has to meet the data security, privacy, and export concerns that a large enterprise would have about moving infrastructure out to the cloud.

Even as a cloud service, it needs a very secure way to get back into the enterprise and still manage the on-prem resources that aren't going away anytime soon. On one hand, we talk to customers about managing things like Google Apps, Salesforce, and Workday. In the same breath, they also talk about still needing to manage the mainframe and the on-premises enterprise ERP system they have in place.

So, being able to span both of those environments to provide that secure connectivity from the cloud back into the enterprise apps was really a key design consideration for us as we brought this product to market.

Hybrid model

Gardner: It sounds as if it's a hybrid model from the get-go. We hear about public cloud, private cloud, and then hybrid. It sounds as if hybrid is really a starting point and an end point for you right away.

Trulove: It's hybrid only in that it's designed to manage both cloud and on-prem applications. The service itself all runs in the cloud. All of the functionality, the data repositories, all of those things are 100 percent deployed as a service within the cloud. The hybrid nature of it is more around the application that it's designed to manage.

Gardner: You support a hybrid environment, but given what you've just said, all the stock-in-trade benefits of an as-a-service offering are there: no hardware or software, a move from a CAPEX to an OPEX model, and probably far lower cost over time, all built in.

Trulove: Exactly. The deployment model is very much that classic SaaS, a multitenant application where we basically run a single version of the service across all of the different customers that are utilizing it.

Obviously, we've put a lot of time, energy, and focus on data protection, so that everybody's data is protected uniquely for their organization. But we get the benefits of that SaaS deployment model, where we can push a single version of the application out for everybody to use when we add a new service or add new capabilities to existing services. We take care of the upgrade processes and give the customers subscribing to the services the option of when and how they want to turn new things on.

The IdentityNow suite is made up of multiple individual services that can be deployed distinctly from one another, but all leverage a common back-end governance foundation and common data repository.

The first service is SSO, and it empowers users to sign on to cloud, mobile, and web applications from a single application platform. It provides central visibility for end users into all the different application environments they may be interacting with on a daily basis -- for example, from a launch-pad type of environment, where I can go to a single dashboard and sign on to any application that I'm authorized to use.

Or I may be using back-end Integrated Windows Authentication, where as soon as I sign into my desktop at work in the morning, I'm automatically signed into all my applications as I use them during the day, and I don't have to do anything else.

The second service is around password management. This is enabling that end-user self-service capability. When end users need to change their password or, more commonly, reset them because they’ve forgotten them over a long weekend, they don’t have to call the help desk.

Strong authentication

They can authenticate through challenge questions or other mechanisms and then gain access to reset that password. They can even use strong authentication mechanisms, like a one-time password token that is issued to let the user get in and then change the password to something they will use on an ongoing basis.
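
As a rough sketch of that reset flow, consider the Python below. The challenge data, OTP handling, and helper names are all assumptions for illustration; a production service would hash stored answers, rate-limit attempts, deliver the OTP out of band, and audit every step.

```python
# Sketch of self-service password reset: challenge questions first,
# then a one-time password (OTP) before a new password is accepted.
import secrets

CHALLENGES = {"alice": {"First pet?": "rex", "Birth city?": "austin"}}
_pending_otps = {}

def verify_challenges(user: str, answers: dict) -> bool:
    expected = CHALLENGES.get(user, {})
    return bool(expected) and all(
        answers.get(q, "").strip().lower() == a for q, a in expected.items())

def issue_otp(user: str) -> str:
    otp = secrets.token_hex(3)       # delivered via SMS or email in practice
    _pending_otps[user] = otp
    return otp

def reset_password(user: str, otp: str, new_password: str) -> bool:
    if _pending_otps.pop(user, None) != otp:
        return False
    print(f"Password for {user} updated.")  # stand-in for a directory write
    return True

answers = {"First pet?": "Rex", "Birth city?": "Austin"}
otp = issue_otp("alice") if verify_challenges("alice", answers) else None
assert otp is not None and reset_password("alice", otp, "n3w-p4ss!")
```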

The third service is around access certifications, and this automates that process of allowing organizations to put in place controls through which managers or other users within the organization are reviewing who has access to what on a regular basis. It's a very business-driven process today, where an application owner or business manager is going to go in, look at the series of accounts and entitlements that a user has, and fundamentally make a decision whether that access is correct at a point in time.

One of the key things that we're providing as part of the access certification service is the ability to automatically revoke those application accounts that are no longer required. So there's a direct tie into the provisioning capabilities: being able to say, "Paul doesn’t need access to this particular Active Directory group or this particular capability within the ERP system. I'm going to revoke it." Then, the system will automatically connect to that application and terminate or disable that account, so the user no longer has access.
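
That tie-in is essentially an event handler: a "revoke" decision on a certification item triggers the connector for the affected system. The sketch below is hypothetical; the connector classes and method names are invented, not the product's actual provisioning engine.

```python
# Sketch of certification-driven revocation: a "revoke" decision on an
# entitlement calls the connector for that application.
class LdapConnector:
    def remove_from_group(self, user, group):
        print(f"AD: removed {user} from {group}")

class ErpConnector:
    def disable_entitlement(self, user, entitlement):
        print(f"ERP: disabled {entitlement} for {user}")

CONNECTORS = {"active_directory": LdapConnector(), "erp": ErpConnector()}

def apply_decision(user: str, system: str, entitlement: str, decision: str):
    """Called when a reviewer finishes a certification item."""
    if decision != "revoke":
        return  # an "approve" decision leaves access in place
    connector = CONNECTORS[system]
    if system == "active_directory":
        connector.remove_from_group(user, entitlement)
    else:
        connector.disable_entitlement(user, entitlement)

apply_decision("paul", "active_directory", "Finance-Admins", "revoke")
```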

The final two services are around access request and provisioning, and advanced policy and analytics. On the access request and provisioning side, this is all about streamlining how users get access. It can be the automated birthright provisioning of user accounts when a new employee or contractor joins the organization, reconciling what a user should or should not have when they move to a new role, or terminating access on the back end when a user leaves the organization.
What most customers see, as they begin to deploy IDaaS, is the ability to get value very quickly.

All of those capabilities are provided in an automated provisioning model. Then we have that self-service access request, where a user can come in on an ad-hoc basis and say, "I'm starting a new project on Monday and I need some access to support that. I'm going to go in, search for that access. I'm going to request it." Then, it can go through a flexible approval model before it actually gets provisioned out into the infrastructure.
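
The request-then-approve pattern is simple to sketch. Everything below is illustrative: real systems support multi-step approval chains, delegation, and escalation, and the names here are made up.

```python
# Sketch of self-service access request: request -> approval -> provision.
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    DENIED = "denied"

def provision(user: str, item: str):
    print(f"Provisioned {item} for {user}")  # stand-in for a connector call

class AccessRequest:
    def __init__(self, requester: str, item: str, justification: str):
        self.requester, self.item = requester, item
        self.justification, self.status = justification, Status.PENDING

    def approve(self, approver: str):
        self.status = Status.APPROVED
        provision(self.requester, self.item)  # only runs after approval

req = AccessRequest("alice", "project-x-share",
                    "Starting a new project on Monday")
req.approve("alice.manager")
```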

The final service around advanced policy and analytics is a set of deeper capabilities around identifying where risks lie within the organization, where people might have inappropriate access around a segregation of duty violation.

It's putting an extra level of control in place, both of a detective nature, identifying conflicting accounts and entitlements that people already have in the actual environment, and, more importantly, of a preventive nature, so that you can attach a policy check to an access request or provisioning event and determine whether a violation would exist before the provisioning action is actually taken.
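
A preventive segregation-of-duties check can be pictured as a rule evaluated just before the provisioning action runs, as in this sketch. The rules and access data are invented for illustration.

```python
# Sketch of a preventive segregation-of-duties (SoD) check: test whether
# a proposed entitlement would conflict with access the user already holds.
SOD_RULES = [
    {"create_vendor", "approve_payment"},   # one person must not hold both
    {"submit_expense", "approve_expense"},
]

CURRENT_ACCESS = {"paul": {"create_vendor"}}

def violates_sod(user: str, new_entitlement: str) -> bool:
    proposed = CURRENT_ACCESS.get(user, set()) | {new_entitlement}
    return any(rule <= proposed for rule in SOD_RULES)  # subset test

def provision(user: str, entitlement: str):
    if violates_sod(user, entitlement):
        raise PermissionError(f"SoD violation: {user} + {entitlement}")
    CURRENT_ACCESS.setdefault(user, set()).add(entitlement)

provision("paul", "submit_expense")      # allowed
# provision("paul", "approve_payment")   # would raise: conflicts with rule 1
```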

Gardner: What are your customers finding they're gaining as a result of moving to IDaaS, as well as from the specific services within the suite? What do you get when you do this right?

Trulove: What most customers see, as they begin to deploy IDaaS, is the ability to get value very quickly. Most of our customers are starting with a single service and using that as a launching pad into a broader deployment over time.

So you could take SSO as a distinct project. We have customers that are implementing that SSO capability to get rapid time to value that is very distinct and very visible to the business and the end users within their organization.

Password management

Once they have that deployed and up and running, they're leveraging that to go back in and add something like password management or access certification or any combination thereof.

We’re not stipulating how a customer starts. We're giving them a lot of flexibility to start with very small distinct projects, get the system up and running quickly, show demonstrable value to the business, and then continue to build out over time both the breadth of capabilities that they are using but also the depth of functionality within each capability.

Mobile is driving a significant increase in why customers are looking at IDaaS. The main reason is that mobile devices operate outside of the corporate network in most cases. If you're on a smartphone and you are on a 3G, 4G, LTE type network, you have to have a very secure way to get back into those enterprise resources to perform particular operations or access certain kinds of data.
One of the benefits that an IDaaS service gives you is a point of presence in the cloud that gives mobile devices something very accessible from wherever they are. Then, there is a direct and very secure connection back into those on-prem enterprise resources, as well as out to the other cloud applications that you're managing.
The other big thing we're seeing in addition to mobile devices is just the adoption of cloud applications.

The reality in a lot of cases is that, as organizations adopt those BYOD-type policies and the number of mobile devices trying to access corporate data increases significantly, providing an IAM infrastructure delivered from the cloud is a very convenient way to bring a lot of those mobile devices under control across your compliance, governance, provisioning, and access-request activities.

The other big thing we're seeing in addition to mobile devices is just the adoption of cloud applications. As organizations go out and acquire multiple cloud applications, having a point of presence to manage those in the cloud makes a big difference.

In fact, we've seen several deployment projects of something like Workday actually gated by needing to put in the identity infrastructure before the business was going to allow their end users to begin to use that service. So the combination of both mobile and cloud adoption are driving a renewed focus on IDaaS.

If you look at the road map that we have for the IdentityNow product, the first three services are available today, and that’s SSO, password management, and access certification. Those are the key services that we're seeing businesses drive into the cloud as early adopters. Behind that, we'll be deploying the access request and provisioning service and the advanced policy and analytic services in the first half of 2014.
Continued maturation

Beyond that, what we're really looking at is continued maturation of the individual services to address a lot of the emerging requirements that we're seeing from customers, not only across the cloud and mobile application environments but also, as importantly, as they begin to deploy the cloud services and link back to their on-prem identity and access management infrastructure, as well as the applications they continue to run and manage from the data center.

Gardner: So, more inclusive, and therefore more powerful, in terms of the agility, when you can consider all the different aspects of what falls under the umbrella of IAM.

Trulove: We're also looking at new and innovative ways to reduce deployment timeframes by building a lot of capabilities that are defined out of the box. These are things like business processes, where there will be a catalog of the best practices that we see a majority of customers implement. That becomes a drop-down for an admin to pick from as they configure the application.

We'll be investing very heavily in areas like that, where we can take the learning as we deploy and build that back in as a set of best practices as a default to reduce the time required to set up the application and get it deployed in a particular environment.

Tuesday, December 3, 2013

BI and big data analytics force an overdue reckoning between IT and business interests

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Dell Software.

The relationship between enterprise IT and lines of business leadership has not always been rosy. Sometimes IT holds the upper hand, and sometimes the business does an end-run around IT to use new tools or processes. They might even call it innovation.

Today, with the push toward big data and business intelligence (BI), a new chasm is growing between enterprise IT groups and business units. But, in this case, it could be disastrous because IT should be a big part of the big data execution.

The next BriefingsDirect discussion therefore examines how the ebb and flow between IT centralization and decentralization, now swinging in the direction of business groups and even shadow IT, runs the risk of neglecting essential management, security, and scalability requirements.

Indeed, big data and analytics should actually force more collaboration and lifecycle-based relationships among and between business and IT groups. For those organizations -- where innovation is being divorced from IT discipline -- we'll explore ways that a comprehensive and virtuous adoption of rigorous and protected data insights can both make the business stronger and make IT more valued.

To get to the bottom of why, BriefingsDirect recently sat down with John Whittaker, Senior Director of Marketing for Dell Software's Information Management Solutions Group. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Dell Software is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: John, we seem to go back and forth between resources in organizations being tightly controlled and governed by IT, and then resources and control resting largely with the line of business or even, as I mentioned, with a shadow IT group of some sort. So over the past 20 or more years, why has this problem been so difficult to overcome? Why is it persistent? Why do we keep going back and forth?

Whittaker: That’s an interesting question, and I agree. I've been in IT for longer than 20 years and certainly in your study of history you can see that this ebb and flow of centralized management to gain some constraints or some controls in governance and security has been one of the primary motivators of IT. It’s one of the big benefits they provide, but in the backdrop, you have lines of business that want to innovate and want to go in new directions.

Whittaker
We’re entering one of those times right now with big data and the advent of analytics, and it’s driving lines of business to push into these new technologies, maybe in ways that IT isn’t ready for just yet.

This, as you mentioned, has been going on for some time. The last iteration where this occurred was back in the ’90s when e-commerce and the Web captured the imagination of business. We saw a lot of similarities to what's occurring today.

Big-data push

It ultimately caused some problems back in the ’90s around e-commerce and leveraging this great new innovation of the Internet, but doing it in a way that was more decentralized. It was a little bit more of the Wild West-based approach and ultimately led to some pretty significant issues that I think we are going to see out of the big data and analytics push that’s occurring right now.

Gardner: I suppose, to be fair to each constituency here, it’s the job of IT to be cautious and to try to dot all the i’s and cross the t’s. There were a lot of people in 1996-97 who didn’t necessarily think the Internet was going to be that big of a thing; it seemed to have lots of risk associated with it. So, I suppose, due diligence needed to be brought to bear.

On the other hand, if businesses hadn’t recognized that this could be a huge opportunity and that those risks needed to be taken -- creating a website, entering into a direct dialogue with customers through a new channel -- they would have missed a big opportunity. So these are natural roles, but they can’t be too brittle.

Whittaker: You’re absolutely right. At their core, both groups had, and have, good motivations. IT lives in a world of constraints, of governance and security, and of needing to deliver something that’s going to be stable, that’s going to scale, that’s going to be secure, and that’s not going to break governance.
Nobody in either group is trying to harm the business or anything close to it.

Those are laudable goals to have in mind. From the line-of-business perspective, the business wants to innovate and doesn’t want to be outmoded by its competitors. They rightfully see that all these great innovations are coming, and analysts, pundits, and experts are talking about how this is going to make a huge difference for businesses.

So they inevitably want to embrace those, and you have this cognitive dissonance occurring between the IT goals around constraints and the desire to keep things running in a clean and efficient manner. IT is seeing this new technology and saying, “Hold on. We don’t necessarily want to jump into this. This is going to break our model.”

Ultimately, IT gets to a point where maybe they suggest we shouldn’t do it or we should push it off for some time. That’s where the chasm between the two gets started. From the business perspective, the answer “no” is unacceptable, if they feel that’s what they need to do to achieve success in business. They own the profit and loss responsibilities. That’s where these problems come from.

Nobody in either group is trying to harm the business or anything close to it. They just have different motivations and perspectives on how to approach something, and when one gets wildly far apart from the other, that’s where these problems tend to occur. Again, when these big innovation cycles happen, you’re more likely to see a lot of these problems start to occur.

I definitely remember back in 1996-1997. We didn’t call it shadow IT at the time, but you saw IT-like personnel being hired into functional business areas to institute these new technologies, and that ultimately led to a pretty serious hangover at the end of that innovation cycle.

Gardner: What’s the risk of ignoring IT, doing an end-run around them, or downplaying the role? What form does it take?

On their own

Whittaker: Ignoring IT can have some pretty serious consequences. It all starts with the fact that, by and large, businesses can embrace these new technologies without the aid of IT. Cloud-based implementations have made it possible for lines of business to rapidly deploy some of these new big-data technologies, and you have vendors in some cases telling them they don’t need IT’s help. So it’s not all that difficult for lines of business to go out on their own and implement a big-data technology.

But they typically don’t have the discipline to apply across-the-board governance to their deployments, and that leads to potential issues with regulatory requirements. It also leads to security issues, and ultimately it can lead to seriously bad data-management problems.

You have data sunk in silos, and maybe the CEO wants to know how much business we’re doing with x, y, and z. No one can deliver that, because x, y, and z go by one name in one system, a different name in another system, and yet another name in a third. Trying to pull that data together becomes really difficult. When you have lines of business independently operating disparate solutions, those core governance issues tend to break down.
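
That naming problem is concrete enough to sketch. If three systems key the same customer three different ways, someone has to build and govern a cross-reference before any roll-up is possible; the records and mapping below are invented to show why.

```python
# Sketch of the cross-system naming problem: one customer, three keys.
SALES = [("Acme Corp", 120_000)]
SUPPORT = [("ACME-001", 8_000)]
BILLING = [("Acme Corporation", 95_000)]

# The cross-reference somebody has to build and maintain:
CANONICAL = {"Acme Corp": "acme", "ACME-001": "acme",
             "Acme Corporation": "acme"}

def rollup(*sources):
    totals = {}
    for source in sources:
        for key, amount in source:
            canonical = CANONICAL.get(key, key)  # unmapped keys fragment
            totals[canonical] = totals.get(canonical, 0) + amount
    return totals

print(rollup(SALES, SUPPORT, BILLING))  # {'acme': 223000}
```

Without the CANONICAL mapping, the same roll-up yields three separate fragments and the CEO’s question goes unanswered; that, in miniature, is the governance gap.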

Additionally, although they are great at spotting innovation opportunities, line of business people are not necessarily in the business of building scalable, secure, stable environments. That’s not the core of, say, marketing. They need to understand how the technology can be leveraged, but maintaining and managing it is not core to their charter. It tends to be ignored.
There are a lot of lessons that can be learned from the concept of working closely together, iterating rapidly, and being open to innovation and the idea that changes occur.

Gardner: John, it strikes me that there are some examples within IT that help us understand this potential problem, and even suggest some remediation, and that’s in software development. We’ve seen groups working without much coordination or shared process insight run aground on complexity.

For many years, we saw a very high failure rate among software development projects, but more recently we’ve seen improvements -- agile, scrum, opening up the process, small iterative steps that build in opportunities to take stock, to know what everyone is doing, checking in and checking out with some centralization -- but without stifling innovation. Is there really a lesson here, in what’s happened within software development, that could be brought to the whole organization?

Whittaker: Absolutely. In fact, within Dell Software itself we embrace agile and use scrum internally. There are a lot of lessons that can be learned from the concept of working closely together, iterating rapidly, and being open to innovation and the idea that changes occur.

Particularly in these major innovation cycles, it’s important to go with the flow and implement some of these new technologies and new capabilities early, so you can have that brain trust built internally among the broad team. You don’t want IT to hold the reins entirely, and at the same time, you don’t want line of business to do it.

We really need to break that model, that back and forth, centralization-decentralization swing that keeps occurring. We need to get to a point where we really are partnering and have good collaboration, where innovation can be embraced and adopted, and the business can meet its goals. But it has to be done in a way that IT can implement sound governance and implement solutions that can scale, are stable, are reliable, and are going to lead to long-term success.

Back-and-forth

Gardner: What’s different this time, John? Are the stakes higher because we’re talking about data analysis? That’s basically intelligence about what’s going on within your markets, your organization, your processes, your supply chain, your ecosystem, all of which could have a huge bearing.

We have the ability now to tackle massive amounts of data very rapidly, but if we don’t bring this together holistically, it seems as if there is a larger risk. I’m thinking about a competitive risk. Others that do this well could enter your market and really disrupt.

Whittaker: You’re absolutely right. There’s great potential benefit that organizations can get out of leveraging big data and analytics: being able to determine predictively what is going to occur in their business, which routes to market are most efficient, and where improvements can be made.

The businesses that leverage this are going to outmode, outperform, and ultimately win in the markets currently dominated by organizations who aren’t paying attention and who aren’t implementing solutions today. They’re getting a little bit ahead of this cycle so that they are ready and are able to be successful down the road.
We’re really moving into an era where the context of what’s happening is critically important.

We’re really moving into an era where the context of what’s happening is critically important. A data-driven management model is going to be embraced and it’s ultimately going to lead to more successful organizations. Companies and organizations that embrace this today are going to be the winners tomorrow.

If you’re ignoring this or putting this off, you’re really taking a tremendous risk, because this next iteration of innovation that’s occurring around analytics applies to large data sources. It’s being able to build the correlations and determine that this is a more efficient approach, or conversely, that we have a problem with this outlier that’s going to give us issues down the road.

If you’re not doing that as an organization, you really are running a pretty tremendous risk that somebody else is going to walk in and be able to make smarter decisions, faster.

Gardner: At the same time, your customers are gaining insights into how to procure all the more effectively. And so any rewards that might be out there, if you’re in a sales role of any kind, would become much more apparent.

Whittaker: That’s definitely true as well. The construct and the conversation have really shifted. With the advent of social media and the pace at which information is shared and opinions are formed, it’s no longer the company that is the primary voice about its products and capabilities or its positions and points of view.

Customers more empowered

It needs to have those. It needs to get them out. It needs to push them. But in this new world we live in, the customers are so much more empowered than they have ever been before, and it should be a good thing. For companies that are delivering great products and solving real problems for their customers, this should be great news.

If you’re not listening to what your customers are saying in social media and if you’re not paying attention to the ongoing story line and conversation of your firm in the social sphere, you’re really putting yourself at risk. You’re missing out on a tremendous opportunity to engage with your customers in a new, interesting, and very useful way.

That’s a lot of what we built. We have a lot of capabilities here at Dell Software around data management, data integration, and data analysis. On the analysis side, we spend a great deal of time with products like Kitenga and our social networking analytics platforms to do that semantic analysis and look into that form of big data.

But big data is more than just social. It’s also sensor data. The Internet of Things is another area where businesses should be innovating and organizations should be pushing to take advantage. That’s where line of business should be saying, “We need to get out into this area, or if we don’t, we’re going to be outmoded by our competitors.” And IT should be encouraging it. They should be pushing for more innovation, bringing new ideas, and being a real partner and collaborator at the table within the business and organization. That’s the right way to do this.
IT could use big data analytics to improve its own environment and to answer this crisis of confidence that exists.

And IT itself should be applying some of these technologies. In fairness to line of business, there exists a bit of a crisis of confidence in IT, and there’s really no better way to push back against that than to run analytics on the solutions you’re providing. How well is IT performing? Are you benchmarking against past performance? How do you benchmark against your industry?
That’s another component. Big-data analytics can be utilized by IT not just to deliver capabilities to the organization or push out and help with connecting to the customer. IT could use big data analytics to improve its own environment and to answer this crisis of confidence that exists.

You could turn these tools internally and look at rates of response as compared to your industry, how your network is performing, how your database is performing, or how the code you write is performing. Are your developers efficient in building clean code?
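
Turning the tools inward can start as modestly as comparing a service’s response times to an external target. A toy sketch, with made-up numbers and an assumed industry benchmark:

```python
# Toy benchmark: compare this quarter's p95 response time to an
# assumed industry figure. All numbers are invented.
import statistics

response_ms = [110, 95, 240, 130, 98, 310, 105, 120]
INDUSTRY_P95_MS = 250  # assumption standing in for a published benchmark

p95 = statistics.quantiles(response_ms, n=20)[18]  # 95th percentile
print(f"our p95 = {p95:.0f} ms vs. industry {INDUSTRY_P95_MS} ms")
print("within benchmark" if p95 <= INDUSTRY_P95_MS else "needs attention")
```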

Everybody has been watching the major shift in the healthcare environment in North America. A big component of that probably should have been more benchmark analysis, analytics on code quality, and things of that nature. That’s a great current and topical example of how IT should be utilizing some of these technologies, not just externally, not just bringing it to line of business, but within its own environment, to prove that it’s building systems that are going to be scalable, secure, and stable.

Gardner: What needs to take place in order for this higher level of coordination and collaboration to take place? Are there any key steps that you have in mind for embarking on this?

Four key areas

Whittaker: I think that there are four key areas that need to occur for this collaboration to happen. Number one, senior executives need to be aligned to what the organization is trying to achieve. They need to articulate a common vision that accounts for the shared interest of both IT and line of business and make it clear that they expect collaboration. That should come at the top of the organization.

We need to get out of the smoke-stacked, completely siloed, organizational approaches and get to something that’s far, far more collaborative, and that needs to come from the top. The current approach is not acceptable. These groups need to work together. That’s a key component. If you don’t have buy-in at the top, it makes it really hard for this collaboration to occur.

Number two, IT needs to get its house in order. This means many things, but primarily it means overcoming the crisis of confidence that line of business has in IT by coming to the table with an approach that works for the business, something the business can align with, so that it feels IT is involved and buying into the future the business wants to head toward. IT needs to show that it has a plan that does not compromise the innovation the business needs.

IT absolutely can no longer just say no. That’s not an acceptable position. Certainly, if you look back, there were IT organizations that were saying, “No, we’re not going to connect to the Internet. It’s not secure. The answer is just going to be no.”

That didn’t work out for them, and it’s not going to work out here either. They should be embracing this shift. We shouldn’t perpetuate this cycle by driving more shadow IT and ultimately creating more work for IT down the road as inevitable problems start to emerge.
We shouldn’t perpetuate this cycle by driving more shadow IT and ultimately creating more work for IT down the road as inevitable problems start to emerge.

Number three, clear the air and put the executive plan in place. Tensions between IT and line of business have gotten to the point where they can’t be ignored any more. Put the stakeholders together in a room, air out the difficulties, and move forward with a clean slate. This is a tremendous opportunity to build a plan that meets both parties’ needs and allows them to start executing on something that’s really going to make a huge impact for the business.

Finally, the fourth point, seek solutions that emphasize collaboration between IT and the business. Many vendors today are encouraging groups to go rogue and operate in silos, and that’s causing a lot of the problem. At Dell, we’re much more about pushing a more collaborative approach. We think IT is terrific, but business has a point. They need innovation and they need IT to step up. And the business needs to embrace IT.

Instead of conflicting with each other and doing your own thing, back up your commitment to collaboration and utilize tools that empower it. That’s where we’re going to win, and that’s how business is going to succeed in the future.

This isn’t something that only the G20, the Fortune 500, or the Fortune 2000 can benefit from. This goes way down in the hierarchy, in the stack, certainly down to the small- and medium-sized business (SMB) level, and maybe even lower. If you’re a data-intensive small business, you probably need to start looking at big data and at what analytics-based approaches and data-driven decision-making opportunities exist within your organization, or you will be outmoded by organizations that do embrace them.

Cloud-based approach

More and more, particularly in the mid-market, we’re seeing organizations embrace a cloud-based approach. It's important to point out that that approach is fine and terrific. We love the cloud and we’re big proponents of it, but using a cloud-based solution doesn’t free line of business from the need to collaborate with IT. It will not eliminate this problem.

We’re seeing terrific IT departments and leadership starting to take a larger role, starting to ultimately become drivers of innovation. That’s really what we want to see. All businesses want the same thing. They want to find sustainable competitive advantages. They want to control spending. They want to reduce risk to the business.
And the most effective and efficient path to achieving all three is getting IT and the business aligned and allowing that collaboration to occur. That’s really at the crux of how businesses are going to gain competitive advantage out of technology in the future.

Embrace new technology

The big points are, embrace the new technology that’s coming out. The innovation is going to make your business far more successful, and your organization will prosper from these new innovations that will occur.

Number two, do it in a manner that is collaborative between IT and line of business. The CIO, the CMO, the CFO, the CEO, and the heads of all the functional departments, whether you’re in sales, marketing, finance, operations, wherever you are, should be aligning with their IT counterparts. It's the combined, collaborative approach that’s going to win the day.

And finally, this should really be driven top-down. Senior executives, this is an opportunity to get everybody on the same page to go after and leverage a pretty enormous opportunity before it becomes a huge problem. Let’s get out there right now. We’re still in the early days, but that doesn’t mean there’s not a lot to be gained. And ultimately, in the long-term, we’re going to have more successful organizations able to achieve even greater output through this collaboration and the leveraging of big data analytics.