Friday, April 29, 2016

Capgemini and HPE team up to foster needed behavioral change that bolsters cyber security across application lifecycles

The next BriefingsDirect discussion explores improving cyber security in applications across their entire lifecycles. Such trends as the Internet of Things (IoT), hybrid cloud services, mobile-first, and DevOps are increasing the demands and complexity of the overall development process.

Key factors to improving both development speed and security despite these new challenges include new levels of collaboration and communication across formerly disparate teams -- from those who design, to coders, to testers, and on to continuous monitoring throughout operations. The result is security being integrated into software design, even as the pressure builds to bring more apps to market faster.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

We're here now with two experts from a Capgemini and Hewlett Packard Enterprise (HPE) Alliance to learn how to create the culture, process, and technologies needed to make and keep today's applications as secure as possible.

Please join me now in welcoming our guests, Gopal Padinjaruveetil, Global Cyber Security Strategist for Capgemini, and Mark Painter, Security Evangelist at Hewlett Packard Enterprise. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let’s start with you Gopal. What do you see as some of the top trends that are driving the need for improved security for applications? It seems like we're in the age of "continuous everything" across the entire spectrum of applications.


Padinjaruveetil: Let me talk about a few trends with some data and focus on why application security is going to become more and more important as we move forward.

There's a report saying that there will be 50 billion connected devices by 2020. There was also a Cisco report that said that 92 percent of connected devices today are vulnerable. And there was an HPE study that came out last year that said that 80 percent of attacks are now happening at the application layer.
If you put together these three diverse data points coming from three different sources, we see that there will be 37 billion devices in 2020 that are deemed to be vulnerable. That's very interesting: 37 billion vulnerable devices in 2020. We need to change the way that we develop software.

Key trend

The other key trend that we're seeing is that agility is becoming a prime driver in application development, where the business would like to have functionality as early as possible. So the whole agile development methodology driving agility is becoming key, and that's posing some unique problems.

The other thing that we're seeing from a trend perspective is that apps and data are moving out of the enterprise landscape. So the concepts of mobile-first, free the data, free the app, and the cloud movement are major trends that affect application security and how applications are being developed and delivered.

The other trend is regulators. In many critical industries, regulations are becoming very strict in response to cyber crime and advanced actors. We're seeing nation-states and advanced actors coming into the game, and we're seeing advanced persistent threats becoming a reality. So that's driving another dimension of application security.

Last, but not least, is that we see a big shortage of cyber security talent in the market. Those are the trends that drive the need for a different look at application security, from a lifecycle approach.

Gardner: Mark, anything to offer in terms of trends that you are seeing from HPE, perhaps getting more involved with security earlier in the process?

Painter: Gopal gave a very good and very thorough answer and he was dead-on. As he said, 80 percent of attacks are aimed at the application layer. So it actually makes sense to try to prevent those vulnerabilities.

We propose that people implement application security during the development cycle, precisely because that's where you get the most bang for your buck. You need to do things across the entire lifecycle, and that includes even production, but if you can shift to the left and stop vulnerabilities as early as possible, then you save so much money in the long run in case you're attacked.

We do a study in conjunction with the Ponemon Institute every year, and since 2010, every year, it shows that attacks increase in frequency, they're harder to find, and they're also increasingly costlier to remediate. So it’s the right way to do it. You have to bake security in. You just can’t simply brush it on.

Gardner: And with the heightened importance of user experience and the need for moving business agility through more rapid iterations of software, is it intuitive to conclude that more rapid development makes it more challenging for security, or is there something about doing rapid iterations and doing security that somehow can go hand in hand, a continuous approach? Gopal, any thoughts?

Rapid development

Padinjaruveetil: There's a need for rapid applications, because we're seeing a lot of innovation coming, and we welcome that. But the challenge is, how do you do security in a rapid world?

There is no room for error. One of the things from a trend perspective is IoT. One of the things I tell my clients is that if you look at traditional IT, we're operating in a virtual world, purely a virtual world. But when you talk about things like operational technology (OT), we're talking about physical things, physical objects that we use in everyday life, like a car, your temperature monitors, or your heartbeat monitors. These are physical things.

When the physical world and the virtual world come together with IoT, that could have a very big impact on the physical layer or the physical objects that we use. For example, the safety of individuals, of community, of regions, of even countries can now be put in danger, and I think that is the key thing. Yes, we need to develop applications rapidly, but we need to develop them in a very secure way.

Gardner: So the more physical things that are connected, the more opportunity there is to go through that connection and somehow do bad things, nefarious activities. So in a sense, the vulnerability increases with the connectivity.

Padinjaruveetil: Absolutely. And that’s the fear, unless we change ways of developing software. There has to be a mindset change in how we develop, deploy, and deliver software in the new world.

Gardner: I suppose another element to this isn't just that bad things can happen, but that the data can be accessed. If we have more data at the edge, if we move computing resources out to the edge where the data is, if we have data centers more frequently in remote locations, this all means that data privacy and data access is also key.

How much of the data security is part of the overall application security equation, Gopal?

Padinjaruveetil: One of the things I ask is how you define an application, because we have different kinds of applications. You have web services and APIs. Even though those are headless, we would consider them applications, and applications without data have no meaning.

The application and the data are very closely tied to each other, and what's the value? There's no real advantage for a hacker just to have an application. They're coming after the data. The private data, sensitive data, or critical data about a client or a customer is what they're after.

You bring up a very good point that security and privacy are the key drivers when we are talking about applications. That is what people are trying to get at, whether it's intellectual property (IP) or whether it’s sensitive data, credit card data, or your health data. The application and the data are tied at the hip, and it’s important that we look at both as a single entity, rather than just looking at the application as a siloed concept.

Solving problems

Gardner: Let’s look a little bit at how we go about helping organizations approach these problems and solve them. What is it that HPE and Capgemini have done in teaming up to solve these problems? Maybe you could provide, Gopal, a brief history of how the app security alliance with these two organizations has come about?

Padinjaruveetil: Capgemini is a services company, and HPE has great security products that they bring to the market. So, very early on, we realized that there's a very good opportunity for us to partner, because we provide services and HPE provides great security products.

One of the key things, as we move into agility or into application development, is that many of the applications have millions of lines of code. These are huge applications, and it's difficult to do a manual assessment. So, automation in an agile world and in an application world becomes important. That's a key thing that HPE is enabling, automation of security through their security products and application space. We bring the services that sit on top of the products.

When I go and talk to my clients about the HPE and Capgemini partnership, I tell them that HPE is bringing a very tasty cake, and we're bringing a beautiful icing on top of the cake. Together, we have something really compelling for the user.

Gardner: Let’s go to Mark in describing that cake, I would imagine there are many layers. Maybe you could describe it for some of our listeners and readers who might not be that familiar with what those layers are. What are the major components of the transformation area around security that HPE is focused on?

Painter: At a high-level, what we're trying to do is expand the application security scope, and that basically includes three big buckets. Those are secure development, security testing, and then continuous monitoring and protection.

During the development phase, you need to build security in while the developers are coding, and for that specifically, we use a tool called DevInspect. It will actually show secure coding to a developer as he is typing his own code. That gets you much, much farther ahead of the game.

As far as security testing, there are two main forms. There is static, which is code analysis, not only for your own code, but open-source components and other things. In this day and age, you really are taking security into your own hands if you trust open-source components without testing them thoroughly. So, static gives you one perspective on application security.

Then there is also dynamic scanning, where you don’t have access to the code, and you actually attack the application just as the hacker would, so you get those dynamic results.

We have a platform that combines and correlates those results, so you reduce false positives and can trust the accuracy of your results to a much greater degree.
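To make the two testing perspectives concrete, here is a minimal, purely illustrative Python sketch (my own toy example, not output from any HPE product) showing what a static check and a dynamic probe each catch on the same deliberately vulnerable function:

```python
# Illustrative sketch only: the same vulnerable function seen from both perspectives.
import inspect
import re
import sqlite3

def find_user(conn, username):
    # Vulnerable on purpose: user input is concatenated straight into SQL.
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

# Static perspective: examine the source code itself for a risky pattern.
source = inspect.getsource(find_user)
if re.search(r"SELECT.*\+", source):
    print("static finding: SQL statement built by string concatenation")

# Dynamic perspective: probe the running code the way an attacker would.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
payload = "x' OR '1'='1"          # classic injection probe
rows = find_user(conn, payload)
if rows:                          # data came back despite a bogus username
    print("dynamic finding: injection payload returned data:", rows)
```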

Sustained frequency

We also provide services, but the whole thing is that you have to do this with sustained frequency. Maybe 10 years ago, there was a stage-gate approach, in which you tested at the end of the development cycle and released it. Well, that’s simply not good enough; you have to do this on a repeatable basis.

Some people would probably consider that the developmental lifecycle ends once the product is out there in the wild, but if anything, my experience in the security industry has taught me that software plus time equals vulnerability. You can’t stop your security efforts just because something has been released. You need that continuous monitoring and protection.

This is a new thing in application security, at least if you call something that’s almost a few years old "new." With something called App Defender, you can actually put an agent on the application server and it will block attacks in real time, which is a really good thing, because it’s not always convenient to patch your software.
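As a rough illustration of that runtime-blocking idea, here is a hypothetical in-process filter; it is not App Defender or its API, just a sketch of how a guard can reject obviously malicious input before the vulnerable code ever runs:

```python
# Hypothetical runtime-protection (RASP-style) sketch, for illustration only.
import re
from functools import wraps

ATTACK_SIGNATURES = [
    re.compile(r"<script", re.IGNORECASE),                 # crude XSS probe
    re.compile(r"'\s*or\s*'1'\s*=\s*'1", re.IGNORECASE),   # crude SQL injection probe
]

class BlockedRequest(Exception):
    pass

def runtime_protect(handler):
    """Wrap a handler so matching input is blocked instead of executed against."""
    @wraps(handler)
    def guarded(user_input):
        for signature in ATTACK_SIGNATURES:
            if signature.search(user_input):
                raise BlockedRequest(f"input matched {signature.pattern!r}")
        return handler(user_input)
    return guarded

@runtime_protect
def search(term):
    return f"results for {term}"

print(search("laptops"))                          # normal traffic passes through
try:
    search("<script>alert(1)</script>")
except BlockedRequest as reason:
    print("attack blocked in real time:", reason)
```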

At HPE, we offer a combination of products that you can use yourself and we also offer hybrid solutions, because there's no such thing as one-size-fits-all in any environment.
We also offer expertise. Gopal was talking earlier about the lack of qualified candidates, and Forbes has predicted that, by 2019, a full quarter of cyber security jobs are going to be unfilled. Organizations need to be able to rely on technology, but they also need to be able to find experts and expertise when they need it. We do a lot at HPE; I will leave it at that.

Gardner: Gopal, how do these products, these layers in the cake, help with the shifting-left concept, where we move more concern about vulnerability and security deeper into the design, earlier into the coding and development process? Where do the products help with shifting left?

Padinjaruveetil: That's a great question. If you decompose or analyze application security as layers of that cake, security vulnerabilities in applications come from three specific areas. One is what I call design flaws, where the application itself is designed in a flawed manner that opens up vulnerabilities. So a bad design, in itself, causes security vulnerabilities.

The second area is coding flaws. Take an Apple iPhone or something like that. If you compare the design of an iPhone with the actual end product, there is a very close match. A lot of problems we have in the software industry arise because there is a high level of mismatch between the design and the actual product as coded.

Software is coded by developers, and if the developers aren't writing good code, there's a high possibility that vulnerabilities are introduced because of poor coding.

Configuration parameters

The third area is that the application isn't running in a vacuum. It's running on app servers and database servers, and it's going through multiple layers. There are a lot of configuration parameters, and if these configuration parameters are not set correctly, that leads to open vulnerabilities.
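Here is a small sketch of what checking that third layer can look like in practice; the setting names and values below are invented examples, not any product's real configuration schema:

```python
# Toy configuration audit: compare deployed settings against safe values.
SAFE_VALUES = {
    "directory_listing": False,             # exposed listings invite browsing/traversal
    "debug_mode": False,                    # verbose errors leak internals in production
    "default_admin_password_changed": True,
    "tls_enabled": True,
}

def audit_config(deployed):
    """Return a finding wherever a deployed value differs from the safe value."""
    findings = []
    for setting, safe in SAFE_VALUES.items():
        actual = deployed.get(setting)
        if actual != safe:
            findings.append(f"{setting}: expected {safe!r}, found {actual!r}")
    return findings

deployed = {
    "directory_listing": True,
    "debug_mode": True,
    "default_admin_password_changed": True,
    "tls_enabled": True,
}
for finding in audit_config(deployed):
    print("config finding:", finding)
```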

From a product perspective, HPE has great products that detect coding flaws. Mark talked about DevInspect, and there are great tools from a dynamic, or hacking, perspective as well. There are great tools to look at all three of these layers: design flaws, configuration flaws, and coding flaws.

As a security expert, I see great scope for tooling around design flaws, because right now we're talking about threat modeling and risk determination, and detecting a design flaw requires a high level of human intelligence. I'm sure that in the future there will be products that can detect design flaws, but when it comes to coding flaws, these tools can detect a coding flaw with 99 percent accuracy. So we've seen very good maturity in the application security area with the different products that Mark mentioned.

Gardner: Another part of the process for development isn't just coding, but pulling together components that have already been coded: services, SDKs, APIs, vast libraries, often in an open-source environment. Is there a way for the alliance between Capgemini and HPE to give some assurance as to what libraries or code have already been vetted, that may have already been put through the proper paces? How does the open-source environment provide a challenge, or maybe even a benefit, when done properly, to allow reuse of code and this componentized approach to development?

Padinjaruveetil: That's a great point, because most modern applications are not standalone applications. They talk with other applications. They get data from other applications, through a Web service interface, a REST API, or open-source components.

For example, if you want to do login, there are open-source login frameworks available. If there are things that are available, we'd like to use them, but just like custom code, open source is also vulnerable. There are vulnerabilities in open source.

Vulnerability can come from multiple things in an application. It can be caused by an API. It can be caused by an integration point, like a Web service or any other integration point. It can be caused by the device itself, when you're talking about mobile and all those things. Understanding that is a very critical aspect when we're talking about application security.

Gardner: Mark, anything to offer on this topic of open source and/or vetting code that’s available for developers to then use in their applications?

Painter: Well, it’s not an application, but it’s a good example. The Shellshock vulnerability was due to something wrong with the code of an open-source component, and that’s still impacting servers around the world. You can’t trust anybody else’s code.

There are so many different flavors of open-source components. Red Hat obviously is going to be a little better than your mom-and-pop development team, but vetting has to be an integrated part of your process, for certain.

Cyber risk report

There is something Gopal was saying. We do a cyber risk report every year at HPE, and one of the things we do is test thousands and thousands of applications. In last year's results, the biggest application flaws we found were basically configuration flaws. You could get to different directories than you should be able to.

Application security is not easy. If it were easy, we wouldn't still be seeing cross-site scripting vulnerabilities, which have been around almost as long as the web itself. There are a lot of different components in place. It's a complex problem.
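For readers who want to see why that particular flaw refuses to die, here is a minimal sketch of the vulnerable pattern and the output-encoding fix; in real applications an auto-escaping template engine should do this rather than hand-rolled string building:

```python
# Minimal cross-site scripting illustration: untrusted input rendered into HTML,
# first unescaped (vulnerable) and then escaped (neutralized).
import html

def render_comment_unsafe(comment):
    # User input dropped straight into markup: a <script> payload executes in the browser.
    return "<div class='comment'>" + comment + "</div>"

def render_comment_safe(comment):
    # Output encoding turns the payload into inert text.
    return "<div class='comment'>" + html.escape(comment) + "</div>"

payload = "<script>steal(document.cookie)</script>"
print(render_comment_unsafe(payload))   # script tag survives intact
print(render_comment_safe(payload))     # rendered as &lt;script&gt;... and not executed
```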

Gardner: So it’s important to go to partners and tried and true processes to make sure you don’t fall down into some of these holes. Let’s move on to another area, which is also quite important and difficult and challenging. That is the cultural shift, behavioral changes that are forced when a shift left happens, when you're asking people in a traditional design environment to think about security, operations, configuration management, and business-service management.

Gopal, what are some of the challenges to promulgating cultural and behavioral changes that are needed in order to make a continuous application security culture possible?

Padinjaruveetil: That's a key aspect, because most application development is happening in distributed teams, and things are being assembled. There are different teams building different things, and you're putting together the final application product and deploying it.

Many companies have now started talking about security policies and security standards, whether it’s Java development standards or .NET development. So, there are very good industry standards coming out, but the challenge is that having a policy or standard alone is not sufficient.

What I tell my clients is that any compliance without enforcement is ineffective. The example that I give is that we have traffic laws in India. If you've been to India and you look at the traffic situation there, it’s chaotic. Here, you see radar detection and automated detection of speed and things like that. So enforcement is a key area even in software development. It’s not enough to just have standards; you need to have enforcement.

The second thing I talk about is that compliance without consequence will not bring the right behavior. For example, if you get caught by a cop and he says, "Don’t do this again; I'll let you go," you're not going to change your behavior. If there's a consequence, many times that makes people change behaviors.

We need to have some kind of discipline and compliance brought into the application development space. One of the things that I did for a major client was what I call zero tolerance. If you develop an application and we find a vulnerability in it, we won't allow you to deploy it. We have zero tolerance for putting up unsecured code when we use one of these great products that HPE has.

Once we find a critical or high issue that's been reported, we won't let you deploy. Over a period of time, this caused a real behavioral change, because when you stop production, it has impact. It gets noticed at a very high level. People start questioning why this deployment didn't go.
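A deployment gate like that can be a few lines in the pipeline. The sketch below assumes a made-up JSON findings format; a real gate would parse whatever report the scanner in use actually produces:

```python
# Sketch of a zero-tolerance deployment gate: exit nonzero (and so stop the pipeline)
# if the scan report contains any critical or high finding. Report format is invented.
import json
import sys

BLOCKING = {"critical", "high"}

def gate(report_path):
    with open(report_path) as report_file:
        findings = json.load(report_file)   # expected: list of {"id": ..., "severity": ...}
    blockers = [item for item in findings
                if item.get("severity", "").lower() in BLOCKING]
    for item in blockers:
        print(f"BLOCKING: {item['id']} ({item['severity']})")
    return 1 if blockers else 0             # nonzero exit code fails the deployment step

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```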

Huge change

Slowly, over a period of time, because of this compliance and this enforcement with consequences, we saw a huge change in behavior across the entire team: business analysts making sure that they get the security non-functional requirements correct, project managers making sure that the project teams are addressing them, architects making sure the applications are designed correctly, and testers making sure that the testing is correct. When it goes into an independent audit or something like that, the application comes out clean.

It’s not enough if you just have standards; you need to have some kind of enforcement with that.

Gardner: Mark, in order to have that sort of enforcement you need to have visibility and measurement. It seems to me that there's a lot more data gathering going on across this entire application lifecycle. And big data or analytics that we have in other areas are being brought into this fold.

Is there something about automation, orchestration, and data analytics that are part and parcel of the HPE products that could help on this behavioral shift by measuring, verifying, and then demonstrating where things are good or not so good?

Painter: One thing HPE emphasizes is building security in through secure coding, but we also talk about detect and respond. We have an application product that integrates with ArcSight, our security and monitoring tool.

So you can actually get application information. Applications have been a typical blind spot for Security Information and Event Management (SIEM) tools, and you can actually get some of those results you are talking about from our SIEM technology, which is really cool.

Over the past 10 years in the security industry, we've changed from the idea that we're going to block every attack to one that says the attackers are already inside your network. This is part of that detection. Maybe you didn't find these vulnerabilities earlier; in other words, you can see active exploitation, and then you can track it down and stop it that way.
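As a sketch of what "application information" flowing into a SIEM can look like, here is an event built in CEF, the Common Event Format that ArcSight consumes; the vendor, product, and field values below are invented, and in practice the event would be shipped through a syslog or connector pipeline rather than printed:

```python
# Build an application-layer security event in CEF (Common Event Format).
# Field values are invented examples; transport is left to a syslog/connector setup.
def cef_event(signature_id, name, severity, **extensions):
    ext = " ".join(f"{key}={value}" for key, value in extensions.items())
    # CEF:Version|Device Vendor|Device Product|Device Version|Signature ID|Name|Severity|Extension
    return f"CEF:0|ExampleCorp|ExampleApp|1.0|{signature_id}|{name}|{severity}|{ext}"

event = cef_event(
    "APP-XSS-001", "Reflected XSS attempt blocked", 7,
    src="10.1.2.3", requestURL="/search", msg="payload rejected by input filter",
)
print(event)  # hand this line to the SIEM transport of your choice
```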

Fifteen years ago, you had to convince people that they needed application security. You don't have to do that now. They know they need it, but they just might not exactly know what they need to do.

It's all about making this an opportunity for them to get security right, instead of viewing it as a conflict between the need for speed, agile development, and rapid releases on one side, and the enterprise's need to be secure on the other: protecting itself from potential data breaches, potential data loss, compliance issues, and now legal challenges from individual actors, all the way down the line.

Gardner: Gopal, before we close out, let’s look to the future a little bit. What comes next? Do you expect to see more use of data, measurement, and analytics, a science of development, if you will, to help with security issues, perhaps feedback loops that extend from development into production and back? How important do you think this use of more data and analytics will be to the improved automation and overall security posture of these applications?

Continuous improvement

Padinjaruveetil: You need to have data and you need to have measurements to make improvements. We want continuous improvement, but you can't manage unless you measure. So we need to determine the systemic issues in application development, the issues that we see constantly coming up.

For example, if you're seeing cross-site scripting as a consistent vulnerability coming up across multiple development teams, we need some way of seeing patterns in the data and looking at how to reduce these major systemic errors or vulnerabilities in our systems.

You will see more and more data collection, data measurement, and advanced methods applied to look at not just the vulnerability aspect, but also the behavioral aspect. That's something we're not doing today, but I see a huge change coming where we're actually going to see the behavioral aspects being tracked with data in the application lifecycle model.
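Here is a tiny sketch of that kind of measurement, using fabricated sample findings, to show how patterns (a systemic cross-site scripting problem, say) fall out of even simple aggregation:

```python
# Aggregate scan findings across teams to surface systemic weaknesses (sample data).
from collections import Counter

findings = [
    {"team": "payments",   "category": "xss"},
    {"team": "payments",   "category": "sql_injection"},
    {"team": "mobile",     "category": "xss"},
    {"team": "mobile",     "category": "weak_crypto"},
    {"team": "web-portal", "category": "xss"},
]

counts = Counter(item["category"] for item in findings)
teams_hit = {category: {item["team"] for item in findings if item["category"] == category}
             for category in counts}

for category, count in counts.most_common():
    label = "systemic" if len(teams_hit[category]) > 1 else "isolated"
    print(f"{category}: {count} finding(s) across {len(teams_hit[category])} team(s) [{label}]")
```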

Gardner: Another thing to be mindful of is getting ready for IoT with many more devices, endpoints, sensors, biological sensors. All of this is going to be something coming in the next few years.

How about revisiting the skills issue before we sign off? What can organizations do about maintaining the right skill sets and attracting the right workers and professionals, while also looking at all the options within an ecosystem, like the alliance between HPE and Capgemini? How do you see the skills problem shaking out over the next several years, Gopal?

Padinjaruveetil: If you look at many of the compliance frameworks, like NIST or ISO 27001, there's a big emphasis on controls being put in place for security awareness and education. We're seeing a big drive for security education across the whole organization.

Then, we're seeing tools like DevInspect. When a developer writes bad code, the tool gives instant feedback that you have just written insecure code, instead of waiting three or four months and then doing a test. We're seeing how these tools are driving change.

So, we're seeing tools like DevInspect helping developers to actually make themselves better code writers.

Painter: Developers are not natural security experts. They need help.

Padinjaruveetil: Yeah, absolutely.

Additional resources

Gardner: That was my last question to you, Mark. Can you suggest places that people can go for resources or how can they start to prepare themselves better for a number of the issues that we have discussed today?

Painter: It's almost on an individual basis. There are plenty of resources on the Internet, and we provide training as well. Web application security testing is actually one of the best places for organizations to leverage Capgemini.

The job crunch is the number one concern that enterprises have right now when it comes to security in the enterprise. There's a lack of qualified applicants, which says a lot when that's a bigger concern than a data breach. We do a State of the SOC survey every year, and that was the result from the last one, which was a little surprising.
But apart from outsourcing, you need to find those developers who have an interest in security in your organization, and you need to enable them to learn that and get better, because that’s who is going to be your security person in the future, and that’s a lot cheaper and a lot more cost-effective than going out and hiring an expert.

I know one thing, and it’s a good thing. I tell my boss repeatedly that if you have good security people, you're going to have to pay them to keep them. That’s just the state of the market as it is now. So you have to leverage that and you have to rely on automation, but  even with automation, you're still going to need that expert.

We are not yet at the point where you can just click a button and get a report. You still need somebody to look at it, and if you have interesting results, then you need that person who can go and examine those. It’s the 80/20 rule. You need that person who can go to the last 20 percent. You're going to have automation, tools, and what have you to get to that first 80 percent, but you still need that 20 percent at the end.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Wednesday, April 27, 2016

How efficient cloud networks help the smallest companies do brisk business with the largest

The next BriefingsDirect technology innovation thought leadership discussion examines new ways for small businesses to make and manage the connections that matter to them most using cloud-based networks to bring intelligent buying and digital business benefits to any type of company.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about the business of doing more commerce in the digital economy using cloud-based networks, please join me in welcoming Bob Rosenthal, Chairman and CEO of JP Promotional Products, Inc. in Ossining, New York, and Anne Kramer, CEO at Ergo Works, Inc. in Palo Alto, California. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Bob, what is JP Promotional Products? What do you do?

Rosenthal: JP Promotional Products is a distributor of imprinted promotional products. Anything you can put an imprint or logo on. We've had this company with my daughter for about 12 years now and we sell to small companies, large companies, anyone who buys promotional products from us.

Gardner: Why is being digital, being on business networks, an important part of the way you find new clients?

Rosenthal: What this has given us is the ability to find the size of client that we could not ordinarily find. We're getting into large corporations, and it is very difficult for a small company to get access to a large entity. Being on a network like SAP Ariba and leveraging services like Ariba Discovery has gotten us into some of these very large corporations.

Gardner: Anne, tell us about Ergo Works.


Kramer: Ergo Works is a small, woman-owned company based in Palo Alto, California. We're a full-service ergonomics company. We offer workstation evaluations and consulting, a complete line of ergonomic furniture, accessories and computer peripherals, as well as installation services. So, I would call it solution selling.

Gardner: And do you also share Bob’s challenge of trying to be seen and heard in a busy world, by big companies that perhaps don’t know about small vendors?

Kramer: Absolutely. It’s a challenge to get an audience with this group. They generally have established vendors, and trying to knock down those doors is challenging at best.

Gardner: We know that many of the buyers of goods are looking for increased automation. They're looking for intelligence in that network and the partnerships and ecosystem that they play in. So they want to find people like you who have goods and services for them. What was it that you had to do in order to be seen, heard, and recognized among them?

Rosenthal: We joined Ariba Discovery, and that gave us the ability to search for leads as well as respond to matched leads. As a matter of fact, one of the first ones I got was about a half an hour after I paid for Ariba Discovery. It was a Fortune 100 company. They were looking for a thousand pairs of imprinted socks, something I knew we could do. It was a no-brainer. We established our relationship with the procurement manager. They never bought the socks, but we have a relationship now, and without Ariba Discovery, there was no way we could have done that.

Gardner: And is geography a barrier for you or you can do business with anyone, anywhere?

Rosenthal: We can do business with anyone, anywhere. The bulk of it is in the Continental US. We can ship to England or Canada and we do bring some product in from China as well.

Gardner: And for you, Anne, tell us about what you needed to do in order to find clients.

Challenge of growing

Kramer: We're located in Palo Alto, which is ground zero in Silicon Valley for ergonomics. So, we are well poised in that regard. Nonetheless, the challenge of growing a small business is ever present.

One way that we've overcome that is to participate in online marketplaces. Specifically, what we're excited about now, and why I'm here today, is the Ariba Spot Buy Program. This is going to give us direct access to large companies that have been challenging for us to get into. It's an exciting opportunity. Unlike other marketplaces that are geared to one-off end users, Ariba is geared toward large corporations, so we're very excited.

Gardner: Can you give us a bit more about background and understanding of Ariba Spot Buy? These are not the usual contracts that are ongoing and repeatable, but are instances where there is a need, an ad-hoc need perhaps, in a large organization. A purchasing department has been tasked with doing this or maybe people directly in the company have got the authority to find and buy things on their own.

Kramer: That’s very well put. For example, we are currently an Ariba supplier with  several clients and we offer a static catalog. We often provide or make recommendations for products that are off catalog, and Ariba Spot Buy allows companies to buy products from vendors that they don’t currently have a contractual relationship with.

The niche that we're in is a relatively small niche. So it may not warrant a company wanting to put together a catalog. This is an opportunity for them to buy these products, yet stay compliant within the Ariba ecosystem.

Gardner: Now, of course, a big approach to finding things nowadays is through search on the web, and having a good website and getting good rankings on the search engines is a big part of that. But it strikes me that you're small, so you're not going to get the kind of traffic on your website that might elevate you in those search results, and your offerings are also highly customized. So you're not just putting a big billboard up on the Internet, so to speak, and saying, here we are.

You're offering custom types of things, with promotional products in your case, Bob, and you probably want to hear a lot about each customer and tailor your services to them. How do you overcome the challenge of not being able to put a billboard up on the Internet, but also maintain the advantage of having highly customized products, Bob?

Rosenthal: Our own website has hundreds of thousands of items on it. It’s an industry-based website. If you're searching for almost any product, you'll find it on our site.

In terms of how we got people to our site, we did invest some money a few years ago. We decided to go with what’s called Local Search. We put money into being on the first page in New York State, the Tri-State area, and that’s gotten us a few large accounts.

What we're looking for in Ariba Spot Buy is to bring in more business because a lot of our products are last minute. Someone will remember at the last minute, "Oh, I'm doing a trade show next week; I need a thousand widgets to give away. I forgot to buy them. I don't want to go through a contract." That's where I think Ariba Spot Buy will help us because we can deliver products in 24 hours if we have to.

Network advantage

Gardner: So there is an advantage to being in a business network versus just the worldwide wild web?

Rosenthal: Right. What that gets us is more targeted corporations, hopefully larger entities. Where a small corporation might buy 100 pieces, the big corporation is going to buy thousands of pieces. That’s why we've joined Ariba Discovery and are looking at Ariba Spot Buy.

Gardner: And I suppose, as someone in a selling position, you're also getting a lot more information about who you're selling to, given that they're in the network and you can see and access more about what they're looking for?

Rosenthal: That's true, and where that helps is that we tend to add a lot of creativity to it. If we know who you are and what you do, we can make recommendations for certain kinds of products. If you're a tractor company exhibiting at a show, maybe we'll suggest a squeeze toy in the shape of a tractor. Knowing who you are and where you are helps us with our creativity in suggesting products.

Gardner: And for you, Anne, in the same vein, trying to be seen, heard, and understood in the Worldwide Web is perhaps a bit more daunting than on a business network. How do you overcome that need to customize and tailor your goods and services?

Kramer: Certain products lend themselves more to selling on the web than others, and the same goes for online marketplaces. The visibility with Ariba Spot Buy will give us the opportunity to interact with our customers to offer them custom products and get into project-based opportunities.

Gardner: We're also seeing from SAP Ariba the desire to bring more collaboration embedded and automated into these applications and services. Also, with Guided Buying, they're allowing the sellers to be part of an intelligence network, so that buyers can be led through the process and automation can be brought to bear. How do these new technological advantages affect you as a small businesses particularly, Anne?

Kramer: Technology helps us with new ways to bring our products to market and expose our offerings to a larger audience. That’s really the biggest benefit. 

In addition, it helps us to expand our current relationships with our Ariba buyers. They can now buy off-catalog, which is a win-win. Technology also impacts the products that we sell. As technology changes, the products change in response, whether it's the latest mouse design or the material that a wrist rest is covered in, maybe anti-microbial, for instance. So technology has a huge impact on both the direct and indirect parts of our business.

Running the business

Gardner: Of course, it's important for small businesses to have visibility into cash flow, when to expect payments, and how to bill accurately and appropriately. Any thoughts, Bob, on how this business network for you also adds to your own ability to run your business properly?

Rosenthal: In terms of technology, the biggest issue with us is the logo. Anyone can say they want a Bic pen. Where the technology should help us is in getting the art files from one point to the other, and in knowing, as far as things like cash flow, who we're dealing with and that it's a large corporation. Some use POs, some don't, for these types of buys. It gives me more comfort that we are going to get paid.

It's difficult to ask General Motors for a deposit for a $1,000 order, but we might ask the insurance broker down the street for that. So that comfort level of knowing we should be paid on a certain date is a big advantage.

Gardner: Anne, the same thing. Business visibility is important. Is there something about a business-network approach that's beneficial to you in being able to run your business well?

Kramer: Well, specifically what I am excited about with Ariba Spot Buy is that all the purchases are made using a credit card, which we love because it helps us control our cash flow. We don't have to go chasing after past-due invoices, and that time can be better spent selling more products. We love the fact that it's all credit-card based.

Gardner: Are there any specific examples of actual customers that you found through the Ariba Discovery process in this online marketplace that would illustrate some of these points? You don't have to name them necessarily, but maybe walk us through how it's worked and how that's different from the other approaches that you've used to find customers, Bob?

Rosenthal: Well, the big account that we got, which I can't name, has turned into a huge account for us. We've established a relationship with the procurement people, and I think that relationship has built this business with them over the last 18 months, because they have a confidence level in us, and we are confident in them that, a) we're going to get paid, and paid on time, and b) it's a continuing relationship.

We do a lot of one-offs. We get a hit on our website, I need something tomorrow, can you get it? We never hear from the people again but we get an order, which is great; we do a lot of that. But we also try and establish relationships and that's what we get out of Discovery so far.

Gardner: As a small-business person myself, I know that you don't want to push that rock up the hill every month. You want to have the recurring dependable revenue; it's super important, right?

Kramer: Right. Ariba Spot Buy is an opportunity for ongoing and repeat business from companies participating in this technology.

Gardner: But this allows you to get the best of both worlds, where you can discover and find new, interesting clients, but you can also maintain a steady flow from your installed base.

Kramer: That's right. This technology offers us an opportunity to engage new corporate customers and get paid quickly with credit card payments.

Gardner: Thank you, Bob, and if people want to learn more about your organization, how might they do that?

Rosenthal: Our website is www.jppromoproducts.com or feel free to call us at 1-800-920-3451.

Gardner: Anne, how could organizations learn more about your company?

Kramer: They could go to our website at www.askergoworks.com or our toll free number 866-ASK-ERGO.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.


Thursday, April 21, 2016

Intralinks uses hybrid cloud to blaze a compliance trail across the regulatory mine field of data sovereignty

The next BriefingsDirect hybrid computing case study discussion explores how regulations around data sovereignty are forcing enterprises to consider new approaches to data location, intellectual property, and cloud collaboration services.

As organizations move beyond their on-premises data centers, regulation and data sovereignty issues have become as important as the technical requirements for their cloud infrastructure and applications.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn how organizations have been able to get the best of data control and protection -- along with business agility -- from hybrid cloud models, we're joined by Richard Anstey, CTO at Intralinks, who is based in London. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about the trends that make data sovereignty so important as a consideration when organizations look at how and where to manage, house, and store their data.

Anstey: This is becoming a much more important topic. It has obviously been in the news very much recently in association with the Safe Harbor regulation having been effectively annulled by the European courts.

This is the regulators catching up with the Internet. The Internet has been somewhat unregulated for a long time, and quite rightly, the national and regional authorities are putting in place the right protections to ensure that citizens’ data are looked after and treated with the respect they deserve.

So it's becoming more important for companies to understand the regulatory environment, even those organizations that did not previously feel that they were subject to such regulation.

Gardner: So the pendulum seems to have swung from the Wild West Internet toward greater security oversight.  Do we expect more laws across more jurisdictions to make placement of data more restricted? Are we seeing this pendulum swing more toward regulation?

Anstey: Yes, it’s certainly swinging that way, and the big one for the European Region of course is the General Data Protection Regulation (GDPR), which is the European Commission initiative to unify the regulations, at least across the European Union. But the pendulum is swinging toward a greater level of regulation.

Gardner: How about in Asia-Pacific (APAC) and North America, what’s happening there?

Global issue

Anstey: Post-Snowden, this has become much more of an issue globally, and certainly across APAC there have been some very specific regulations in place for some time, the Singapore Banking Authority being the famous one. But globally this is becoming much more of an important issue for companies to be aware of.

Gardner: So while the regulatory atmosphere is becoming more important for companies to keep track of, it's also more onerous for them as businesses to comply with. The Internet is still a very powerful tool, and people want to take advantage of cloud models and compliant data lifecycle models. Tell us about Intralinks, and about how organizations can have the best of both protected data and cloud models.

Anstey: Intralinks is in the fortunate position of having been offering cloud services in highly regulated environments for almost 20 years now. Back when we were founded, which by the way was really before most people would do their shopping online, Intralinks was operating things called Virtual Data Rooms to facilitate very high value, market-moving transactions through effectively a cloud service. We didn’t call it cloud at that time; we called it software as a service (SaaS).
But Intralinks has come from this environment. We've always been operating in highly regulated environments, and so we're able to bring that expertise that we have built up over the last 20 years or so to bear on solving this problem for a wider range of organizations as the regulation really steps in to control a greater part of the services delivered over the Internet today.

Gardner: In a nutshell, how is it that you're able to do, in a highly regulated environment, what people think of as putting everything in a cloud?

Anstey: Well, in a nutshell, it may be tricky, because there's a lot to it. There's a lot of technology that goes into this, and there are a lot of dimensions around which you need to consider this problem. It's not just about the physical location of data. Although that may be important, there are other dimensions. Physical location may be one thing to think about, but there's another thing called logical location.

The logical location is defined as the location of the control point of the encryption, as opposed to the location of highly encrypted data, which many people would argue is somewhat irrelevant. If it's sufficiently encrypted, it doesn't matter where it is. The location of the key, and who controls that key, is actually more important than where your encrypted data lives.

In fact, we all implicitly accept that principle. When you use your online bank, you don't know the route that that information takes between your home computer and the bank. It may well be routed across the Atlantic, based on conditions of the Internet. You just don't know, and yet we implicitly accept that because it's encrypted in transit, it doesn't really matter what route it takes.
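A minimal sketch of that principle, assuming the third-party Python 'cryptography' package and a made-up stand-in for remote storage: the key (the control point) never leaves the local jurisdiction, while the ciphertext can sit anywhere:

```python
# Keep the encryption key (the logical control point) local; let the ciphertext travel.
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # generated and held inside the controlling jurisdiction
control_point = Fernet(key)

document = b"market-moving deal terms"
ciphertext = control_point.encrypt(document)

def store_in_remote_region(blob):
    """Hypothetical stand-in for writing to an object store in another region."""
    return blob

stored = store_in_remote_region(ciphertext)

# Without the key, the stored blob's physical location matters far less;
# only the key holder can recover the plaintext.
assert control_point.decrypt(stored) == document
print("decrypted locally:", control_point.decrypt(stored).decode())
```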

So there is the physical location and the logical location, but there is still also the legal location, which might be to what jurisdiction this information pertains. Perhaps it pertains to a citizen of a certain country, and so there is a legal location angle to consider.

And there is also a political location to consider, which may be, for example, the jurisdiction under which the service provider is operating and where the headquarters of that service provider is.

Four dimensions

There are four dimensions already, but there is another one as well, which is the time dimension. While it may be suitable for you to share information with a third party in perhaps a different jurisdiction for a period of time, the moment that business agreement comes to an end, or perhaps the purpose or the project for which that information was being used has come to an end, you also need to be able to clear it up.

You need to tidy up and remove those things over time and make sure that just because that particular information-sharing activity was valid at one point, it doesn't mean that that’s true forever, and so you need to take the responsibility to clear it up. So there are technologies that you can bring to bear to make that happen as well.

Gardner: It sounds as if there is a full spectrum, a marketplace, of different solutions and approaches to suit whatever particular issues an organization needs in order to satisfy the regulatory, audit, and other security requirements.

Tell us about how you have been working with HPE to increase this marketplace and solve data sovereignty issues as they become more prominent in more places.

Anstey: The thing that HPE really helps us with is the fact that while we've been able for quite a long time to have data centers in multiple regions -- as the regulation and the requirements of our customers grow -- we need to be even more agile with bringing new workloads up and running in different locations.

With HPE Helion OpenStack we're able to spin up a new environment -- a new data center perhaps, or a new service -- to run in a new location far more quickly and more cost effectively than we would otherwise be able to if we were starting from the ground-up.
Gardner: So it's important to not just be able to take advantage of cloud conceptually, but to be able to move those cloud data centers, have the fungibility, if you will, of a cloud infrastructure, a standardized approach that can be accepted in many different data-center locations, many different jurisdictions.

Is that the case, and what can we expect for the depth and reach of your services? Are you truly global?

Anstey: We are certainly truly global. We've been operating right across the world for a number of years now. The key elements that we require from this infrastructure are things like workload portability and the ability to plug into additional service providers at any time, so that we can create a truly distributed platform.

In order to do that, you need some kind of cloud operating system, and that's what we feel we get from the HPE Helion OpenStack technology. It means that we have become much more portable to move our services around whenever we need to.

Gardner: When you're an organization and you know that there's that data portability, that there's a true global footprint for your data that you can comply with the regulations, what does that do for you as a business?

How does this, from a business perspective, benefit your bottom line? How does it translate into business terms?

Enormous uncertainty

Anstey: The key thing to realize is that there has been an enormous amount of uncertainty, and in a way, the annulment of the Safe Harbor agreement has been a good thing, in that there was always some doubt over its applicability and its suitability. If you'll forgive the pun, there was a cloud hanging over it. When you remove that, you at least get a little bit more certainty: "Well, that thing definitely doesn't work, so we need to have a different structure."

Nevertheless, what happens in that environment of uncertainty is that people start to play it safe and they start to think, "This cloud thing is a bit scary. Maybe we should just do it all ourselves, or maybe we should only consider private cloud deployments." When you do that, you cut off the huge options and agility that's available from using the cloud to its full extent.

What would be a bad thing is if, as the pendulum swings, as you described, toward regulation, people retreat and give up and say, "This Internet thing, we don’t want to do that. We're going to reverse the trends and the huge technological advances that we've been able to leverage over the last 10 years of growth of cloud."

We believe that by building technology in the way that we are able to construct it, with all of those options associated with ways in which you can demonstrably prove that you are responsibly looking after data over time, you don't have to sacrifice the agility of the cloud in order to adhere to the regulations as they come in.

Gardner: We've talked about data sovereignty from a geographic perspective, but how about vertical industries? Are there certain industries that require that global reach, but also need to be highly regulated?

Anstey: The vast majority of the global banks are our customers already. We also have a very large footprint in the life sciences, which often has a similar nature in terms of the level of regulation, especially if you're dealing with patient data in the field of clinical trials, for example.

But the reality is that, as this pendulum swings, the net is cast wider and wider for the regulation, to the point where any company that deals with personal data and needs to use that data for legitimate business purposes will now be covered by regulation. This isn't just guidance now.
When we get through to the next level of EU regulation, there are some serious fines, including criminal penalties for executives and fines of up to two percent of global revenue, which really makes people wake up. It will make a far wider group of companies wake up than the previous ones who knew that they were operating in a strict regulatory framework.

Gardner: So in other words, this probably is going to pertain to many more industries than they may have thought. This is really something that’s going to hit home for just about everybody.

Anstey: Absolutely. Every industry becomes a regulated industry at that point, when to do business you need to handle the type of data that gets covered by the regulation, especially if you are operating in the EU, but as we described, with more to follow.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


 

Wednesday, April 20, 2016

ITSM automation and intelligence gains deliver self-service help to more users

The next BriefingsDirect IT support thought leadership discussion highlights how automation, self-service and big data analytics are combining to allow IT help desks to do more for less.

We'll learn how automation and ITSM-driven insights endow help desk personnel with more knowledge and provide a single point of support for end users, regardless of their needs while still catering to their preferred method of help.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to share the latest on how IT support is advancing in the era of bring your own device (BYOD), cloud, and tight budgets, are three experts, David Blackeby, Program Solution Owner for Cloud Services at Sopra Steria, based in the UK; Diana Wosik, Group Program Manager at Sopra Steria, based in Poland, and Mark Laird, Group Technical Architect at Sopra Steria, based in the UK. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let’s start at a high level and talk about how support has changed, and why enabling self-service is so important nowadays. Mark, why is self-service such an important issue when it comes to IT help desk?

Laird: For us, there are probably a number of issues. We have a range across our customer base, from millennials, who are used to dealing with websites, mobile, tablets, who really don’t want to call a call center, and don’t want to end up talking to somebody on the phone, through to the legacy users who are much more used to picking up the phone, asking for help, and talking through a problem.

So they're looking for a more human approach, human interaction, versus the millennials who want to fix it themselves, want to do it quickly, and really don’t want to talk to somebody about it. That’s introducing a range of problems and challenges.

Gardner: It sounds as if you need to deliver support in a spectrum of ways, but perhaps with a common core to that support function.

Underlying answer

Laird: The underlying answer to the problem, whatever the problem is, is likely to be the same. If you have a log-on issue, it will be a password reset or an account issue. It’s how you get that information out to the person who has the challenge.

If it’s a person on the phone, it's easy enough to talk them through it. But if you have somebody who is coming through a self-service portal, you have to provide them with that same information. So yes, at times, you connect a single call to a single database and send your knowledge environment out to a range of callers.

Gardner: David, we're being called on here to deliver support across the spectrum of modalities, methods, or even latency, but at the same time, many of the world governments are asking for austerity and savings in their budgets for IT. How are we able to reconcile this need for more variety and the delivery of help desk services, but cutting costs at the same time? Is there any way to reconcile them?

Blackeby: It’s part of the core challenge in the current world with austerity, where both our public and private customers are looking at how they can do more for less money.

IT faces continuing pressure to reduce the cost and overhead of providing IT. At the same time, we're talking about new methods of self-service, different types of platforms, different types of devices, and a multi-channel effect, all of which take time, effort, and money to invest in.

That’s the underlying driver, and it comes down to the service provider to deliver it. The only way we can do that is by industrializing service delivery and automating processes, moving activities that may previously have been done by Level 2 and Level 3 resources. We're looking at how we can move those to cheaper or lower-cost resources, such as the service desk, or in an ideal world, remove them entirely from the cost chain and drive the automation. That increases speed and agility while reducing the cost of delivering the service.

Gardner: Diana, another variable in the mix here is the increased use of mobile devices, the fluidity of the user in terms of their geography, their location, even the time of day that they might be working, and of course the plethora of devices if you're a bring-your-own-device organization. How is mobility affecting this equation for a more complex approach to the help desk?

Wosik: Mobility is very important nowadays, because everybody uses mobile devices every single day. We need to ensure a single point of contact, so users can reach the help desk at any time they need, and that requires 24×7 availability.

Gardner: So, we've established that we have a need for more variability, addressing more types of help from more types of users. Tell me a bit more, Mark, about automation and self-service and how they support one another? What is it about automating processes that endows the user with more access to help, but then maybe that same feedback loop between the user and the support infrastructure can be brought to bear on future issues?

Laird: Automation is doing the same thing in a repeated, controlled fashion. Whether it's a password reset or the delivery of a service or a server, what you're doing is scripting: you're putting into a workflow a process that a user can call on. Whether that user is an end user, an end customer, or in fact one of the operations team, it allows them to run that fairly standard process in a repeated, quality-controlled fashion.

And that can allow lower cost, potentially, as David said, bringing tasks from a qualified, expensive Level 3 support person into an operations center, or in fact onto the self-service portal, where you're not having to give end users access to systems, but you are allowing them to run a script.
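To make that idea concrete, here is a minimal sketch, in Python, of what such a scripted workflow might look like for a password reset. It is an illustration only: the function names, the stubbed directory call, and the audit logging are assumptions, not part of any Sopra Steria or HPE tooling.

import logging
import secrets
import string

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("selfservice.password_reset")

def generate_temporary_password(length: int = 16) -> str:
    """Create a random temporary password (illustrative policy only)."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def reset_password(username: str, requested_by: str) -> dict:
    """One scripted, repeatable step callable from a portal, an operator
    console, or another automation -- always the same controlled logic."""
    temp_password = generate_temporary_password()
    # Assumption: a directory/API client would be called here, e.g.
    # directory_client.set_password(username, temp_password, must_change=True)
    log.info("Password reset for %s requested by %s", username, requested_by)
    return {
        "user": username,
        "status": "reset",
        "must_change_at_next_logon": True,
        # A real workflow would deliver the credential out of band, never
        # return it to the caller; shown here only to keep the sketch runnable.
        "temporary_password": temp_password,
    }

if __name__ == "__main__":
    # The same call works whether the requester is the end user via the
    # portal or a service-desk agent on the phone.
    print(reset_password("jane.doe", requested_by="self-service-portal"))

The point of wrapping the task this way is that the portal, a desk agent, and a back-end automation all invoke the same quality-controlled routine.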

Double benefit

Gardner: David, perhaps you could help me understand why self-service is a benefit to both the receiver of the help, the end user, as well as the organization. What is it about self-service that refines process and benefits the deliverer of the help, but at the same time, gives more speed or perhaps options to the receiver of the help?

Blackeby: Essentially it supports both sides of the equation. From an end-user perspective, it's that instant gratification: I can go into a centralized portal, do my search or raise my request, and be instantly satisfied with the response. I could be presented with a knowledge article that tells me how to fix my particular issue.

If I'm requesting a new service to be delivered through orchestration in the back end, I can make my request, and the orchestration comes in and drives the automated delivery of that service to me. So it increases the agility for the user and it reduces delays.

From the other side of the equation, looking at it from a service provider's perspective, the more work users can do themselves, the more cost it takes away from us as a service provider.

Historically, a user would have called the service desk, and as part of that conversation you need to understand who the user is in order to provide the service, make sure it's a service they're actually allowed to have, and help them through the process. That means we need a body to answer the phone, and the amount of time we spend on a typical call from the user drives the cost from a support-center perspective.

Even if you have a scenario where a user uses the portal today and ultimately still needs a human interaction to deliver that service, we already know who they are and will have asked the relevant questions upfront, which means we don't have to ask them later on down the line when we try to deliver the service. That reduces the handling time by our agents and by the people who are delivering them the service.
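As a rough illustration of that upfront qualification, here is a small Python sketch in which a portal request carries the requester's identity, an entitlement check, and the qualifying answers before any ticket is raised. The entitlement table and routing labels are hypothetical, not drawn from a specific product.

from dataclasses import dataclass, field

# Illustrative entitlement table; in practice this would come from an
# identity or service-catalog system.
ENTITLEMENTS = {"jane.doe": {"vpn-access", "project-workspace"}}

@dataclass
class PortalRequest:
    user: str
    service: str
    answers: dict = field(default_factory=dict)

def prequalify(request: PortalRequest) -> dict:
    """Do the checks a desk agent would otherwise do on the phone."""
    allowed = request.service in ENTITLEMENTS.get(request.user, set())
    return {
        "user": request.user,
        "service": request.service,
        "entitled": allowed,
        "answers": request.answers,      # captured upfront, not later
        "route_to": "orchestration" if allowed else "service-desk-review",
    }

if __name__ == "__main__":
    req = PortalRequest("jane.doe", "vpn-access", {"site": "Warsaw", "urgency": "normal"})
    print(prequalify(req))

Because the request arrives pre-qualified, an agent who does pick it up no longer has to spend call time gathering the same answers again.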

Gardner: Before we dig into how you do this, now that we've established why it's an important new aspect of the help desk, Diana, perhaps you can tell us a little bit about Sopra Steria, the organization, and to what degree it supports help desks in your markets?

Wosik: I can give you a good example of how it works in Poland and how the automation helps us out regarding the functionality of help desk.

We apply quite a few solutions, such as virtual machine (VM) provisioning that automatically provisions machines aligned to customer needs. There is also automated monitoring, so not only do we monitor what's going on, but we're also able to respond to needs very quickly, thanks to our automation services.

And then there's the thing regarding the automatic deployment of our releases. Whenever there's a new release of the system, we don’t need a bunch of people who are going to work on it. We can also deploy it very quickly in production, and that helps us to bring the solution as quickly as possible to our customer.
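As an illustrative sketch of that kind of request-driven VM provisioning, the Python below translates a customer request into a repeatable build specification. The sizing catalog and the commented-out provisioning call are assumptions, not a description of the actual Sopra Steria implementation.

# Illustrative sizing catalog mapped from customer requirements; the
# real provisioning call would go to a hypervisor or cloud API.
SIZES = {
    "small":  {"vcpus": 2, "ram_gb": 4,  "disk_gb": 50},
    "medium": {"vcpus": 4, "ram_gb": 8,  "disk_gb": 100},
    "large":  {"vcpus": 8, "ram_gb": 16, "disk_gb": 200},
}

def provision_vm(customer: str, size: str, environment: str) -> dict:
    """Translate a customer request into a concrete, repeatable build spec."""
    if size not in SIZES:
        raise ValueError(f"Unknown size: {size}")
    spec = {"customer": customer, "environment": environment, **SIZES[size]}
    # e.g. cloud_client.create_instance(**spec)  # assumption: API client exists
    spec["status"] = "provisioning-requested"
    return spec

if __name__ == "__main__":
    print(provision_vm("acme", "medium", "test"))

The value is consistency: every request of a given size produces the same machine, without a person assembling it by hand.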

Higher-level view

Gardner: Could you give us a higher-level view of Sopra Steria, the organization, and to what degree help desk support is part of a larger portfolio of services?

Laird: We're a European IT company. We run IT for a wide range of European customers. We deliver services. We write software. We do business process outsourcing. Essentially, if there's a computer involved in there somewhere, that’s what we do.

We have a presence in 27 countries across Europe, in India, and then smaller offices in Singapore, Hong Kong, and China. We have 36,500 staff, and an annual turnover of about 3.5 billion euros. So, we're a reasonably large company, one of the top 10 European IT companies.

For us, the service desk is the single point of contact. For all of our customers, that is their point of contact with us, whether it’s through the Global Delivery Center in Poland, where we're offering French, German, English, small amounts of Spanish and Italian, or through some of the in-country service desks, such as the ones we have in France and the UK. So that is our single point of contact and it’s of key importance to us.

Blackeby: Just to follow on from that, the key piece of that is that it’s an intelligent service desk as opposed to a help desk. It’s really about having the phones manned by intelligent people who are able to both try and fix or resolve issues straight away, as opposed to just logging a call, creating a ticket, and passing it off to someone else.

Gardner: How is it that we're providing those individuals on the front line with better knowledge? Are they getting more tools? Are they getting more data? Is this really just correlating a single point of access to the existing data? Is it all of the above? How do we empower those people to do this difficult help desk job better?

Blackeby: In the same way that we try to have a single point of entry for users, for a portal, it’s really the same piece for our support staff as well.

While there are many systems that underpin our service delivery, the key element we strive for is that the operators have a single place to work. It's very much through the integration of various systems and data sources into a centralized repository, so that the person who is trying to act on a ticket, request, or other activity has everything they need in one place, can immediately see what the issue or request is, and can then deliver the service to that end user.

Gardner: It strikes me that whether it’s a help desk’s person or the end user, the more they use this, the more the data can be collected, the more knowledge can be harnessed from the interactions, and therefore brought back through a feedback loop into the next level of support.

Are the cost savings here ultimately about being better able to understand the market because of the self-service, because of these portal approaches? Is that a big part of it?

Key items

Blackeby: It feeds into that. If you're looking at industrializing or automating, you're really looking for repeatable activities that are done time and time again. The data helps to support that. It identifies suitable candidates, the high-volume, high-throughput transactions that are really the key things you want to focus on when introducing automation into the environment, or into the task elements of a given process. So that's essentially what we're doing over time.

As Mark mentioned, we're a managed service provider (MSP), providing services across many customers. A lot of the economies of scale we get come from best practices: particular scenarios or issues that we see in one account often have correlations in other customer accounts as well. So we can bring those efficiencies, and the investment we make in automation through our back-office processes, to benefit multiple customers.

Wosik: What is very much in focus right now is big data and smart analytics, which help us gather information from our customers: the more tickets and incidents that are logged, the more information we can gather. That information is collected and analyzed, and that's when we can provide quicker, more accurate answers to our customers. It's something that has really improved our quality of service.
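A simple way to picture that analysis step is the Python sketch below, which counts logged tickets by category to surface the high-volume, repeatable activities that make the best automation candidates. The ticket data here is invented purely for illustration.

from collections import Counter

# Hypothetical ticket export; in practice this would come from the
# service management system's reporting interface.
tickets = [
    {"category": "password-reset"},
    {"category": "password-reset"},
    {"category": "disk-space"},
    {"category": "password-reset"},
    {"category": "vpn-access"},
    {"category": "disk-space"},
]

def automation_candidates(tickets, top_n=3):
    """Rank ticket categories by volume: high-volume, repeatable
    activities are the first candidates for automation."""
    counts = Counter(t["category"] for t in tickets)
    return counts.most_common(top_n)

if __name__ == "__main__":
    for category, count in automation_candidates(tickets):
        print(f"{category}: {count} tickets")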

Gardner: Let’s look also back to the systems, when we think about gathering information, more and more big data gathered from logs and other output data from the systems themselves, from the platforms. How are you at Sopra Steria managing the knowledge gathering from your systems and then applying that into this other knowledge base about the activities on your help desk and from the self-help portal?

Laird: We're looking at some of the new technologies around smart analytics and big data, but we're starting with some of the simpler approaches, which as David alluded to and as Diana mentioned earlier, are just the simple high-volume transactions, the things that we do on a regular basis that are maybe quality issues or maybe they are just time consuming, but those are the key ones we're after.

Then, over the next three to six months, as we move into some of the newer technologies around smart analytics, for example, we'll be taking some of the incidents and things coming into service desk, into the service management system, and looking at those and doing problem management on them.

Have we suddenly got an influx of incidents around our Exchange platform? Is that actually indicating that there is an underlying problem or an underlying system error that we need to fix?

It’s starting to link all the various systems, whether it's the business service monitoring system at the back end that the operations teams are using, or the service management platforms at the front that our service desk people are using, pulling all those together and tying them in with, for example, the configuration management platform, so that people are seeing the same information, both from a front-end, user-impacting view and from a back-end infrastructure and service view.
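The kind of check described here, spotting an influx of incidents that hints at an underlying problem, can be sketched roughly as follows in Python. The baseline comparison and thresholds are illustrative assumptions, not how OMi or Service Manager actually implement it.

from statistics import mean

def possible_problem(daily_counts, today, factor=2.0, min_count=5):
    """Flag a service when today's incident count is well above its
    recent daily average -- a hint that an underlying problem exists."""
    baseline = mean(daily_counts) if daily_counts else 0
    return today >= min_count and today > factor * baseline

if __name__ == "__main__":
    # Hypothetical history of daily incident counts for an Exchange platform.
    history = [3, 2, 4, 3, 2, 3, 4]
    print(possible_problem(history, today=11))   # True: likely underlying problem
    print(possible_problem(history, today=4))    # False: within normal range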

Gardner: And I should think that would also help in more agility to do root-cause analysis and making it faster to time for resolution.

Automate and fix

Laird: Exactly. That goes back to when we fix problems and close incidents: if there's a resolution in there, we do the analysis on them to identify common fixes. If a particular type of incident comes in and we always do the same thing to it, we can automate that. We can either give the service desk or help desk people access to that quick fix, or just automate it right at the start, so when that issue occurs, we automate and fix.

In some cases, that’s moving out of the customer’s view completely. We're fixing it almost before there's an impact.
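A minimal sketch of that "known incident, known fix" pattern might look like the Python below, where recurring incident types map to scripted remediations and everything else routes to the service desk. The incident types and fix functions are hypothetical.

def restart_print_spooler(incident):
    return f"Restarted print spooler on {incident['host']}"

def clear_temp_files(incident):
    return f"Cleared temp files on {incident['host']}"

# Known incident types whose resolution is always the same scripted action.
AUTO_FIXES = {
    "print-spooler-hung": restart_print_spooler,
    "disk-space-low": clear_temp_files,
}

def handle_incident(incident):
    """Run the known fix automatically; otherwise route to the service desk."""
    fix = AUTO_FIXES.get(incident["type"])
    if fix:
        return {"incident": incident, "resolution": fix(incident), "auto": True}
    return {"incident": incident, "resolution": "escalated to service desk", "auto": False}

if __name__ == "__main__":
    print(handle_incident({"type": "disk-space-low", "host": "srv01"}))
    print(handle_incident({"type": "unknown-error", "host": "srv02"}))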

Gardner: We've talked a bit about making these help desk approaches better from the end-user perspective, empowering the personnel in the help desk organization itself, and finding some new technologies and analysis benefits to propel that forward, but I would like to go back to the issue of cost.

How are we wringing more cost out of this process, perhaps through things like identifying automation opportunities and what's called shift left, doing things better or earlier in the process? Where are we targeting to get the most results when it comes to cost reduction in all of this?

Blackeby: It really comes down to how people do transactions, and which things continually occur with a high number of touch points. Some of that comes out over time.

One of the challenges we have when we take on a new customer is that you don’t have the excellent benefit of hindsight around how the organization works and what their common problems are. So, as we take on a new customer or a new contract, we have the ability to go and talk to their existing service provider or their in-house person. A lot of that comes out over time.

There are some standard things that we can recognize, because we have similar customers in similar marketplaces or industries, and things that we would expect from the outset. Things like password-reset tools are common and applicable across all types of clients.

Then, it's a case of looking at your volumetrics over time, your repeatable activities, incidents, and requests, and identifying how we can drive agility and improve the service levels we're delivering while, at the same time, reducing cost.

Take a simple thing like software deployment to users' machines. Historically, that might have been a call to the service desk. They might have dispatched a desk-side engineer or used remote control to connect to a user's device and install the software.

These days, more and more commonly, we can use software distribution, or automated software push tools, that don’t require human interaction at all. We can automatically deploy software to the user.

Zero-touch environment

That moves into that zero-touch type of environment. Through a portal request, we can manage the workflow around any approval activities. Then, once fully approved, through the orchestration at the back end we can interface with the software deployment solution to automate the delivery of that software to the endpoint device.

And we support many different types of devices now. We've seen more and more cases where not only are we talking about physical desktops or laptops, but also around how we manage mobile devices and tablet type devices as well, using mobility and mobile device management solutions.
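As a rough sketch of that zero-touch flow, the Python below walks a portal request through an optional approval step and then hands it to an automated software push. The approval rule and the push function stand in for a real software-distribution tool and are assumptions only.

def needs_approval(software: str) -> bool:
    """Illustrative rule: licensed software requires a manager approval step."""
    return software in {"visio", "project"}

def push_software(user: str, software: str) -> str:
    # Assumption: a software-distribution tool would be called here.
    return f"queued {software} for automatic install on {user}'s device"

def handle_request(user: str, software: str, approved: bool = False) -> str:
    """Portal request -> (approval if required) -> automated push, no human touch."""
    if needs_approval(software) and not approved:
        return f"waiting for approval of {software} for {user}"
    return push_software(user, software)

if __name__ == "__main__":
    print(handle_request("jane.doe", "7-zip"))                 # no approval needed, pushed
    print(handle_request("jane.doe", "visio"))                 # waits for approval
    print(handle_request("jane.doe", "visio", approved=True))  # approved, pushed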

Gardner: Let's look at some of these solutions in practice. Sopra Steria has been doing this for some time and across a large marketplace. Do you have any examples that demonstrate that when you do this well, you get those benefits of self-help, common core data, a more knowledgeable help desk, and reduced costs, all at the same time?

Laird: One of the solutions we looked at in Poland, certainly around automation, was a really simple challenge that the operations team had as part of our Polish operation. Every morning, the backups for a particular customer were taking them in the region of one hour: producing a backup report, looking at the backups that had failed, re-running backups as appropriate, and then, if backups had failed consistently for a couple of days, escalating that to the support team.

We automated the whole thing. It’s all automated using HPE Operations Orchestration. The whole process now takes one of the team about five minutes in the morning, and it’s really a case of checking the output from the system.

So we've saved somewhere in the region of just under an hour every day for one person. It probably took two or three days to code the solution, but we're saving a significant amount of time every day. We're getting a much better quality report, and we're able to pass that information out to our second-line and third-line teams earlier in the day, which gives them much more time to fix things.

One of the things that we've looked at now is automating the re-run of backups overnight. Rather than letting them go for maybe two or three days, they're fixed overnight, and we run them within the backup window. It's improving quality for the customer and having a significant impact on savings for the operations team.
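The production workflow runs in HPE Operations Orchestration, but the logic described here can be sketched in a few lines of Python: read last night's backup results, re-run the failures, and escalate any job that has failed for consecutive days. Job names, thresholds, and the commented re-run call are illustrative assumptions.

def process_backup_results(results, fail_streaks, escalate_after=2):
    """results: {job_name: 'ok' | 'failed'} from last night's run.
    fail_streaks: running count of consecutive failures per job."""
    rerun, escalate = [], []
    for job, status in results.items():
        if status == "failed":
            fail_streaks[job] = fail_streaks.get(job, 0) + 1
            rerun.append(job)  # assumption: backup_tool.rerun(job) would go here
            if fail_streaks[job] >= escalate_after:
                escalate.append(job)  # raise a ticket to the support team
        else:
            fail_streaks[job] = 0
    return {"rerun": rerun, "escalate": escalate}

if __name__ == "__main__":
    streaks = {"finance-db": 1}   # hypothetical: this job failed yesterday too
    overnight = {"finance-db": "failed", "file-share": "ok", "mail": "failed"}
    print(process_backup_results(overnight, streaks))
    # {'rerun': ['finance-db', 'mail'], 'escalate': ['finance-db']}

The morning task then shrinks to checking the output of a run like this, which matches the roughly five minutes the team now spends.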

Gardner: You mentioned the use of the HPE tools. Are there any other HPE platforms or approaches that are helping you bring in this common data? We talked earlier about the analysis that also helps in this equation of doing more with less.

HPE partner

Laird: We're an HPE partner. We have been for over 10 years now, and we have quite a range of HPE tools across the portfolio, whether that’s from things like the Application Lifecycle Manager, through to HPE Service Manager.

We also have solutions like OMi doing things like event correlation, where we have events coming in from the monitoring solutions, whether that’s from HPE SiteScope or Operations Manager or from third party tools, like SCCM and some of the Nagios tools.

OMi is correlating those events and passing through to the service desk and the operations center the ones that actually need to be looked at. We're filtering out more than 50 or 60 percent of the alerts. It reduces our cost. We're filtering those alerts out at a much earlier point in the chain, and with that, we're only raising incidents for the ones that actually need to be escalated up to the teams.

We're using tools and technology to keep costs down and reduce them as far as we can.
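OMi does this correlation at scale; as a toy illustration of the general idea, the Python below filters out informational noise and deduplicates raw monitoring events so that only actionable ones would become incidents. The event fields and severities are assumptions for the sketch.

def correlate(events):
    """Keep one actionable event per (host, check); drop informational noise.
    A stand-in for what an event-correlation engine such as OMi does at scale."""
    actionable = {}
    for event in events:
        if event["severity"] == "info":
            continue                      # filter out noise early in the chain
        key = (event["host"], event["check"])
        current = actionable.get(key)
        # Deduplicate: keep only the most severe event per host/check pair.
        if current is None or event["severity"] == "critical":
            actionable[key] = event
    return list(actionable.values())

if __name__ == "__main__":
    raw = [
        {"host": "web01", "check": "cpu", "severity": "warning"},
        {"host": "web01", "check": "cpu", "severity": "critical"},
        {"host": "web01", "check": "cpu", "severity": "info"},
        {"host": "db01", "check": "disk", "severity": "info"},
    ]
    incidents = correlate(raw)
    print(f"{len(raw)} raw events -> {len(incidents)} incidents")  # 4 -> 1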

Gardner: So as we think about being able to future-proof the support services, and by that I mean being able to adapt to a millennial audience, more distribution points, more types of help desk and automation, and that single portal, we also need to be thinking about being backwards compatible. Some organizations do want more of that human touch, the interactions, and perhaps some of the government organizations are interested in that as well.

What is it about the future direction of your services at Sopra Steria, some of the tools and technologies that you are employing from HPE, that allows you to feel confident about being both future proof and backwards compatible for your support?

Blackeby: One of the challenges that are coming more to the forefront these days is probably the adoption of cloud services. It’s a disruptive influence on traditional IT and how IT is delivered.

It's a challenge for us as service providers to adapt to these. You're talking about environments that can be built in minutes, bringing a whole new way of working, very fluid environments with auto-scaling where the number of resources we're supporting and managing grows and shrinks dynamically over time. So that's really had a big impact on how we deliver service.

We've recognized this and are looking at how we transform the service delivery. We're becoming more reliant on the data that supports the service. So it’s very much around how we manage what’s out there, with a heavy reliance on things like configuration management systems, and discovery of IT resources.

As Mark said, there are things like event correlation, looking at patterns, trends and events so that we can increase the agility and really manage much higher volumes of applications, of servers and of users with a smaller number of people or with the same number of people.
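One way to picture that heavier reliance on configuration management and discovery is the small Python sketch below, which reconciles what discovery sees in a dynamic, auto-scaling estate against what the configuration management system believes exists. The instance IDs are hypothetical.

def reconcile(discovered_ids, cmdb_ids):
    """Compare what discovery sees in the live (auto-scaling) estate with
    what the configuration management system believes exists."""
    discovered, known = set(discovered_ids), set(cmdb_ids)
    return {
        "to_add": sorted(discovered - known),      # new instances to register
        "to_retire": sorted(known - discovered),   # scaled-down instances to retire
        "in_sync": sorted(discovered & known),
    }

if __name__ == "__main__":
    # Hypothetical instance IDs from a discovery run and from the CMDB.
    print(reconcile({"vm-101", "vm-102", "vm-103"}, {"vm-101", "vm-099"}))

Keeping that picture current is what lets a smaller team manage a much larger, constantly changing estate.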

Gardner: It's very exciting; a lot is going on.

Tools and technologies

Blackeby: As a ratio, you might have gone from a scenario of a support person looking after an average of 40 servers to now managing 100-plus servers, but it's only through the deployment of the tools and technologies that we can do that.

But at the same time, we still have a large legacy estate and legacy clients that we still need to support. So it's really about how we engineer our processes so that, irrespective of whether we're talking about legacy physical server workloads, on-premise virtualized workloads, or things that might be spun up inside Amazon Web Services or Microsoft Azure public cloud environments, we provide a consistent level of service and service delivery, irrespective of where the service is located or in which format it is delivered back to the customer or users.

Gardner: When I speak to developer organizations and IT operations organizations, they're seeing a compression and a large degree of collaboration between development and operations. Thus, the DevOps trend.

But when I listen to you, I'm also hearing a compression between operations and the help desk that benefits the entire IT process: the more automated and software-defined things become, and the more data that's made available, the tighter that compression seems to get. Am I right in seeing this idea of help desk, support, and operations becoming more collaborative, more tightly aligned?

Laird: The whole concept of the operations team being hidden away in a back room and the service desk being the public face is changing. They're becoming much more tightly aligned. Things that the operations team is doing have an almost immediate impact on what the service desk is looking at, and the service desk needs to have access to really all the information the operations team has got.

When the user is on the phone and has a problem with a service, it’s good if the service desk can actually say, "Yes, we know there's a problem and we know what the problem is. We have an estimated fix time of 15 minutes." That gives the user the warm feeling that you're in control and you know what you're doing.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in: