Thursday, October 27, 2016

ServiceMaster's path to an agile development twofer: Better security and DevOps business benefits

The next BriefingsDirect Voice of the Customer security transformation discussion explores how home-maintenance repair and services provider ServiceMaster develops applications with a security-minded focus as a DevOps benefit.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn how security technology leads to posture maturity and DevOps business benefits, we're joined by Jennifer Cole, Chief Information Security Officer and Vice President of IT, Information Security, and Governance for ServiceMaster in Memphis, Tennessee, and Ashish Kuthiala, Senior Director of Marketing and Strategy at Hewlett Packard Enterprise DevOps. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Jennifer, tell me, what are some of the top trends that drive your need for security improvements and that also spurred DevOps benefits?

Cole: When we started our DevOps journey, we were a little ahead of the curve on application security, and we were able to get in on the front end of our DevOps transformation.


The primary reason for our transformation as a company is that we are an 86-year-old company that has seven brands under one umbrella, and we needed to have one brand, one voice, and be able to talk to our customers in a way that they wanted us to talk to them.

That means enabling IT to get capabilities out there quickly, so that we can interact with our customers "digital first." As a result, we stepped up the way we looked at security education and process. We were normally doing our penetration tests after a release. We were able to put tools in place to test prior to a release, and also to teach our developers along the way that security is everyone's responsibility.

ServiceMaster has been fortunate that we have a C-suite willing to invest in DevOps and an Agile methodology. We also had developers who were willing to learn, and with the right intent to deliver code that would protect our customers. Those things collided, and we have the perfect storm.

So, we're delivering quicker, but we also fail faster, which allows us to go back and fix things sooner. And we're seeing that what we deliver is a lot more secure.

Gardner: Ashish, it seems obvious, having heard Jennifer describe it, DevOps and security hand-in-hand -- a whole greater than the sum of the parts. Are you seeing this more across various industries?

Stopping defects

Kuthiala: Absolutely. With the adoption of DevOps increasing across enterprises, security is no different from any other quality-assurance (QA) testing that you do. You can't let a defect reach your customer base, and you can't let a security flaw reach it either.

If you look at it from that perspective, and the teams are willing to work together, security is treated no differently than any other QA process. This boils down not just to the vulnerability of the software you're releasing into the marketplace; there are also so many different regulations and compliance [needs] -- internal, external, your own company policies -- that you have to look at. You don't want to go faster and compromise security. So, it's an essential part of DevOps.

Cole: DevOps allows for continuous improvement, too. Security now comes at the front of the traditional SDLC process, whereas in the old days it came last. We found problems after they were in production or after something had been compromised. Now, we're at the beginning of the process, and we actually get to train the people at the beginning of the process on how and why to deliver things that are safe for our customers.

Gardner: Jennifer, why is security so important? Is this about your brand preservation? Is this about privacy and security of data? Is this about the ability for high performance to maintain its role in the organization? All the above? What did I miss? Why is this so important?

Cole: Depending on the lens that you are looking through, that answer may be different. For me, as a CISO, it's making sure that our data is secure and that our customers have trust in us to take care of their information. The rest of the C-suite, I am sure, feels the same, but they're also very focused on transformation to digital-first, making sure customers can work with us in any way that they want to and that their ServiceMaster experience is healthy.

Our leaders also want to ensure our customers return to do business with us and are happy in the process.  Our company helps customers in some of the most difficult times in their life, or helps them prevent a difficult time in the ownership of their home.

But for me and the rest of our leadership team, it's making sure that we're doing what's right. We're training our teams along the way to do what's right, to just make the overall ServiceMaster experience better and safe. As young people move into different companies, we want to make sure they have that foundation of thinking about security first -- and also the customer.
We tend to put IT people in a back room, and they never see the customer. This methodology allows IT to see what they could have released and correct it if it's wrong, and we get an opportunity to train for the future.

Through my lens, it's about protecting our data and making sure our customers are getting service that doesn't have vulnerabilities in it and is safe.

Gardner: Now, Ashish, user experience is top of mind for organizations, particularly organizations that are customer focused like ServiceMaster. When we look at security and DevOps coming together, we can put in place the requirements to maintain that data, but it also means we can get at more data and use it more strategically, more tactically, for personalization and customization -- and at the same time, making sure that those customers are protected.

How important is user experience and data gathering now when it comes to QA and making applications as robust as they can be?

Million-dollar question

Kuthiala: It's a million-dollar question. I'll give you an example of a client I work with. I happen to use their app very, very frequently, and I happen to know the team that owns that app. They told me about 12 months ago that they had invested -- let’s just make up this number -- $1 million in improving the user experience. They asked me how I liked it. I said, "Your app is good. I only use this 20 percent of the features in your app. I really don’t use the other 80 percent. It's not so useful to me."

That was an eye-opener to them, because the $1 million or so that they had invested in enriching the user experience -- if they had known exactly what I was doing as a user, what I used, what I did not use, where I had problems -- could have gone toward that 20 percent that I use. They could have made it better than anybody else in the marketplace and also gathered information on what the market wants by monitoring the user experience of people like me.

It's not just the availability and health of the application; it’s the user experience. It's having empathy for the user, as an end-user. HPE of course, makes a lot of these tools, like HPE AppPulse, which is very specifically designed to capture that mobile user experience and bring it back before you have a flood of calls and support people screaming at you as to why the application isn’t working.

Security is also one of those things. All is good until something goes wrong. You don't want to be in a situation when something has actually gone wrong and your brand is being dragged through mud in the press, your revenue starts to decline, and then you look at it. It’s one of those things that you can't look at after the fact.

Gardner: Jennifer, this strikes me as an under-appreciated force multiplier: the better you maintain data integrity, security, and privacy, the more trust you earn to gather more data about your customers, which you can then apply back to a better experience for them. Is that something that you are banking on at ServiceMaster?
Cole: Absolutely. Trust is important, not only with our customers, but also our employees and leaders. We want people to feel like they're in a healthy environment, where they can give us feedback on that user experience. What I would say to what Ashish was saying is that DevOps actually gives us the ability to deliver what the business wants IT to deliver for our customers.

In the past 25 years, IT has decided what the customer would like to see. In this methodology, you're actually working with your business partners who understand their products and their customers, and they're telling you the features that need to be delivered. Then, you're able to pick the minimum viable product and deliver it first, so that you can capture that 20 percent of functionality.

Also, if you're wrapping security in front of that, it means security is not coming back to you later with the penetration-test results saying that you have all of these things to fix, which takes time away from delivering something new for our customers.

This methodology pays off, but the journey is hard. It’s tough because in most companies you have a legacy environment that you have to support. Then, you have this new application environment that you’re creating. There's a healthy balance that you have to find there, and it takes time. But we've seen quicker results and better revenue, our customers are happier, they're enjoying the ServiceMaster experience, instead of our individual brand families, and we've really embraced the methodology.

Gardner: Do you have any examples that you can recall where you've done development projects and you’ve been able to track that data around that particular application? What’s going on with the testing, and then how is that applied back to a DevOps benefit? Maybe you could just walk us through an example of where this has really worked well.

Digital first

Cole: About a year and a half ago, we started with one of our brands, American Home Shield, and looked at where the low hanging fruit -- or minimum viable product -- was in that brand for digital first. Let me describe the business a little bit. Our customers reach out to us, they purchase a policy for their house and we maintain appliances and such in their home, but it is a contractor-based company. We send out a contractor who is not a ServiceMaster associate.

We have to make that work and make our customer feel like they've had a seamless experience with American Home Shield. We had some opportunity in that brand for digital first. We went after it and drastically changed the way that our customers did business with us. Now, it's caught on like wildfire, and we're really trying to focus on one brand and one voice. This is a top-down decision which does help us move faster.

All seven of our brands are home services. We're in 75,000 homes a day and we needed to identify the customers of all the brands, so that we could customize the way that we do business with them. DevOps allows us to move faster into the market and deliver that.

Gardner: Ashish, there aren't that many security vendors that do DevOps, or DevOps vendors that do security. At HPE, how have you made advances in terms of how these two areas come together?

Kuthiala: The strength of HPE in helping its customers lies in the very fact that we have an end-to-end, diverse portfolio. Jennifer talked about taking the security practices and not leaving them toward the end of the cycle, but moving them to the very beginning, which means that you have to get developers to start thinking like security experts and working with the security experts.

Given that we have a portfolio that spans the developers and the security teams, our best practices include building our own customer-facing software products that incorporate security practices, so that when developers are writing code, they can begin to see any immediate security threats as well as whether their code is compliant with any applicable policies or not. Even before code is checked in, the process runs the code through security checks and follows it all the way through the software development lifecycle.

These are security-focused feedback loops. At any point, if there is a problem, the changes are rejected and sent back or feedback is sent back to the developers immediately.

If a vulnerability makes it through the cycle and is found before release to production, we have tools such as App Defender that can plug in to protect the code in production until developers can fix it, allowing you to go faster but remain protected.

Cole: It blocks it from the customer until you can fix it.
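
To make that concrete, here is a minimal sketch of the kind of check-in gate described above. The security-scan command is a hypothetical stand-in for whatever static-analysis scanner a pipeline actually invokes; it is not a specific HPE tool's CLI.

```python
# Minimal sketch of a pre-check-in security gate. "security-scan" is a
# hypothetical command-line scanner, not a real product's CLI.
import subprocess
import sys

def security_gate(changed_files):
    """Scan each changed file; reject the check-in on any high-severity finding."""
    for path in changed_files:
        result = subprocess.run(
            ["security-scan", "--fail-on", "high", path],  # hypothetical CLI
            capture_output=True,
            text=True,
        )
        if result.returncode != 0:
            # Immediate feedback to the developer, before the code is checked in.
            print(f"Security findings in {path}:\n{result.stdout}")
            return False
    return True

if __name__ == "__main__":
    if not security_gate(sys.argv[1:]):
        sys.exit(1)  # A non-zero exit blocks the commit in most hook systems.
```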

Kuthiala: Jennifer, can you describe a little bit how you use some of these products?

Strategic partnership

Cole: Sure. We’ve had a great strategic partnership with HPE in this particular space. Application security caught on fire about two years ago at RSA, which is one of the main security conferences for anyone in our profession.

The topic of application security hasn't been a focus for CISOs, in my opinion. I was fortunate enough that I had a great team member who came back and said that we have to get on board with this. We had some conversations with HPE and ended up in a great strategic partnership. They've really held our hands and helped us get through the process. In turn, that helped make them better, as well as make us better, and that's what a strategic partnership should be about.

Now, we're watching things as they are developed. So, we're teaching the developer in real-time. Then, if something happens to get through, we have App Defender, which will actually contain it until we can fix it before it releases to our customer. If all of those defenses don’t work, we still do the penetration test along with many other controls that are in place. We also try to go back to just grassroots, sit down with the developers, and help them understand why they would want to develop differently next time.

Someone from security is in every one of the development scrum meetings and on all the product teams. We also participate in Big Room Planning. We're trying to move out of that overall governing role and into a peer-to-peer type role, helping each other learn, and explaining to them why we want them to do things.

Gardner: It seems to me that, having gone at this at the methodological level with those collaboration issues solved, bringing people into the scrum who are security minded, puts you in a position to be able to scale this. I imagine that more and more applications are going to be of a mobile nature, where there's going to be continuous development. We're also going to start perhaps using micro-services for development and ultimately Internet of Things (IoT) if you start measuring more and more things in your homes with your contractors.

Cole: We reach 75,000 homes a day. So, you can imagine that all of those things are going to play a big part in our future.

Gardner: Before we sign off, perhaps you have projections as to where you'd like to see things go. How can DevOps and security work better for you as a tag team?
Cole: For me, the next step for ServiceMaster specifically is making solid plans to migrate off of our legacy systems, so that we can truly focus on maturing DevOps and delivering for our customer in a safer, quicker way, and so we're not always having to balance this legacy environment and this new environment.
If we could accelerate that, I think we will deliver to the customer quicker and also more securely.

Gardner: Ashish, last word, what should people who are on the security side of the house be thinking about DevOps that they might not have appreciated?

Higher quality

Kuthiala: The whole approach of adopting DevOps -- delivering your software to your customers faster and with higher quality -- says it all. DevOps is an opportunity for security teams to get deeply embedded in the mindset of the developers, the business planners, the testers, and the production teams -- essentially the whole software development lifecycle -- which they didn't have the opportunity to do before.

They would usually come in before code went to production and often would push back the production cycles by a few weeks because they had to do the right thing and ensure release of code that was secure. Now, they’re able to collaborate with and educate developers, sit down with them, tell them exactly what they need to design and therefore deliver secure code right from the design stage. It’s the opportunity to make this a lot better and more secure for their customers.

Cole: The key is security being a strategic partner with the business and the rest of IT, instead of just being a governing body.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Tuesday, October 18, 2016

Why government agencies could lead the way in demanding inter-public cloud interoperability and standardization

The next BriefingsDirect thought leadership panel discussion explores how public-sector organizations can gain economic benefits from cloud interoperability and standardization.

Our panel comes to you in conjunction with The Open Group Paris Event and Member Meeting October 24 through 27, 2016 in France, with a focus on the latest developments in eGovernment.

As government agencies move to the public cloud computing model, the use of more than one public cloud provider can offer economic benefits through competition and choice. But are the public clouds standardized efficiently for true interoperability, and can the large government contracts in the offing for cloud providers have an impact on the level of maturity around standardization?

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn how to best procure multiple cloud services as eGovernment services at low risk and high reward, we're joined by our panel, Dr. Chris Harding, Director for Interoperability at The Open Group; Dave Linthicum, Senior Vice President at Cloud Technology Partners, and Andras Szakal, Vice President and Chief Technology Officer at IBM U.S. Federal. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Andras, I've spoken to some people in the lead-up to this discussion about the level of government-sector adoption of cloud services, especially public cloud. They tell me that it’s lagging the private sector. Is that what you're encountering, that the public sector is lagging the private sector, or is it more complicated than that?

Szakal: It's a bit more complicated than that. Born-on-the-cloud adoption is much greater in the private sector than in the public sector, and it's important to differentiate. The industry at large, from a born-on-the-cloud point of view, is very much ahead of the public-sector government implementation of born-on-the-cloud applications.

What really drove that was innovations like the Internet of Things (IoT), gaming systems, and platforms, whereas the government environment was really more about taking existing citizen-to-government shared services and so forth and putting them into the cloud environment.

When you're talking about public cloud, you have to be very specific about the public sector and government, because most governments have their own industry instance of their cloud. In the federal government space, they're acutely aware of the FedRAMP certified public-cloud environments. That can go from moderate risk, where you can have access to the yummy goodness of the entire cloud industry, but then, to FedRAMP High, which would isolate these clouds into their own environments in order to increase the level of protection and lower the risk to the government.

So, the cloud service provider (CSP) created instances of these commercial clouds fit-for-purpose for the federal government. In that case, if we're talking about enterprise applications shifting to the cloud, we're seeing the public sector government side, at the national level, move very rapidly, compared to some of the commercial enterprises who are more leery about what the implications of that movement may be over a period of time. There isn't anybody that's mandating that they do that by law, whereas that is the case on the government side.

Attracting contracts

Gardner: Dave, it seems that if I were a public cloud provider, I couldn't think of a better customer, a better account in terms of size and longevity, than some major government agencies. What are we seeing from the cloud providers in trying to attract the government contracts and perhaps provide the level of interoperability and standardization that they require?

Linthicum: The big three -- Amazon, Google and Microsoft -- are really making an effort to get into that market. They all have federal sides to their house. People are selling into that space right now, and I think that they're seeing some progress. The FAA and certainly the DoD have been moving in that direction.

However, they do realize that they have to build a net new infrastructure, a net new way of doing procurement to get into that space. In the case where the US is building the world’s biggest private cloud at the CIA, they've had to change their technology around the needs of the government.

They see it as really the "Fortune 1." They see it as the largest opportunity that’s there, and they're willing to make huge investments in the billions of dollars to capture that market when it arrives.

Gardner: It seems to me, Chris, that we might be facing a situation where we have cloud providers offering a set of services to large government organizations, but perhaps a different set to the private sector. From an interoperability and standardization perspective, that doesn’t make much sense to me.

What’s your perspective on how public cloud services and standardization are shaping up? Where did you expect things to be at this point?

Harding: The government has an additional dimension beyond the private sector when it comes to procurement: the need to be transparent and to spend the money entrusted to it by the public in a wise manner. One of the issues with a lack of standardization is that it makes it more difficult for agencies to show that they're visibly getting the best deals for the taxpayers when they come to procure cloud services.

In fact, The Open Group produced a guide to cloud computing for business a couple of years ago. One of the things that we argued in that was that, when procuring cloud services, the enterprise should model the use that it intends to make of the cloud services and therefore be able to understand the costs that they were likely to incur. This is perhaps more important for government, even more than it is for private enterprises. And you're right, the lack of standardization makes it more difficult for them to do this.

Gardner: Chris, do you think that interoperability is of a higher order of demand in public-sector cloud acquisition than in the private sector, or should there be any differentiation?

Need for interoperability

Harding: Both really have the need for interoperability. The public sector perhaps has a greater need, simply because it’s bigger than a small enterprise and it’s therefore more likely to want to use more cloud services in combination.

Gardner: We've certainly seen a lot of open-source platforms emerge in private cloud as well as hybrid cloud. Is that a driving force yet in the way that the public sector is looking at public cloud services acquisition? Is open source a guide to what we should expect in terms of interoperability and standardization in public-cloud services for eGovernment?

Szakal: Open source, from an application implementation point of view, is one of the questions you're asking, but are you also suggesting that somehow these cloud platforms will be reconsidered or implemented via open source? There's truth to both of those statements.

IBM is the number two cloud provider in the federal government space, if you look at hybrid and the commercial cloud for which we provide three major cloud environments. All of those cloud implementations are based on open source -- OpenStack and Cloud Foundry are key pieces of this -- as well as the entire DevOps lifecycle.

So, open source is important, and if you think of open source as a way to ensure interoperability -- what we call in The Open Group environment "Executable Standards" -- then it is indeed a way to ensure interoperability.

That’s more important at the cloud-stack level than it is between cloud providers, because between cloud providers you're really going to be talking about API-driven interoperability, and we have that down pretty well.

So, the economy of APIs and the creation of this composite services are going to be very, very important elements. If they're closed and not open to following the normal RESTful approaches defined by the W3C and other industry consortia, then it’s going to be difficult to create these composite clouds.
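
As a rough sketch of what API-driven interoperability between providers can look like in practice, a composite application can code against one small interface and let per-provider adapters absorb the differences. The provider names, endpoints, and headers below are hypothetical illustrations, not real cloud APIs.

```python
# Sketch: one client interface over two hypothetical cloud object stores.
# Endpoints, auth headers, and field names are illustrative only.
from abc import ABC, abstractmethod

import requests

class ObjectStore(ABC):
    @abstractmethod
    def get(self, bucket: str, key: str) -> bytes:
        """Fetch one object from the store."""

class ProviderA(ObjectStore):
    def __init__(self, token: str):
        self.token = token

    def get(self, bucket: str, key: str) -> bytes:
        resp = requests.get(
            f"https://storage.provider-a.example/{bucket}/{key}",
            headers={"Authorization": f"Bearer {self.token}"},
        )
        resp.raise_for_status()
        return resp.content

class ProviderB(ObjectStore):
    def __init__(self, api_key: str):
        self.api_key = api_key

    def get(self, bucket: str, key: str) -> bytes:
        resp = requests.get(
            "https://api.provider-b.example/v1/object",
            params={"bucket": bucket, "key": key},
            headers={"X-Api-Key": self.api_key},
        )
        resp.raise_for_status()
        return resp.content

def fetch_report(store: ObjectStore) -> bytes:
    # The caller never sees which cloud is underneath.
    return store.get("agency-reports", "2016/q3.pdf")
```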

Gardner: We saw that OpenStack had its origins in a government agency, NASA. In that case, clearly a government organization, at least in the United States, was driving the desire for interoperability and standardization, a common platform approach. Has that been successful, Dave? Why wouldn’t the government continue to try to take that approach of a common, open-source platform for cloud interoperability?

Linthicum: OpenStack has had some fair success, but I wouldn't call it excellent success. One of the issues is that the government left it dangling out there; while they use some aspects of it, I really expected them to drive more adoption around that open standard, for lots of reasons.

So, they have to hack the operating systems and meet very specific needs around security, governance, compliance, and things like that. They have special use cases, such as the DoD, weapons control systems in real time, and some IoT stuff that the government would like to move into. So, that’s out there as an opportunity.
In other words, the ability to work with some of the distros out there, and there are dozens of them, and get into a special government version of that operating system, which is supported openly by the government integrators and providers, is something they really should take advantage of. It hasn’t happened so far and it’s a bit disappointing.

Insight into Europe

Gardner: Do any of you have any insight into Europe and some of the government agencies there? They haven’t been shy in the past about mandating certain practices when it comes to public contracts for acquisition of IT services. I think cloud should follow the same path. Is there a big difference in what’s going on in Europe and in North America?

Szakal: I just got off the phone a few minutes ago with my counterpart in the UK. The nice thing about the way the UK government is approaching cloud computing is that they're trying to do so by taking the handcuffs off the vendors and making sure that they are standards-based. They're meeting a certain quality of services for them, but they're not mandating through policy and by law the structure of their cloud. So, it allows for us, at least within IBM, to take advantage of this incredible industry ecosystem you have on the commercial side, without having to consider that you might have to lift and shift all of this very expensive infrastructure over to these industry clouds.

The EU is, in similar ways, following a similar practice. Obviously, data sovereignty is really an important element for most governments. So, you see a lot of focus on data sovereignty and data portability, more so than we do around strict requirements in following a particular set of security controls or standards that would lock you in and make it more difficult for you to evolve over a period of time.

Gardner: Chris Harding, to Andras’ point about data interoperability, do you see that as a point on the arrow that perhaps other cloud interoperability standards would follow? Is that something that you're focused on more specifically than more general cloud infrastructure services?

Harding: Cloud is a huge spectrum, from the infrastructure services at the bottom, up to the business services, the application services, to software as a service (SaaS), and data interoperability sits on top of that stack.

I'm not sure that we're ready to get real data interoperability yet, but the work that's being done on trying to establish common frameworks for understanding data, for interpreting data, is very important as a basis for gaining interoperability at that level in the future.

We also need to bear in mind that the nature of data is changing. It’s no longer a case that all data comes from a SQL database. There are all sorts of ways in which data is represented, including human forms, such as text and speech, and interpreting those is becoming more possible and more important.

This is the exciting area, where you see the most interesting work on interoperability.

Gardner: Dave Linthicum, one of the things that some of us who have been proponents of cloud for a number of years now have looked to is the opportunity to get something that couldn’t have been done before, a whole greater than the sum of the parts.
It seems to me that if you had a common cloud fabric, with a sufficient amount of interoperability for data, applications, and infrastructure services, and that cut across both the public and the private sector, then many long-standing difficulties could be solved: interoperability and communication between health-insurance payers and providers, sharing of government services, and sharing data with the private sector -- many of the things that have been blamed on bureaucracy and technical backwardness. A very significant benefit could be drawn when the public and private sectors have a commonality that owning your own data centers just couldn't provide.

Am I chewing on too much pie in the sky here, Dave, or is there actually something to be said about the cloud model, not just between government to government agencies, but the public and private sectors?

Getting more savvy

Linthicum: The public-cloud providers out there, the big ones, are getting more savvy about providing interoperability, because they realized that it’s going to be multi-cloud. It’s going to be different private and public cloud instances, different kinds of technologies, that are there, and you have to work and play well with a number of different technologies.

However, to be a little bit more skeptical, over the years, I've found out that they're in it for their own selfish interests, and they should be, because they're corporations. They're going to basically try to play up their technology to get into a market and hold on to the market, and by doing that, they typically operate against interoperability. They want to make it as difficult as possible to integrate with the competitors and leverage their competitors’ services.

So, we have that kind of dynamic going on, and it's incredibly frustrating, because we can certainly stand up, have the discussion, and reveal the concepts. You just did a really good job of laying out why this would be Nirvana, and we should start moving in this direction. You will typically get lots of head-nodding from the public-cloud providers and the private-cloud providers, but actions speak louder than words, and thus far, it's been very counterproductive.

Interoperability is occurring, but in dribs and drabs, and nothing holistic.

Gardner: Chris, it seems as if the earlier you instill interoperability and standardization, both technical and methodological, the better you can carry that into a future where we don't just repave cow paths -- where highly non-interoperable data centers aren't simply replaced by the same thing in the cloud, rather than in some building that you control.

What do you think is going to be part of the discussion at The Open Group Paris Event, October 24, around some of these concepts of eGovernment? Shouldn’t they be talking about trying to make interoperability something that's in place from the start, rather than something that has to be imposed later in the process?

Harding: Certainly this will be an important topic at the forthcoming Paris event. My personal view is that the question of when you should standardize something to gain interoperability is a very difficult balancing act. If you do it too late, then you just get a mess of things that don’t interoperate, but equally, if you try to introduce standards before the market is ready for them, you generally end up with something that doesn’t work, and you get a mess for a different reason.

Part of the value of industry events, such as The Open Group events, is for people in different roles in different organizations to be able to discuss with each other and get a feel for the state of maturity and the directions in which it's possible to create a standard that will stick. We're seeing a standard paradigm, the API paradigm, that was mentioned earlier. We need to start building more specific standards on top of those, and certainly in Paris and at future Open Group events, those are the things we'll be discussing.

Gardner: Andras, you wear a couple of different hats. One is Chief Technology Officer at IBM U.S. Federal, but you're also very much involved with The Open Group; I think you're on the Board of Directors. How do you see the progression of what The Open Group has been able to do in other spheres around standardization, both methodological -- such as the enterprise architecture framework TOGAF®, an Open Group standard -- and in the implementation and enforcement of standards? Is what The Open Group has done in the past something you expect to be applicable to these cloud issues?

Szakal: IBM has a unique history as one of the few companies in the technology arena that is over 100 years old and has been able to retain great value for its customers over that long period of time. We shifted from a fairly closed computing environment to this idea of open interoperability and freedom of choice.

That's our approach for our cloud environment as well. What drives us in this direction is that our customers require it from IBM; we're a common infrastructure and a glue that binds together many of the largest enterprise, financial, banking, and healthcare institutions in the world, to ensure that they can interoperate with other vendors.
As such, we were one of the founders of The Open Group, which has been at the forefront of helping facilitate this discussion about open interoperability. I'm totally with Chris as to when you would approach that. As I said before, my concern is that you interoperate at the service level in the economy of APIs. That would suggest that there are some other elements for that, not just the API itself, but the ability to effectively manage credentials, security, or some other common services, like being able to manage object stores to the place that you would like to be able to store your information, so that data sovereignty isn’t an issue. These are all the things that will occur over a period of time.

Early days

It's early, heady days in the cloud world, and we're going to see all of that goodness come to pass as we go forward. In reality, we talk about cloud as if it's a thing. Its true value isn't so much in the technology, but in creating these new disruptive business capabilities and business models. Openness of the cloud alone doesn't guarantee the creation of those new business models.

That's where we need to focus. Are we able to actually drive these new collaborative models with our cloud capabilities? You're going to be interoperating with many CSPs, not just two, three, or four, especially as you see different sectors grow into the cloud. It won't matter where they operate their cloud services from; it will matter how they actually interoperate at that API level.

Gardner: It certainly seems to me that the interoperability is the killer application of the cloud. It can really foster greater inter-department collaboration and synergy, government to government, state to federal, across the EU, for example as well, and then also to the private sector, where you have healthcare concerns and you've got monetary and banking and finance concerns all very deeply entrenched in both public and private sectors. So, we hope that that’s where the openness leads to.

Chris, before we wrap up, it seems to me that there's a precedent that has been set successfully with The Open Group, when it comes to security. We've been able to do some pretty good work over the past several years with cloud security using the adoption of standards around encryption or tokenization, for example. Doesn’t that sort of give us a path to greater interoperability at other levels of cloud services? Is security a harbinger of things to come?

Harding: Security certainly is a key aspect that needs to be incorporated in the standards we build on the API paradigm. But some people talk about the move to digital transformation, the digital enterprise. Cloud and other things, like IoT and big-data analysis, are all coming together, and a key underpinning requirement for that is platform integration. That's where the Open Platform 3.0™ Forum of The Open Group is centering on the possibilities for platform interoperability to enable digital platform integration. Security is a key aspect of that, but there are other aspects too.

Gardner: I'm afraid we'll have to leave it there. We've been discussing the latest developments in eGovernment and cloud adoption with a panel of experts. Our focus on these issues comes in conjunction with The Open Group Paris Event and Member Meeting, October 24-27, 2016 in Paris, France. There is still time to register and to find more information on that event, and many others coming in the near future.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: The Open Group.


Tuesday, October 11, 2016

How propelling instant results to the Excel edge democratizes advanced analytics

The next BriefingsDirect Voice of the Customer digital transformation case study explores how powerful and diverse financial information is newly and uniquely delivered to the ubiquitous Excel spreadsheet edge.

We'll explore how HTI Labs in London provides the means and governance with its Schematiq tool to bring critical data services to the interface users want. By leveraging the best of instant cloud-delivered data with spreadsheets, Schematiq democratizes end-user empowerment while providing powerful new ways to harness and access complex information.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn how complex cloud core-to-edge processes and benefits can be better managed and exploited we're joined by Darren Harris, CEO and Co-Founder of HTI Labs, and Jonathan Glass, CTO and Co-Founder of HTI Labs, based in London. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let's put some context around this first. What major trends in the financial sector led you to create HTI Labs, and what are the problems you're seeking to solve?

Harris: Obviously, in finance, spreadsheets are widespread and are being used for any number of problems. A real issue started a number of years ago, when spreadsheets got out of control. People were using them everywhere, in lots of operational risk processes. Firms wanted to get their hands around it for governance, and there were loads of Excel-type issues that we needed to eradicate.

That led to the creation of centralized teams that locked down rigid processes and effectively took away a lot of the innovation and discovery process that traders are using to spot opportunities and explore data.

Through this process, we're trying to help with governance to understand the tools to explore, and [deliver] the ability to put the data in the hands of people ... [with] the right balance.

So by taking the best of regulatory scrutiny around what a person needs, and some innovation that we put into Schematiq, we see an opportunity to take Excel to another level -- but not sacrifice the control that’s needed.

Gardner: Jonathan, are there technology trends that allowed you to be able to do this, whereas it may not have been feasible economically or technically before?

Upstream capabilities

Glass: There are a lot of really great back-end technologies available now, along with the ability to scale compute resources either internally or externally. Essentially, the desktop remains quite similar. Excel has remained much the same, but the upstream capabilities have really grown.

So there's a challenge. The data that people feel they should have access to is getting bigger, more complex, and less structured. Excel, which is this great front-end for coming to grips with data, is becoming a bit of a bottleneck in terms of actually keeping up with the data that's out there that people want.

Gardner: So, we're going to keep Excel. We're not going to throw the baby out with the bathwater, so to speak, but we are going to do something a little bit different and interesting. What is it that we're now putting into Excel and how is that different from what was available in the past?

Harris: Schematiq extends Excel and allows it to access unstructured data. It also reduces the complexity and technical limitations that Excel has as an out-of-the-box product.

We have the notion of a data link that's effectively in a single cell that allows you to reference data that’s held externally on a back-end site. So, where people used to ingest data from another system directly into Excel, and effectively divorce it from the source, we can leave that data where it is.
It's a paradigm of take a question to the data; don’t pull the data to the question. That means we can leverage the power of the big-data platforms and how they process an analytic database on the back-end, but where you can effectively use Excel as the front screen. Ask questions from Excel, but push that query to the back-end. That's very different in terms of the model that most people are used to working with in Excel.
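
A minimal sketch of the contrast, with sqlite3 standing in for the analytic back-end; the table and function names are illustrative, not Schematiq's actual interface.

```python
# Sketch of "take the question to the data" versus pulling data to Excel.
# The "trades" table is illustrative only.
import sqlite3

def pull_the_data(conn: sqlite3.Connection) -> float:
    # Old model: ingest every row into the spreadsheet, then sum locally.
    # The copy is divorced from its source and goes stale immediately.
    rows = conn.execute("SELECT trade_value FROM trades").fetchall()
    return sum(value for (value,) in rows)

def take_the_question_to_the_data(conn: sqlite3.Connection) -> float:
    # Push the query to the back-end; a single cell's data link references
    # the result, and only the answer crosses the wire.
    (total,) = conn.execute("SELECT SUM(trade_value) FROM trades").fetchone()
    return total
```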

Gardner: This is a two-way street. It's a bit different. And you're also looking at the quality, compliance, and regulatory concerns over that data.

Harris: Absolutely. An end-user is able to break down or decompose any workflow process with data and debug it the same way they can in a spreadsheet. The transparency that we add on top of Excel’s use with Schematiq allows us to monitor what everybody is doing and the function they're using. So, you can give them agility, but still maintain the governance and the control.

In organizations, lots of teams have become disengaged. IT has tried to create some central core platform that’s quite restrictive, and it's not really serving the users. They have gotten disengaged and they've created what Gartner referred to as the Shadow BI Team, with databases under their desk, and stuff like that.

By bringing in Schematiq we add that transparency back, and we allow IT and the users to have an informed discussion -- a very analytic conversation -- around what they're using, how they are using it, where the bottlenecks are. And then, they can work out where the best value is. It's all about agility and control. You just can't give the self-service tools to an organization and not have the transparency for any oversight or governance.

To the edge

Gardner: So we have, in a sense, brought this core to the edge. We've managed it in terms of compliance and security. Now, we can start to think about how creative we can get with what's on that back-end that we deliver. Tell us a little bit about what you go after, what your users want to experiment with, and then how you enable that.

Glass: We try to be as agnostic to that as we can, because it's the creativity of the end-user that really drives value.

We have a variety of different data sources, traditional relational databases, object stores, OLAP cubes, APIs, web queries, and flat files. People want to bring that stuff together. They want some way that they can pull this stuff in from different sources and create something that's unique. This concept of putting together data that hasn't been put together before is where the sparks start to fly and where the value really comes from.

Gardner: And with Schematiq you're enabling that aggregation and cleansing ability to combine, as well as delivering it. Is that right?

Harris: Absolutely. It's that discovery process. It may be very early on in a long chain. This thing may progress to be something more classic, operational, and structured business intelligence (BI), but allowing end-users the ability to cleanse, explore data, and then hand over an artifact that someone in the core team can work with or use as an asset. The iteration curve is so much tighter and the cost of doing that is so much less. Users are able to innovate and put together the scenario of the business case for why this is a good idea.

The only thing I would add to the sources that Jon has just mentioned is with HPE Haven OnDemand, [you gain access to] the unstructured analytics, giving the users the ability to access and leverage all of the HPE IDOL capabilities. That capability is a really powerful and transformational thing for businesses.

They have such a set of unstructured data [services] available in voice and text, and when you allow business users access to that data, the things they come up with, their ideas, are just quite amazing.

Technologists always try to put themselves in the minds of the users, and we've all historically done a bad job of making the data more accessible for them. When you allow them the ability to analyze PDFs without structure, to share that, to analyze sentiment, to include concepts and entities, or even enrich a core proposition, you're really starting to create innovation. You've raised the awareness of all of these analytics that exist in the world today in the back-end, shown end-users what they can do, and then put their brains to work discovering and inventing.

Gardner: Many of these financial organizations are well-established, many of them for hundreds of years perhaps. All are thinking about digital transformation, the journey, and are looking to become more data-driven and to empower more people to take advantage of that. So, it seems to me you're almost an agent of digital transformation, even in a very technical and sophisticated sector like finance.

Making data accessible

Glass: There are a lot of stereotypes in terms of who the business analysts are and who the people are that come up with ideas and invention. The true power of democratization is making data more accessible, lowering the technical barrier, and allowing people to explore and innovate. Things always come from where you least expect them.

Gardner: I imagine that Microsoft is pleased with this, because there are some people who are a bit down on Excel. They think that it's manual, that it's by rote, and that it's not the way to go. So, you, in a sense, are helping Excel get a new lease on life.

Glass: I don’t think we're the whole story in that space, but I love Excel. I've used it for years and years at work. I've seen the power of what it can do and what it can deliver, and I have a bit of an understanding of why that is. It’s the live nature of it, the fact that people can look at data in a spreadsheet, see where it’s come from, see where it’s going, they can trust it, and they can believe in it.
That's why what we're trying to do is create these live connections to these upstream data sources. There are manual steps -- download, copy/paste, move around the sheet -- which is where errors creep in. It's where the bloat, the slowness, and the unreliability can happen. By changing that into a live connection to the data source, it becomes instant, and it goes back to being trusted, reliable, and actionable.

Harris: There's something in the DNA, as well, of how people interact with data and so we can lay out effectively the algorithm or the process of understanding a calculation or a data flow. That’s why you see a lot of other systems that are more web-based or web-centric and replicate an Excel-type experience.

The user starts to use it and starts to think, "Wow, it's just like Excel," and it isn't. They hit a barrier, they hit a wall, and then they hit the "export" button. Then, they put it back [into Excel] and create their own way to work with it. So, there's something in the DNA of Excel and the way people lay things out. I think of [Excel] almost like a programming environment for non-programmers. Some people describe it as a functional language, very much like Haskell, with the Excel functions they write effectively working on and navigating through the data.

Gardner: No need to worry that if you build it, will they come; they're already there.

Harris: Absolutely.

Gardner: Tell us a bit about HTI Labs and how your company came about, and where you are on your evolution.

Cutting edge

Harris: HTI Labs was founded in 2012. The core backbone of the team worked for the same tier-1 investment bank, where we were building risk and trading systems for front-office teams. We were really, I suppose, at the cutting edge of all the big-data technologies being used at the time -- real-time, distributed graphs and cubes, and everything.

As a core team, it was about taking that expertise and bringing it to other industries: using Monte Carlo farms for risk calculations, exporting data at speed, and computing real-time risk. These things were becoming more central to other organizations, which was an opportunity.

At the moment, we're focusing predominately on energy trading. Our software is being used across a number of other sectors and our largest client has installed Schematiq on 120 desktops, which is great. That’s a great validation of what we're doing. We're also a member of the London Stock Exchange Elite Program, based in London for high-growth companies.

Glass: Darren and I met when we were working for the same company. I started out as a quant doing the modeling, the math behind pricing, but I found that my interest lay more in the engineering. Rather than doing it once, can I do it a million times? Can I do these things reliably and scale them?

Because I started in a front-office environment, it was very spreadsheet-dominated, very VBA-dominated. There's good and bad in that, and we learned a lot of lessons. Darren and I met up and crossed the divide together, from the top-down, big IT systems to the bottom-up, end-user-developed spreadsheets, and so on. We found a middle ground together, which we feel is a quite powerful combination.

Gardner: Back to where this leads. We're seeing more-and-more companies using data services like Haven OnDemand and starting to employ machine learning, artificial intelligence (AI), and bots to augment what the humans do so well. Is there an opportunity for that to play here, or maybe it already is? The question basically is, how does AI come to bear on what you can deliver out to the Excel edge?

Harris: I think what you see is that, out of the box, you have a base unit of capability. The algorithms are built, but the key to making them so much better is the feedback loop with your domain users, your business users, and how they can effectively enrich and train these algorithms.

So, we see a future where the self-service BI tools that they use to interact with data and explore would almost become the same mechanism where people will see the results from the algorithms and give feedback to send back to the underlying algorithm.

Gardner: And Jonathan, where do you see the use of bots, particularly perhaps with an API model like Haven OnDemand?

The role of bots

Glass: The concept for bots is replicating an insight or a process that somebody might already be doing manually. People create these data flows and analyses that they maybe run only once, because they're quite time-consuming to run. The really exciting possibility is that you make these things run 24x7. You start receiving notifications in your own mailbox from the bot you've created, rather than having to pull from the data source. You look at those and decide whether each is a good insight or a bad insight, and you can then start to train it and refine it.

The training and refining is that loop that potentially goes back to IT, gets back through a development loop, and it’s about closing that loop and tightening that loop. That's the thing that really adds value to those opportunities.

Gardner: Perhaps we should unpack Schematiq a bit to understand how one might go back and do that within the context of your tool. Are there several components of the tool, one of which might lend itself to going back and automating?

Glass: Absolutely. You can imagine the spreadsheet has some inputs and some outputs. One of the components within the Schematiq architecture is the ability to take a spreadsheet, to take the logic and the process that’s embedded in our spreadsheet, and turn it into an executable module of code, which you can host on your server, you can schedule, you can run as often as you like, and you can trigger based on events.

It's a way of emitting code from a spreadsheet. You take the insight -- the exact thing that the user, the analyst, has programmed -- without a business-analysis loop and a development loop, and you make it into something that you can run, commoditize, and scale. That's quite an important way in which we shorten that development loop. We create a cycle that's tight and rapid.
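
A rough sketch of that pattern outside any particular product: the inputs-to-outputs logic the analyst built in the sheet becomes a plain function that a server can schedule or trigger on events. All names here are hypothetical, not Schematiq's actual emitted code.

```python
# Sketch of sheet logic "emitted" as a schedulable module.
import time
from datetime import datetime

def sheet_logic(inputs: dict) -> dict:
    # Stand-in for whatever calculation the analyst expressed in cells.
    margin = inputs["revenue"] - inputs["cost"]
    return {"margin": margin, "breach": margin < 0}

def run_scheduled(fetch_inputs, notify, interval_seconds: int = 3600):
    """Run the extracted logic continuously, pushing notifications instead
    of waiting for someone to open the workbook and refresh it."""
    while True:
        outputs = sheet_logic(fetch_inputs())
        if outputs["breach"]:
            notify(f"{datetime.now().isoformat()}: margin went negative "
                   f"({outputs['margin']:.2f})")
        time.sleep(interval_seconds)
```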

Gardner: Darren, would you like to explain the other components that make-up Schematiq?

Harris: There are four components to the Schematiq architecture. There's the workbench, which extends Excel and adds the ability to do large, structured data analytics. We have the asset manager, which is really all about governance. You can think of it like source control for Excel, but with a lot more around metadata control, transparency, and analytics on what people are using and how they're using it.

There's a server component that allows you to off-load and scale analytics horizontally and to build repeatable or overnight processes. The last part is the portal. This is really about allowing end-users to instantly share their insights with other people -- picking up on Jon's point about the compound executable defined in Schematiq, which can be off-loaded to a server and exposed as another API to a computer, a mobile device, or even another function.

So, it's very much all about empowering the end-user to connect, create, govern, and share instantly, and then allow consumption by anybody on any device.

Market for data services

Gardner: I imagine, given the sensitive nature of the financial markets and activities, that you have some boundaries that you can’t cross when it comes to examining what’s going on in between the core and the edge.

Tell me about how you, as an organization, can look at what’s going on with the Schematiq and the democratization, and whether that creates another market for data services when you see what the demand entails.

Harris: It’s definitely the case that people have internal datasets they create and that they look after. People are very precious about them because they are hugely valuable, and one of the things that we strive to help people do is to share those things.

Across the trading floor, you might effectively have a dozen or more different IT infrastructures, if you think of what exists on each desk as a miniature infrastructure that's been created. So, it's about making it easy for people to share these things, to create master datasets that they gain value from, and to see that they gain mutual value from that, rather than feeling closed in and not wanting to share with their neighbors.

If we work together and if we have the tools that enable us to collaborate effectively, then we can all get more done and we can all add more value.

Gardner: It's interesting to me that the more we look at the use of data, the more it opens up new markets and innovation capabilities that we hadn’t even considered before. And, as an analyst, I expect to see more of a marketplace of data services. You strike me as an accelerant to that.

Harris: Absolutely. As the analytics come online and are exposed by APIs, the underlying store becomes a bit irrelevant. You look at what the analytics can do for you; that's how you consume the insight, and you can connect to other sources. You can connect to Twitter, to Facebook, to PDFs; whether it's NoSQL, structured, columnar, or rows underneath, it doesn't really matter. You don't see that complexity. The fact that you can just create an API key, access it as a consumer, and start to work with it is really powerful.

There was the recent example in the UK of a report on the Iraq War. It’s 2.2 million words, it took seven years to write, and it’s available online, but there's no way any normal person could consume or analyze that. That’s three times the complete works of Shakespeare.

Using these APIs, you can start to pull out mentions, countries, and locations, really get into the data, and give anybody with Excel at home, in our case, or any other tool, the ability to analyze it and share those insights. We're very used to media where we get just the headline, and then spin comes into play. People turn things on their head, and you never really get to delve into the underlying detail.
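
A hedged sketch of what that looks like from Excel's side of the wire: one REST call hands back structured entity mentions. Haven OnDemand offered entity extraction of this kind, but the endpoint, parameters, and response shape below are simplified placeholders rather than the service's exact signature.

```python
# Sketch: pulling entity mentions out of a long document through a
# text-analytics REST API. All names here are placeholders.
import requests

def extract_entities(api_key: str, text: str, entity_types: list) -> dict:
    resp = requests.post(
        "https://api.havenondemand.example/extractentities",  # placeholder URL
        data={"apikey": api_key, "text": text, "entity_type": entity_types},
    )
    resp.raise_for_status()
    return resp.json()

# Usage idea: feed in a long report's text, ask for people and places, then
# land the structured mentions in a sheet to filter, count, and chart.
# mentions = extract_entities(key, report_text, ["people_eng", "places_eng"])
```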
What's really interesting is that when democratization, the sharing of insights, and collaboration come, we can all be informed. We can all dig deep, and all the great analysts out there can start to collaborate, delve, find new discoveries, and share that insight.

Gardner: All right, a little light bulb just went off in my head. Whereas before we would read a headline and a news story with, at best, a hyperlink to a source, now I could get a headline and a news story, open up my Excel spreadsheet, get to the actual data source behind the entire story, and then probe and plumb and analyze it any which way I wanted to.

Harris: Yes, exactly. The most savvy consumers now, the analysts, are starting to demand that transparency. We've seen it in the UK with election messages, quotes, and even financial stats, where people just don't believe the headlines. They're demanding transparency in the process, and so governance can only be a good thing.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.
