Tuesday, December 8, 2015

Need for fast analytics in healthcare spurs Sogeti converged BI solutions partnership model

The next BriefingsDirect big-data solution discussion explores how a triumvirate of big-data players is delivering a rapid and efficient analysis capability across disparate data types for the healthcare industry.

We'll learn how the drive for better patient outcomes amid economic efficiency imperatives has created a demand for a new type of big-data implementation model. This solutions approach -- with the support from Hewlett Packard Enterprise, Microsoft, and Sogeti -- leverages a nimble big-data platform, converged solutions, hybrid cloud, and deep vertical industry expertise.

The result is innovative and game-changing insights across healthcare ecosystems of providers, patients, and payers. The ramp-up to these novel and accessible insights is rapid, and the cost-per-analysis value is very impressive.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to share the story on how the Data-Driven Decisions for Healthcare initiative arose and why it portends more similar vertical industry focused solutions, we're joined by Bob LeRoy, Vice President in the Global Microsoft Practice and Manager of the HPE Alliance at Sogeti USA. He's based in Cincinnati. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Why the drive for a new model for big data analytics in healthcare? What are some of the drivers, some of the trends, that have made this necessary now?

LeRoy: Everybody is probably very familiar with the Affordable Care Act (ACA), also known as ObamaCare. They've put a lot of changes in place for the healthcare industry, and primarily it's around cost containment. Beyond that, the industry itself understands that they need to improve the quality of care that they're delivering to patients. That's around outcomes, how can we affect the care and the wellness of individuals.

So it’s around cost and the quality of the care, but it’s also about how the industry itself is changing, both from how providers are now doing more with payments and how classic payers are doing more to actually provide care themselves. There is this blur between the lines of payer and provider.

Some of these people are actually becoming what we call accountable care organizations (ACOs). We see a new one of these ACOs come up each week, where they are both payer and provider.

Gardner: Not only do we have a dynamic economic landscape, but the ability to identify what works and what doesn't work can really be important, especially when dealing with multiple players and multiple data types. This is really not just knowing your own data; this is knowing data across organizational boundaries.

LeRoy: Exactly. And there are a lot of different data models that exist. When you look at things like big data and the volume of data that exists out in the field, you can put that data to use to understand who your critical patients are and how that can affect your operations.

Gardner:  Why do we look to a triangulated solution between players like Hewlett Packard Enterprise, Microsoft, and Sogeti? What is it about the problem that you're trying to solve that has led it to a partnership type of solution?

Long-term partner

LeRoy: Sogeti, a wholly-owned subsidiary of the Capgemini Group, has been a long-term partner with Microsoft. The tools that Microsoft provides are one of the strengths of Sogeti. We've been working with HPE now for almost two years, and it's a great triangulation between the three companies. Microsoft provides the software, HPE provides the hardware, and Sogeti provides the services to deliver innovative solutions to customers and do it in a rapid way. What you're getting is best in class in all three of those categories -- the software, the hardware, and the services.

Gardner: There's another angle to this, too, and it’s about the cloud delivery model. How does that factor into this? When we talked about hardware, it sounds like there's an on-premises aspect to it, but how does the cloud play a role?

LeRoy: Everybody wants to hear about the cloud, and certainly it’s important in this space, too, because of the type of data that we're collecting. You could consider social data or data from third party software-as-a-service (SaaS) applications, and that data can exist everywhere.
You have your on-premise data and you have your off-premise data. The tools that we're using, in this case from HPE and Microsoft, really lend themselves well to developing a converged environment to deliver best in class across those different environments. They're secured, delivered quickly, and they provide the information and the insights that hospitals and insurance companies really need.

Gardner: So we have a converged solution set from HPE. We have various clouds that we can leverage. We have great software from Microsoft. Tell us a little about Sogeti and what you're bringing to the table. What is it that you've been doing in healthcare that helps solidify this solution and the rapid analysis requirements?

LeRoy: This is one of the things that Sogeti brings to the table. Sogeti is part of the Capgemini Group, a global organization with 150,000 employees, and Sogeti is one of the five strategic business units of the group. Sogeti’s strength is that we're really focused on the technology and the implementations of technology, and we are focused on several different verticals, healthcare being one of them.

We have experts on the technology stacks, but we also have experts in healthcare itself. We have people who we've pulled from the healthcare industry. We taught them what we do in the IT world, so they can help us apply best practices and technologies to solve real healthcare organizational problems, so that we can get toward the quality of care and the cost reduction that the ACA is really looking for. That’s a real strength that's going to add significant value to healthcare organizations.

Gardner: It’s very important to see that one size does not fit all when it comes to the systems. Having industry verticalization is required, and you're embarking on a retail equivalent to this model, and manufacturing in other sectors might come along as well.

Let's look at why this approach to this problem is so innovative. What have been some of the problems that have held back the ability of large and even mid-sized organizations in the healthcare vertical industry from getting these insights? What are some of the hurdles that they've had to overcome and that perhaps beg for a new and different model and a new approach?

Complexity of data

LeRoy: There are a couple of factors. For sure, it’s the complexity of the data itself. The data is distributed over a wide variety of systems. So it’s hard to get a full picture of a patient or a certain care program, because the systems are spread out all over the place. When the data reaches you from so many different systems in so many different ways, you get only part of the picture. We call that poor data quality, and that makes it hard for somebody who's doing analysis to really understand and gain insight from their data.

Of course, there's also the existing structure that’s in place within organizations. They've been around for a long time. People are sometimes resistant to change. Take all of those things together and you end up with a slow response time to delivering the data that they're looking for.

Access to the data becomes very complex or difficult for an end-user or a business analyst. The cost of changing those structures can be pretty expensive. If you look at all those things together, it really slows down an organization’s ability to understand the data that they've got to gain insights about their business.

Gardner: Just a few years ago, when we used to refer to data warehouses, it was a fairly large undertaking. It would take months to put these into place, required a data center or some sort of a leasing arrangement, and of course a significant amount of upfront costs. How has this new model approached those costs and length of time or ramp-up time issues?

LeRoy: Microsoft’s model that they have put in place to support their Analytics Platform System (APS) allows them to license their tools at a lower price. The other thing that's really made a difference is the way HPE has put together their ConvergedSystem that allows us to tie these hybrid environments together to aggregate the data in a very simple solution that provides a lot of performance.

If I have to look at unstructured data and structured data, I often need two different systems. HPE is providing a box that’s going to allow me to put both into a single environment. So that’s going to reduce my cost a lot.

They have also delivered it as an appliance, so I don't need to spend a lot of time buying, provisioning, or configuring servers, setting up software, and all those things. I can just order this ConvergedSystem from HPE, put it in my data center, and I am almost ready to go. That’s the second thing that really helps save a lot of time.
The third one is that at Sogeti Services, we have some intellectual property (IP) to help the data integration from these different systems and the aggregation of the data. We've put together some software and some accelerators to help make that integration go faster.

The last piece of that is a data model that structures all this data into a single view that makes it easier for the business people to analyze and understand what they have. Usually, it would take you literally years to come up with these data models. Sogeti has put all the time into it, created these models, and made it something that we can deliver to a customer much faster, because we've already done it. All we have to do is install it in your environment.

It's those three things together -- the software pricing from Microsoft, the appliance model from HP, and the IP and the accelerators that Sogeti has.

Consumer's view

Gardner: Bob, let’s look at this now through the lens of that consumer, the user. It wasn’t that long ago where most of the people doing analytics were perhaps wearing white lab coats, very accomplished in their particular query languages and technologies. But part of the thinking now for big data is to get it into the hands of more people.

What is it that your model, this triumvirate of organizations coming together for a solution approach, does in terms of making this data more available? What are the outputs, who can query it, and how has that had an impact in the marketplace?

LeRoy: We've been trying to get this to the end users for 30 years. I've been trying to get reports into the hands of users and let them do their own analysis, and every time I get to a point where I think this is the answer -- the users will finally be able to do their own reports, which frees up guys in the IT world like me to go off and do other things -- it doesn’t always work.

This time, though, it's really interesting. I think we have got it. We allow users direct access to the data, using the tools that they already know. So I'm not going to create and introduce a new tool to them. We're using tools that are very similar to Excel, pointing to a data source that’s already well organized for them, with data that they're already familiar with.

So if they're using Microsoft Excel-like tools, they can do Power Pivots and pivot tables that they've already been doing, but just in an offline manner. Now, I can give them direct access to real-time data.

Instead of waiting until noon to get reports out, they can go and look online and get the data much sooner, so we can accelerate their access time to it, but deliver it in a format that they're comfortable with. That makes it easier for them to do the analysis and gain their insights without the IT people having to hold their hands.

Gardner: Perhaps we have some examples that we can look to that would illustrate some of this. You mentioned social media, the cloud-based content or data. How has that come to bear on some of these ways that your users are delivering value in terms of better healthcare analytics?

LeRoy: The best example I have is the ability to bring in data that’s not in a structured format. We often think of external data, but sometimes it’s internal data, too -- maybe x-rays or people doing queries on the Internet. I can take all of that unstructured data and correlate it to my internal electronic medical records or my health information systems that I have on-premise.

If I'm looking at Google searches, and people are looking for keywords such as "stress," "heart attacks," "cardiac care," or something like that, I can map the times that people are running those kinds of queries by region. I can tie that back to my systems and ask what the behavior or the traffic patterns look like within my facility at those same times. You can target certain areas: maybe change the staffing model if there is a big jump in searches, run a campaign asking people to come in for a screening, or encourage people to get to their primary-care physicians.

There are a lot of things we can do with the data by looking just at the patterns. It will help us narrow down the areas of our coverage that we need to work with, what geographic areas I need to work on, and how I manage the operations of the organization, just by looking at the different types of data that we have and tying them together. This is something that we couldn't do before, and it’s very exciting to see that we're able to gain such insights and be able to take action against those insights.
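To make the search-versus-traffic idea concrete, here is a minimal sketch in Python of the kind of correlation LeRoy describes. The file names, column names, and regional granularity are illustrative assumptions, not part of the Data-Driven Decisions for Healthcare solution itself.

```python
# Hypothetical sketch: correlating regional search-term volume with facility
# traffic. File and column names are illustrative assumptions.
import pandas as pd

# Weekly counts of searches for terms like "stress" or "cardiac care", by region
searches = pd.read_csv("search_volume_by_region_week.csv")  # columns: region, week, searches
# Weekly patient visits, mapped to the same regions
visits = pd.read_csv("facility_visits_by_region_week.csv")  # columns: region, week, visits

merged = searches.merge(visits, on=["region", "week"])

# Correlation between search volume and visit volume, per region
corr_by_region = (
    merged.groupby("region")
          .apply(lambda g: g["searches"].corr(g["visits"]))
          .sort_values(ascending=False)
)
print(corr_by_region.head(10))

# Regions where searches run well ahead of visits could justify a screening
# campaign or a staffing change, per the discussion above.
```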

Applying data science

Gardner: I can see now why you're calling it the Data-Driven Decisions for Healthcare initiative, because you're really applying data science to areas that would probably never have been considered for it before. People might use intuition or anecdote, or rely on evidence that was perhaps not all that accurate. Maybe you could illustrate a little bit more the ways in which you're using data science and very powerful systems to gather insights into areas that we just never thought to apply such high-powered tools to before.
LeRoy: Let’s go back to the beginning when we talked about how we change the quality of care that we are providing. Today, doctors collect diagnosis codes for just about every procedure that we have done. We don’t really look and see how many times those same procedures are repeated or which doctors are performing which procedures. Let’s look at the patients, too, and which patients are getting those procedures. So we can tie those diagnosis codes together in a lot of different ways.

The one that I probably like the best is this: I want to know which doctors perform a procedure only once per patient and get the best results from the treatments they perform. Now, if I'm from a hospital, I know which doctors perform which procedures best, and I can direct the patients who need those procedures to the doctors who provide the best care.

And the reverse of that might be that if a doctor doesn’t perform that procedure well, let’s avoid sending him those kinds of patients. Now, my quality of care goes up, the patient has a better experience, and we're going to do it at a lower cost because we're only doing it once.
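As a rough illustration of that analysis, the sketch below ranks physicians by how often a procedure is performed only once per patient and by average outcome. The claims extract and its columns are assumptions made for the example; the actual solution's data model is not described in this discussion.

```python
# Illustrative sketch only: ranking physicians by how often a procedure is done
# once per patient and by average outcome. Columns are assumed for the example.
import pandas as pd

claims = pd.read_csv("procedure_claims.csv")
# assumed columns: doctor_id, patient_id, procedure_code, outcome_score

per_patient = (
    claims.groupby(["doctor_id", "procedure_code", "patient_id"])
          .agg(times_performed=("procedure_code", "size"),
               outcome=("outcome_score", "mean"))
          .reset_index()
)

doctor_stats = (
    per_patient.groupby(["doctor_id", "procedure_code"])
               .agg(single_procedure_rate=("times_performed", lambda s: (s == 1).mean()),
                    avg_outcome=("outcome", "mean"),
                    patients=("patient_id", "count"))
               .reset_index()
)

# Doctors who most often get it right the first time, with the best outcomes,
# are candidates to receive more of the patients who need that procedure.
best = doctor_stats.sort_values(["single_procedure_rate", "avg_outcome"], ascending=False)
print(best.head(10))
```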

Gardner: Let’s dive into this solution a bit, because I'm intrigued that this model -- bringing together a converged-infrastructure provider, a software provider, and expertise that crosses the chasm between a technology capability and a vertical-industry knowledge base -- works. So let’s dig in a little bit. The Microsoft APS: tell us a little bit about what it includes and why it’s powerful and applicable in this situation.

LeRoy: The APS is a solution that combines unstructured data and structured data into a single environment and it allows the IT guys to run classic SQL queries against both.

On one side, we have what used to be called Parallel Data Warehouse. It’s a really fast version of SQL Server. It's massively parallel processing, and it can run queries super fast. That’s the important part. I have structured data that I can get to very quickly.

The other half of it is HDInsight, which is Microsoft's open source implementation of Hadoop. Hadoop is all unstructured data. In between these two things there is PolyBase. So I can query the two together and I can join structured and unstructured data together.

Since Microsoft created this APS specification, HPE implemented it in a box that they call the ConvergedSystem 300. Sogeti has built our IP on top of that. We can consume data from all these different areas, put it into the APS, and deliver that data to an end user through a simple interface like Excel, Power BI, or some other visualization tool.
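For readers who want a feel for what that looks like in practice, here is a minimal sketch of querying an APS appliance from Python via pyodbc, joining a relational table on the Parallel Data Warehouse side with a PolyBase external table over the HDInsight region. The server address, database, table, and column names are all assumptions for illustration.

```python
# Minimal sketch, not production code: join a structured (PDW) table with a
# PolyBase external table that fronts data in the Hadoop/HDInsight region.
# Server, database, table, and column names are assumed for illustration.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=aps-appliance.example.org,17001;"   # assumed APS endpoint
    "DATABASE=CareAnalytics;"
    "Trusted_Connection=yes;"
)

sql = """
SELECT p.PatientId,
       p.PrimaryDiagnosisCode,
       COUNT(w.QueryText) AS RelatedSearches
FROM   dbo.PatientEncounters AS p               -- structured (PDW) side
JOIN   ext.WebSearchLogs     AS w               -- PolyBase external table over Hadoop
       ON w.Region = p.Region
      AND w.QueryDate BETWEEN p.AdmitDate AND p.DischargeDate
GROUP BY p.PatientId, p.PrimaryDiagnosisCode;
"""

for row in conn.cursor().execute(sql):
    print(row.PatientId, row.PrimaryDiagnosisCode, row.RelatedSearches)
```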

Significant scale

Gardner: Just to be clear for our audience, sometimes people hear appliance and they don't necessarily think big scale, but the HPE ConvergedSystem 300 for Microsoft APS is quite significant, with server, storage, and networking technologies and support for large amounts of data -- up to 6 petabytes. So we're talking about fairly significant amounts of data here, not small fry.

LeRoy: And they put everything into that one rack. We think of an appliance as something like a toaster that we plug in. That’s pretty close to where they are -- not exactly, but you drop this big rack into your data center, give it an IP address, give it some power, and now you can start to take existing data and put it in there. It runs extremely well because they've incorporated the networking, the computing platforms, and the storage all within a single environment, which is really effective.

Gardner: Of course, one of the big initiatives at Microsoft has been cloud with Azure. Is there a way in which the HPE Converged Infrastructure in a data center can be used in conjunction with a cloud service like Azure, or with other public cloud, infrastructure-as-a-service (IaaS), or data-warehousing cloud services, in a way that accelerates delivery and/or brings in more types of data from more places? How does the public cloud fit into this?

LeRoy: You can distribute the solution across that space. In fact, we take advantage of the cloud delivery as a model. We use a tool called Power BI from Microsoft that allows you to do visualizations.

The system from HPE is a hybrid solution. So we can distribute it. Some of it can be in the cloud and some of it can be on-prem. It really depends on what your needs are and how your different systems are already configured. It’s entirely flexible. We can put all of it on-prem, in a single rack or a single appliance or we can distribute it out to the cloud.

One of the great things about the solution that Microsoft and HPE put together is it’s very much a converged system that allows us to bridge on-prem and the cloud together.

Gardner: And of course, Bob, those end users that are doing those queries, that are getting insights, they probably don’t care where it's coming from as long as they can access it, it works quickly, and the costs are manageable.

LeRoy: Exactly.

Gardner: Tell me a little bit about where we take this model next -- clearly healthcare, big demand, huge opportunity to improve productivity through insights, improve outcomes, while also cutting costs.

You also have a retail solution approach in that market, in that vertical. How does that work? Is that already available? Tell us a little bit about why the retail was the next one you went to and where it might go next in terms of industries?

Four major verticals

LeRoy: Sogeti is focused on four major verticals: healthcare, retail, manufacturing, and life sciences. So we are kind of going across where we have expertise.
The healthcare one has been out now for nine months or so. Retail is in a different place. There are point solutions where people have solved part of this equation, but they haven’t really dug deep into understanding how to take it from end to end, which is something that Sogeti has now done. From the point a person walks into a store, we would be alerted through all of these analytics that the person has arrived and could take action on that.

We do what we can to increase our traffic and our sales with individuals and then aggregate all of that data. You're looking at things like customers, inventory, or sales across an organization. That end-to-end piece is something that I think is unique within the retail space.

After that, we're going to go to manufacturing. Everybody likes to talk about the Internet of Things (IoT) today. We're looking at some very specific use cases on how we can impact manufacturing so IoT can help us predict failures right on a manufacturing line. Or if we have maybe heavy equipment out on a job site, in a mine, or something like that, we could better predict when equipment needs to be serviced, so we can maximize the manufacturing process time.

Gardner: Any last thoughts in terms of how people who are interested in this can acquire it? Is this something that is being sold jointly through these three organizations, through Sogeti directly? How is this going to market in terms of how healthcare organizations can either both learn more and/or even experiment with it?

LeRoy: The best way is to search for us online. It's mostly being driven by Sogeti and HPE. Most of the healthcare providers that are also heavy HPE users may be aware of it already, and talking to an HPE rep or to a Sogeti rep is certainly the easiest path to move forward on.

We have a number of videos that are out on YouTube. If you search for Sogeti Labs and Data Driven Decisions, you will certainly find my name and a short video that shows it. And of course sales reps and customers are welcome to contact me or anybody from Sogeti or HP.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

HPE's composable infrastructure sets stage for hybrid market brokering role

Making a global splash at its first major event since becoming its own company, Hewlett Packard Enterprise (HPE) last week positioned itself as a new kind of market maker in enterprise infrastructure, cloud, and business transformation technology.

By emphasizing choice and adaptation in hybrid and composable IT infrastructure, HPE is betting that global businesses will be seeking, over the long term, a balanced and trusted partner -- rather than a single destination or fleeting prescribed cloud model.

HPE is also betting that a competitive and still-undefined smorgasbord of cloud, mobile, data, and API service providers will vie to gain the attention of enterprises across both vertical industries and global regions. HPE can exploit these dynamic markets -- rather than be restrained by them -- by becoming a powerful advocate for enterprises sorting out the complexity of transformation across hybrid, mobile, security, and data analysis shifts.

"The most powerful weapons of competition are now software, data, and algorithms," said Peter Ryan, HPE Senior Vice President and Managing Director for EMEA. "Time to value is your biggest enemy and your biggest opportunity."

HPE led off its announcements at HPE Discover in London with a new product designed to run both traditional and cloud-native applications for organizations seeking the benefits of running a "composable" hybrid infrastructure. [Disclosure: HPE is a sponsor of BriefingsDirect podcasts.]

Based on new architecture, HPE Synergy leverages fluid resource pools, software-defined intelligence, and a unified API to provide the foundation for organizations to continually optimize the right mix of traditional IT and private cloud resources. HPE also announced new partnerships with Microsoft around cloud computing and Zerto for disaster recovery.

HPE Synergy leverages a new architectural approach called Composable Infrastructure, hailed as HPE's biggest debut in a decade. In addition to nourishing dynamic IT service markets and fostering choice, HPE is emphasizing the need to move beyond manual processes for making disparate hybrid services work well together.

The next step for businesses is to "automate and orchestrate across all of enterprise IT," said Antonio Neri, HPE Executive Vice President and General Manager of the company's Enterprise Group, to the 17,000 attendees.

"Market data clearly shows that a hybrid combination of traditional IT and private clouds will dominate the market over the next five years," said Neri. "With HPE Synergy, IT can deliver infrastructure as code and give businesses a cloud experience in their data center."

Composable choice for all apps

Composable Infrastructure via unified APIs allows IT to converge and virtualize assets while leveraging hybrid models, he said. Both developers and IT operators need to access all their resources rapidly and quickly automate their use.

HPE is striving to strike the right balance between the ability to use hybrid models and access legacy resources, while recognizing that the market will continue to rapidly advance and differ widely from region to region. It's a wise brokering role to assume, given the level of confusion and concern among IT leaders.

"What's the right formula for services at the right price with the right SLAs? It's still a work in progress," I told Trevor Jones at SearchCloudComputing at TechTarget just after the conference.

Indeed, HPE will offer a cloud brokerage service in early 2016 for hybrid IT management. HPE Helion Managed Cloud Broker leverages existing HP orchestration, automation, and operations software, and adds a self-service portal, monitoring dashboards, and reports to better support on-premises offerings from VMware, public clouds, and PaaS from Microsoft, Amazon, and others. The service will be available sometime in 2016.

"Cloud brokers can pick and choose the right requirements at the right price for their customers, so there will be a market for those services," I told TechTarget. "I look at it like the systems integrator of cloud computing."

And brokers factor into cloud choice and hybrid choice decisions such variables as jurisdiction, industry verticals, types of workloads and mobile devices. Rather than dictate to enterprise architects what "parts" or services to use, HPE is focusing on the management and repeatability of the services that specific application sets require -- even as that changes over time.

For example, as the interest in software containers grows, HPE will automate their use. New HPE ContainerOS solves two major problems with containers -- security and manageability, said HPE CTO Martin Fink. "Ops can now fall in love with containers just as much as developers," he told the conference audience, adding that virtual machines alone are "highly inefficient."

IoT gets a new edge

In yet another IT area that enterprises need to quickly adjust to, the Internet of Things (IoT), HPE has developed a flexible solution approach. HPE Edgeline servers, part of an Intel partnership, sit at the edge of networks.

"What will make IoT work for business is not devices. It's infrastructure you build to support it," said Robert Youngjohns, Executive Vice President and General Manager, HPE Enterprise Group.


Microsoft partnership

HPE and Microsoft announced new innovation in hybrid cloud computing through Microsoft Azure, HPE infrastructure and services, and new program offerings. The extended partnership appoints Microsoft Azure as a preferred public cloud partner for HPE customers while HPE will serve as a preferred partner in providing infrastructure and services for Microsoft's hybrid-cloud offerings.

The partnering companies will collaborate across engineering and services to integrate innovative compute platforms that help customers optimize their IT environment, leverage new consumption models, and accelerate their business.
As part of the expanded partnership, HPE will enable Azure consumption and services on every HPE server, which allows customers to rapidly realize the benefits of hybrid cloud.

To simplify the delivery of infrastructure to developers, HPE Synergy, for example, has a powerful unified API and a growing ecosystem of partners like Arista, Capgemini, Chef, Docker, Microsoft, NVIDIA, and VMware. The unified API provides a single interface to discover, search, provision, update, and diagnose the Composable Infrastructure required to test, develop, and run code. With a single line of code, HPE's innovative Composable API can fully describe and provision the infrastructure that is required for applications, eliminating weeks of time-consuming scripting.
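As a purely speculative sketch of what "infrastructure as code" against a unified API can look like, the snippet below posts a server-profile request to a OneView-style REST endpoint from Python. The endpoint paths, headers, and payload fields are assumptions for illustration and should be checked against HPE's API documentation rather than taken as the documented Composable API.

```python
# Speculative sketch of provisioning via a unified, OneView-style REST API.
# Endpoint paths, headers, and payload fields are assumptions for illustration.
import requests

BASE = "https://composer.example.org"           # assumed appliance address

# Authenticate and obtain a session token (verify=False only for a lab sketch)
auth = requests.post(f"{BASE}/rest/login-sessions",
                     json={"userName": "administrator", "password": "secret"},
                     verify=False).json()
headers = {"Auth": auth["sessionID"], "X-API-Version": "300"}

# Ask for a new server profile built from a template -- the "single line of code"
# idea is that the template describes compute, storage, and fabric together.
profile = {
    "name": "bigdata-node-01",
    "serverProfileTemplateUri": "/rest/server-profile-templates/example-template-id",
    "serverHardwareUri": "/rest/server-hardware/example-bay-id",
}
resp = requests.post(f"{BASE}/rest/server-profiles", json=profile,
                     headers=headers, verify=False)
print(resp.status_code, resp.headers.get("Location"))
```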

HPE and Microsoft are also introducing the first hyper-converged system with true hybrid-cloud capabilities, the HPE Hyper-Converged 250 for Microsoft Cloud Platform System Standard. Bringing together industry-leading HPE ProLiant technology and Microsoft Azure innovation, the jointly engineered solution brings Azure services to customers' data centers, empowering users to choose where and how they want to leverage the cloud. An Azure management portal enables business users to self-deploy Windows and Linux workloads, while ensuring IT has central oversight.

Building on the success of HPE Quality Center and HPE LoadRunner on the Azure Marketplace, HPE and Microsoft will work together to make select HPE industry-leading application lifecycle management, big-data, and security software products available on the Azure Public Cloud.

HPE also plans to certify an additional 5,000 Azure Cloud Architects through its Global Services Practice. This will extend its Enterprise Services offerings to bring customers an open, agile, more secure hybrid cloud that integrates with Azure.

Disaster recovery with Zerto

Zerto, a disaster recovery provider for virtualized and cloud environments, has achieved gold partnership status with HPE.

The first deliverable out of the partnership is the Zerto Automated Failover Testing Pack. This is the first of several packs that will simplify BC/DR automation using HPE Operations Orchestration (HPE OO) as the master orchestrator. The new automated failover testing capabilities for HPE OO increase IT data center time savings, while improving overall disaster recovery testing compliance.

While the Zerto Automated Failover Testing Pack automatically runs failover tests in full virtual-machine environments, other automated processes eliminate the need to cross-check multi-department failover success, thereby increasing efficiency and productivity for IT teams.

With Zerto Automated Failover Testing Pack, users now simply schedule the failover test in HPE OO. The test runs autonomously and sends a report showing it was a successful test. Failover tests can now run nightly versus annually, providing compliance coverage for customers operating in highly regulated industries such as financial services and healthcare.

With HPE recognizing that global businesses are seeking a long-term, balanced, and trusted partner -- rather than a single destination or fleeting prescribed cloud model -- the 75-year-old company has elevated itself above the cloud fray.

"Real transformation is hard, but it can have amazing benefits," HPE CEO Meg Whitman told the conference.

You may also be interested in: 

Tuesday, December 1, 2015

Nottingham Trent University elevates big data’s role to improving student retention

The next BriefingsDirect big-data case-study interview examines how Nottingham Trent University in England has devised and implemented an information-driven way to encourage higher student retention.

By gathering diverse data and information and rapidly analyzing it, Nottingham Trent is able to quickly identify those students having difficulties. It can thereby significantly reduce dropout rates while learning more about what works best to usher students into successful academic careers.

What’s more, the analysis of student metrics is also setting up the ability to measure more aspects of university life and quality of teaching, and to make valuable evidence-based correlations that may well describe what the next decades of successful higher education will look like.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about taking a new course in the use of data science in education, we're pleased to welcome Mike Day, Director of Information Systems at Nottingham Trent University in Nottingham, UK. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about Nottingham Trent University. It’s a unique institution, with a very large student body, many of them attending university for the first time in their families.

Day: That’s right. We've had around 28,000 students over the last few years, and that’s probably going to increase this year to around 30,000 students. We have, as you say, many, many students who come from poor backgrounds -- what we call "widening participation" students. Many of them are first generation in their family to go to university.

Sometimes, those students are a little bit under-confident about going to university. We’ve come to call them "doubter students," and those doubters are the kinds of people who, when they struggle, believe it’s their fault, and so they typically don't ask for help.

Gardner: So it's incumbent upon you to help them know better where to look for help and not internalize that. What means do you use to identify students who are struggling?

Low dropout rate

Day: We’ve always done very well at Nottingham Trent. We had a relatively low dropout rate, about seven percent or so, which is better than the sector average. Nevertheless, it was really hard for us to keep students on track throughout their studies, especially those who were struggling early in their university career. We tended to find that we had to put a lot of effort into supporting students when they had failed exams, which, for us, was too late.

We needed to try to find a way where we could support our students as early as possible. To do that, we had to identify those students who were finding it a bit harder than the average student and were finding it quite difficult to put their hand up and say so.

So we started to look at the data footprint that a student left across the university, whether that was a smart card swipe to get them in and out of buildings or to use printers, or their use of the library, in particular taking library books out, or accessing learning materials through our learning management system. We wanted to see whether those things would give us some indication as to how well students were engaged in their studies and therefore, whether they're struggling or not.
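To illustrate the idea of turning that data footprint into an engagement signal, here is a minimal sketch assuming a simple event extract of card swipes, library loans, and learning-management-system logins. The weights, thresholds, and column names are illustrative assumptions, not Nottingham Trent's actual model.

```python
# Minimal sketch: build a crude weekly engagement index from assumed event data.
# Weights, thresholds, and column names are illustrative assumptions only.
import pandas as pd

events = pd.read_csv("student_events.csv")
# assumed columns: student_id, week, event_type
# (e.g. "card_swipe", "library_loan", "lms_login")

weekly = (
    events.groupby(["student_id", "week", "event_type"])
          .size()
          .unstack("event_type", fill_value=0)
          .reset_index()
)

# Crude engagement index: weighted sum of normalized activity counts
weights = {"card_swipe": 0.2, "library_loan": 0.4, "lms_login": 0.4}
norm_cols = []
for col, w in weights.items():
    if col in weekly.columns:
        weekly[col + "_norm"] = w * weekly[col] / weekly[col].max()
        norm_cols.append(col + "_norm")

weekly["engagement"] = weekly[norm_cols].sum(axis=1)

# Flag students who fall well below the overall distribution for early outreach
threshold = weekly["engagement"].mean() - 2 * weekly["engagement"].std()
at_risk = weekly[weekly["engagement"] < threshold]
print(at_risk[["student_id", "week", "engagement"]])
```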

Gardner: So this is not really structured information, not something you would go to a relational database for, part of a structured packaged application, for example. It's information that we might think of as breadcrumbs around the organization that you need to gather. So what was the challenge for dealing with such a wide diversity of information types?
Day: We had a very wide variety of information types. Some of it was structured, and we have put a lot of effort into getting good-quality data over the years, but some of it was unstructured. Trying to bring those different and disparate datasets together was proving very difficult to do in very traditional business intelligence (BI) ways.

We needed to know, in about 600 terabytes of data, what really mattered -- which factors in combination told us something about how successful students behave, and therefore how to compare those who were not having such an easy time at the university with those who were succeeding in it.

Gardner: It sounds as if the challenges were not only in the gathering of good information but in how to then use that effectively in drawing correlations that would rapidly point out which students were struggling. Tell us about both the technology side and also the methodologies that you then used to actually create those correlations.

Day: You're absolutely right. It was very difficult to find out what matters and to get the right data for that. We needed ultimately to get to a position where we could create great relationships between people, particularly between tutors or academic counselors and individual students.

On the technology side, we engaged with a partner, a company called DTP SolutionPath, who brought with them the HPE IDOL engine. That allowed us to submit about five years' worth of back data into the IDOL engine to try to create a model of engagement -- in other words, to pick out which factors within that data, in combination, gave us high confidence around student engagement.

Our partners did that. They worked very closely with us in a very collaborative way, with our academic staff, with our students, importantly -- because we have to be really clear and transparent about what we are doing in all of this, from an ethical point of view -- and with my IT technical team. And that collaboration really helped us to boil down what sorts of things really mattered.

Anonymizing details

Gardner: When you look at this ethically you have to anonymize a great deal of this data in order to adhere to privacy and other compliance issues. Is that the case?

Day: Actually, we needed to be able to identify individual students, and so there were very real privacy issues in all of this. We had to check our legal position quite carefully to make sure that we complied with the UK Data Protection Act, but that’s only a part of it.

What’s acceptable to the organization and ultimately to individual students is perhaps even more important than the strict legal position in all of this. We worked very hard to explain to students and staff what we were trying to do and to get them on board early, at the beginning of this project, before we had gone too far down the track, to understand what would be acceptable and what wouldn’t.

Gardner: I suppose it’s important to come off as a big brother and not the Big Brother in this?

Day: Absolutely. Friendly big brother is exactly what we needed to be. In fact, we found that how we engaged with our student body was really important in all of this. If we tried to explain this in a technical way, then it was very much Big Brother. But when we started to say, "We're trying to give you the very best possible support, such that you are most likely to succeed in your time in higher education and reap the rewards of your investment in higher education," then it became a very different story.

Particularly, when we were able to demonstrate the kind of visualizations of engagement to students, that shifted completely, and we've had very little, if any, problems with ethical concerns among students.

Gardner: It also seems to me that the stakes here are rather high. It's hard to put a number on it, but for a student who might struggle and drop out in their first months at university, it means perhaps a diminished potential for them over their lifetime of career, monetization of income, and contribution to society, and so forth.

So for thousands of students, this could impact them over the course of a generation. This could be a substantial return on investment (ROI), to be a bit crass and commercial about it.

Day: If you take all of this from the student’s perspective, clearly students are investing significant amounts of money in their education.

In the UK, that’s £9,000 (USD $13,760) a year at the moment, plus the accommodation costs, and the cost of not getting a job early, and all of those sorts of things that those students put into to invest in their early university career. To lose that means that they come out of the university experience being less positive than it could have been, with much, much lower earning potential over their lifetime.
That also has an impact on UK PLC, in that it isn’t perhaps generating as many skilled individuals as it might. That has implications for tax returns, and also from a university point of view. Clearly, if our students drop out, they aren’t paying their fees, and those slots are now empty. In terms of university efficiency, there was also a problem. So everybody wins if we can keep students on course.

On the journey

Gardner: Certainly a worthy goal. Tell us a little bit about where you are now? I think we have the vision. I think we understand the stakes and we understand some of the technologies we’ve employed. Where are you on this journey? Then, we can talk about so far what some of the results have been.

Day: It was very quick to get to a point where the technology was giving us the right kinds of answers. In about two to three months, we got to a position where the technology was pretty much done, but that was only really part of the story. We really needed to look at how it impacted our practice in the university.

So we started to run a series of pilots across a set of courses. We did that over the course of a year, about 18 months ago, and we looked at every aspect of academic support for students and how this might change it. If we see that a student is disengaging from their studies -- and we can now see that about a month or two earlier than we otherwise would have been able to -- we can have a very early conversation about what the problem might be.

In more than 90 percent of the cases that we have seen so far, those early conversations result in an immediate upturn in student engagement. We’ve seen some very real tangible results and we saw those very early on.

We expected that it would take us a considerable amount of time to demonstrate that the system would give us value at an institutional level, but actually it didn't. It took only about six months into the pilot period -- for which we had set a year aside -- to get to a position where we were convinced, as an institution, to roll it out across the whole university. We did that at the beginning of this academic year, about six months earlier than we had thought.

We've now had another year of thinking about what good practice is, and we're seeing academic tutors starting to share good practice among themselves. So there is a good conversation going on there. A much, much more positive relationship between those academic tutors and the students is being reported by both the students and the tutors, and we see that as very positive.

Importantly, there is also a dialogue going on between students themselves. We've started to see students competing with each other to be the best engaged in their course. That’s got to be a good thing.

Gardner: And how would they measure that? Is there some sort of a dashboard or visualization that you can provide to the students, as well as perhaps other vested interests in the ecosystem, so that they can better know where they are, where they stand?

Day: There absolutely is. The system provides a dashboard that gives a very simple visualization. It’s two lines on a chart. One of those lines is the average engagement of the cohort on a course-by-course basis. The other line is the individual student’s engagement compared to that average engagement in the course -- in other words, comparing them with their peers.

We worked very hard to make that visualization simple, because we wanted that to be consistent. It needed to be something that prompted a conversation between tutors and students, and tutors sharing best practice with other tutors. It's a very simple visualization.
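A rough sketch of that two-line view follows, assuming a weekly engagement extract with the columns shown; the data file and student ID are hypothetical.

```python
# Sketch of the two-line engagement chart described above. The CSV file,
# its columns, and the student ID are assumptions for illustration.
import pandas as pd
import matplotlib.pyplot as plt

weekly = pd.read_csv("weekly_engagement.csv")   # assumed columns: student_id, week, engagement
student_id = "S001234"                          # hypothetical student

cohort_avg = weekly.groupby("week")["engagement"].mean()
student = (weekly[weekly["student_id"] == student_id]
           .set_index("week")["engagement"].sort_index())

plt.plot(cohort_avg.index, cohort_avg.values, label="Course cohort average")
plt.plot(student.index, student.values, label=f"Student {student_id}")
plt.xlabel("Week of term")
plt.ylabel("Engagement index")
plt.title("Student engagement vs. cohort average")
plt.legend()
plt.show()
```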

Sharing the vision

Gardner: Mike, it strikes me that other institutions of higher learning might want to take a page from what you've done. Is there some way of you sharing this or packaging it in some way, maybe even putting your stamp and name and brand on it? Have you been in discussions with other universities or higher education organizations that might want to replicate what you’ve done?

Day: Yes, we have. We're working with our supplier SolutionPath, who have now created a model that can be replicated in other universities. It starts with a readiness exercise, because this is mostly not about technology. It's about how ready you are, as an organization, to address things like privacy and ethics in all of this. We've worked very closely with them on that.

We’ve spoken to two dozen universities already about how they might adopt something similar, not necessarily exactly the same solution. We've also done some work across the sector in the UK with the Joint Information Systems Committee, which looks at technology across all 150 universities in the UK.
Gardner: Before we close out, I'm curious. When you’ve got the apparatus and the culture in the organization to look more discretely at data and draw correlations about things like student attainment and activities, it seems to me that we're only in the opening stages of what could be a much more data-driven approach to higher education. Where might this go next?

Day: There’s no doubt at all that this solution has worked in its own right, but what it actually formed is a kind of bridgehead, which will allow us to take the principles and the approach that we have taken with this specific solution and apply them to other aspects of the university's business.

For example, we might be able to start to look at which students might succeed on different courses across the university, perhaps regardless of traditional ways of recruiting students through their secondary school education qualification. It's looking at what other information might be a good indicator of success in a course.

We could start looking at the other end of the spectrum. How do students make their way into the world of work? What kinds of jobs do they get? Is this something about linking, right at the beginning of a student’s university career -- perhaps even at the application stage -- to the kinds of careers they might succeed in, and trying to advise them early about the sorts of things they might want to get involved with and engaged with? There’s a whole raft of things that we can start to think about.

Research is another area where we might be able to think about how data helps us, what kind of research might we best be able to engage in, and so on and so forth.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.
 

You may also be interested in:

Monday, November 30, 2015

Forrester analyst Kurt Bittner on the inevitability of DevOps

Businesses today want to deliver software improvements at weekly and even daily intervals, especially in SaaS environments, for mobile apps, and for cloud-based workloads. Yet those kinds of delivery speeds are inconceivable with any kind of manual software development processes.

As competitive organizations move away from quarterly software releases to faster releases, they are being forced to face the inevitable adoption of DevOps processes and efficiencies.

The next BriefingsDirect thought leadership discussion therefore explores the building interest in DevOps -- of making the development, test, and ongoing improvement in software creation a coordinated, lean, and proficient process for enterprises.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

BriefingsDirect sat down with a prominent IT industry analyst, Kurt Bittner, Principal Analyst, Application Development and Delivery at Forrester Research, to explore why DevOps is such a hot topic, and to identify steps that successful organizations are taking to make advanced applications development a major force for business success. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let’s start by looking at the building interest in DevOps. What’s driving that? 

Bittner: It’s essentially the end-user or client organizations, as they face increasing pressure from competition and increasing expectations from customers to deliver functionality faster.

I was at a dinner the other night, and there were half a dozen or so large banks there. They were all saying, to my surprise, that they didn’t feel like they were competing with one another, but that they felt like they were competing with companies like Apple, Google, PayPal, and increasingly startup companies. Square is a good example, too.

They're getting into the payment mechanism, and that’s siphoning off business from the banks. The banks are beginning to see drops in their own bottom lines because of the competition from ... software companies. You see companies like Uber having a big impact on traditional taxi companies and transportation.

Increasing competition

So it’s essentially increasing competition, driven by increasing customer expectations. We're all part of that as consumers, where we're gravitating toward our mobile smartphones. We're increasingly interacting with companies through mobile devices.

Delivering new functionality through mobile experiences, through cloud experiences, through the web, through various kinds of payment mechanisms -- all of these things contribute to the need to deliver services much faster.

Startup companies get this and they're already adopting these techniques in large numbers. What we're finding is that traditional companies are increasingly saying, "We have to do this. This a competitive threat to us." Like Blockbuster Video, they may cease to exist if they don’t.

Gardner: Companies like Apple or Uber probably define themselves as being technology companies. That’s what they do. Software is a huge part of what makes them a successful company. It defines them. What is it that DevOps brings to the table for them and others?

Bittner: DevOps optimizes the software delivery pipeline, all the steps that you have to go through between when you have an idea and when a customer starts benefiting from that idea. In the traditional delivery processes, you have lots of hand-offs, lots of stops and starts. You have relatively inefficient processes, and it can take months -- and sometimes years -- to go from idea to having somebody get a benefit.

With DevOps, we're reducing the size of the things you're delivering, so you can deliver more frequently. Then, you can eliminate hand-offs and inefficiencies in the delivery process, so that you can deliver it as fast as possible with higher quality.

Gardner: And what was broken? What needs to be fixed? Wasn’t Agile supposed to fix this?

Bittner: Agile is part of the solution, but many Agile teams find that they'd like to be more agile. They're held back by lack of testing environments. They're held back by lack of testing automation. They're held back by lack of deployment automation. They, themselves, have lots of barriers.
So, Agile is part of the solution in the sense of involving the business more on a day-to-day basis in the project decision-making. It also provides the ability to break a problem down into smaller increments, and at least demonstrate in smaller increments, but it doesn’t actually deliver into production in smaller increments.

Other capabilities

You need to have other capabilities to do that. One illustration of how DevOps helps to accelerate Agile came in talking to a large manufacturing organization that was making the transition to Agile.

They had a problem in that they weren't able to get development or test environments for months. IT operations processes had been set up in a very siloed way. Development and testing environments got low priority when other things were going on.

So, as much as the team wanted to work in an Agile way, they couldn’t get a rapid test environment. In effect, they were completely stopped from any forward progress. There's only so much you can do on a developer workstation.

These DevOps practices benefit Agile as well, by enabling Agile to really fully realize the promise that it’s had.

Gardner: Is there a change in philosophy, too, Kurt, where software is released before it's really cooked and let the environment, the real world, be their test bed, their simulation if you will? And then they do rapid iterations? Are we going to begin seeing that now, as DevOps gains ground in established traditional enterprises?

Bittner: You're right. There is a tendency toward getting functionality out there, seeing what the market says about it, and then improving. That works in certain areas. For example, Google has an internal motto that says if you're not somewhat embarrassed by your first release, you didn’t move fast enough.

But we also have to realize that we have software in our automobiles and in our aircraft, and you don’t want to put something out there into those environments that’s basically not functional.

I separate the measures of quality from measures of aesthetic qualities. The software that gets delivered early has to be high-quality. It can’t be buggy. It has to work and satisfy a certain set of needs. But there's a wide variety of variability on whether people will like it or not or whether people will use it or not.

So when organizations are delivering quickly and getting feedback from the market, they're really getting feedback on things like usability and aesthetics and not necessarily on some critical business-processing capability. Or take the software in the anti-lock braking system (ABS) in your car. You don’t want that to fail, but you might be very interested in how the climate-control system works.

That may be subject to wide variation. To get better fuel efficiency, you may be willing to sacrifice something in the air conditioner. So, it's largely driving feedback on non-safety-critical features. That's where most organizations are focused.

More feedback

Gardner: You mentioned feedback. That seems to be a core aspect of DevOps -- more feedback between operations, the real world, the use of software, and the development and test process. How do we compress that feedback loop -- not only for user experience, but also for data coming out of an embedded system, for example -- so that we can improve?

Bittner: If you think about what traditional application releases do, they tend to bundle a lot of different features into a single release. If you think about this from a statistical perspective, that means you have a lot of independent variables. You can’t tell when something improves. You can’t tell why it improved, because you have so many variables in there.

In the feedback loop with DevOps, you want to make the increment of releases as small as possible, basically one thing at a time, and then measure the result from that, so you know whether your results improved because of that one single feature.

The other thing is that we start to shift toward a more outcome-oriented software release. You're not releasing features, but you're doing things that will change a customer’s outcome. If it doesn’t change a customer’s outcome, the customer doesn’t really care.

So by having the increment of a release be one outcome at a time, and then measuring the result from that, you get the capabilities out there as quickly as possible. Then you can tell whether you actually improved because of what you just did. If you didn’t improve, then you stop doing that and do something else.
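
To illustrate that one-change-at-a-time idea, here is a minimal, hypothetical Python sketch (not part of the interview) of releasing a single change behind a feature flag and comparing one outcome metric before and after; the flag name and metric values are placeholders, and a real system would use a feature-flag service and an analytics pipeline rather than in-memory lists.

    # Illustrative sketch: ship one change behind a flag, then compare a
    # single outcome metric before and after enabling it.
    from statistics import mean

    feature_flags = {"new_checkout_flow": False}  # hypothetical flag

    def checkout(user_id: int) -> float:
        """Return an outcome score for one user session (stubbed)."""
        if feature_flags["new_checkout_flow"]:
            return 0.42  # stubbed conversion signal with the new flow
        return 0.35      # stubbed conversion signal with the old flow

    baseline = [checkout(u) for u in range(100)]   # measure before the change
    feature_flags["new_checkout_flow"] = True      # release exactly one change
    candidate = [checkout(u) for u in range(100)]  # measure after the change

    print(f"baseline={mean(baseline):.2f} candidate={mean(candidate):.2f}")
    # Because only one variable changed, any movement in the metric can be
    # attributed to that single feature; if it doesn't improve, stop doing it.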

Gardner: Is that what you mean by continuous delivery, these iterative small parts, rather than the whole big dump every six to 12 months?

Bittner: That’s a big part of it. Continuous delivery is also, more precisely, a process by which you make small changes. You optimize the delivery cycle, removing waste and hand-offs to make that as fast as possible with a high degree of automation, so that you can get out there and get the feedback as quickly as possible.

So, it’s a combination. It's not just fast delivery, but a number of techniques used to improve that delivery.

Gardner: Folks listening and reading this might very well like the idea of DevOps: "I'd like to do DevOps; where do I buy it?" DevOps, though, isn't really a product, a box, or a download. It's a way of thinking and a methodological approach. How do people go about implementing DevOps? Where do you start?
Bittner: You’re right. It's more of a philosophy than a product. It’s not even really a product category, but a set of different products and processes with, to some degree, a philosophy behind them. When we talk to organizations that have implemented this successfully, there are a couple of patterns.

First of all, you don't implement DevOps across an entire organization all at once. It tends to happen product by product, team by team. It happens first in the applications that are very customer-facing, because that's where the most pressure is right now. That’s where the biggest benefit is. So, on a team-by-team basis, first of all, you have to have some executive mandate to make a change. Somebody has to feel like this is important enough to the company.

While developers, engineers, and IT Ops people can be passionate about this, it typically requires executive leadership to get this to happen, because these changes cut across traditional organizational silos. Without some executive sponsorship, these initiatives tend not to go very far.

The first step -- and this is a fairly mundane area -- tends to be changing the way that environments are provisioned. That includes getting environments provisioned on demand, using techniques like infrastructure-as-code to automatically generate environments based on configuration settings, so that you can have an environment anytime you need it. That removes a lot of friction and a lot of delays.
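
As a rough illustration of that infrastructure-as-code idea, the following hypothetical Python sketch (not from the interview, and not tied to any specific tool) treats environments as configuration data that can be rendered into a provisioning request on demand; the environment names, sizes, and output format are assumptions.

    # Illustrative infrastructure-as-code sketch: environments are described
    # as data, so identical environments can be generated whenever needed.
    import json

    ENVIRONMENTS = {
        "test":       {"instances": 1, "size": "small",  "db": "postgres:13"},
        "staging":    {"instances": 2, "size": "medium", "db": "postgres:13"},
        "production": {"instances": 4, "size": "large",  "db": "postgres:13"},
    }

    def render_environment(name: str) -> str:
        """Turn a configuration entry into a provisioning request (here, JSON)."""
        spec = ENVIRONMENTS[name]
        return json.dumps({"environment": name, **spec}, indent=2)

    if __name__ == "__main__":
        # A team that needs a test environment gets one from the same
        # definition every time, instead of waiting on a manually built one.
        print(render_environment("test"))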

The second thing that tends to be implemented is continuous integration and then, after that, test automation based on APIs. There's a shift to APIs and an integrated architecture for the applications, and then usually deployment automation comes after that. Once you have environments provisioned and code that you can put into those environments, you need a way to move that code between environments.
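
For the API-based test automation step, a CI job might run something like the following hypothetical Python sketch (not from the interview); the service URL and expected response shape are assumptions for illustration, and a real pipeline would typically use a test framework such as pytest.

    # Illustrative sketch of an API-level check a CI job could run on every
    # commit; a non-zero exit code fails the pipeline stage.
    import json
    import sys
    import urllib.error
    import urllib.request

    BASE_URL = "http://localhost:8080"  # assumed address of the service under test

    def test_health_endpoint() -> bool:
        """Call the service's health API and check status code and payload."""
        try:
            with urllib.request.urlopen(f"{BASE_URL}/health", timeout=5) as resp:
                status = resp.status
                body = json.loads(resp.read())
        except (urllib.error.URLError, ValueError):
            return False
        return status == 200 and body.get("status") == "ok"

    if __name__ == "__main__":
        ok = test_health_endpoint()
        print("API tests passed" if ok else "API tests failed")
        sys.exit(0 if ok else 1)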

As you make those changes, you start to run into organizational barriers, silos in the organization, that prevent effectively working together. There's too much wait-time when people are assigned to multiple projects or multiple applications.

There's a shift in team structure to become more product-oriented, with resources dedicated to a product, so that you can release, and do release after release, most effectively. That tends to break the organizational silos down and start shifting to a more product-centric organization and away from a functionally oriented organization.

All of those changes together typically take years, but it usually starts with some sort of executive mandate, then environment provisioning, and so on.

Management capability

Gardner: It sounds, too, that it's important to have better management capabilities across these silos -- with metrics, dashboards, validating efforts, of being able to measure discretely what's going on, and then reinforce the good and discard the bad.

Are there any particular existing ways of doing that? I'm thinking about the longstanding application lifecycle management (ALM) marketplace. Does that lend itself to DevOps? Should we start from scratch and create a new management layer, if you will, across the whole continuum of software design, test, and delivery?

Bittner: It’s a little bit of both. DevOps is really an outgrowth of ALM, and all of the aspects of ALM are there. You need to be able to manage the work, track the work, and determine what work got done. In addition to that, you’re adding automation in the areas that I was just describing: environment provisioning, continuous integration, test automation, and deployment automation.

There's another component that becomes really important, because out of those applications, you want to start gathering customer experience data. So things like operational and application analytics are important to start measuring the customer experience.

Combining all of those into a single view, single dashboard is evolving now. The ALM tools are evolving in that direction, and there are ways of visualizing that. But right now it tends to be a multi-vendor ecosystem. You don’t find one DevOps suite from one company that provides everything.

But the good news is that the same thing that’s been happening in the rest of the industry around services and interoperability has happened in applications. We have a high degree of interoperability between tools from different vendors today that allows you to customize this delivery pipeline to give you the DevOps capability.

Gardner: It seems that, in some ways, the prominence of hybrid cloud models, mobile, and mobile-first thinking, when it comes to development, are accelerants to DevOps. If you have that multi-cloud goal, you're going to want to standardize your production environment. Hence, also, the interest in containers these days. And, of course, mobile-first forces you to think about user experience and small, iterative apps, rather than applications. Do you see an acceleration from these other trends reinforcing DevOps?
Bittner: It’s both reinforcing it and, to some degree, causing it, because it's mobile that’s triggered this explosion and the need for DevOps -- the need for faster delivery. To a large degree, the mobile application is the proverbial tip of the iceberg. Very few mobile applications stand alone. They all have very rich services running behind them. They have systems of record providing the data. Virtually every mobile application is really a composite application with some parts in the cloud and some parts in traditional data centers.

The development across all of those different code lines and the coordination of releases across all those different code lines really requires the DevOps approach to be able to do that successfully.

Demand and complexity

So it's both demand created by higher customer expectations from mobile customers, but also the complexity of delivering these applications in a really rapid way across all those different platforms. You made an interesting point about cloud and containers being both drivers for demand and also enablers, but they're also changing the nature of the work.

As containers and microservices become more prevalent -- we’re seeing growth in those areas -- it's increasing the complexity of application delivery. It simplifies the deployment, but it increases the complexity. Now, instead of having to coordinate dozens of moving parts, you have to coordinate hundreds and, we think, in the future, thousands of moving parts. That's well beyond what somebody can do with spreadsheets and manual management techniques.
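
To give a sense of that scale, the following hypothetical Python sketch (not from the interview) generates a release manifest for a couple of hundred assumed services; the service names, versions, and registry address are placeholders.

    # Illustrative sketch: when an application is split into many services,
    # the release is coordinated as data rather than by hand.
    services = {f"service-{i:03d}": f"1.{i}.0" for i in range(1, 201)}  # 200 parts

    def release_manifest(catalog: dict) -> list:
        """Produce one deployable record per service for this release."""
        return [f"{name} -> registry.example.com/{name}:{version}"
                for name, version in sorted(catalog.items())]

    manifest = release_manifest(services)
    print(f"{len(manifest)} services in this release, e.g. {manifest[0]}")
    # At this scale, tracking versions and dependencies in a spreadsheet
    # breaks down; generating and checking the manifest automatically does not.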

The other thing is that cloud simplifies environment provisioning tremendously, and it provides this great elastic infrastructure for deploying applications. But it also simplifies it by standardizing environments, making it all software-configurable. It's a tremendous benefit to delivering applications faster, and it gives you much more flexibility than traditional data-center applications. There's definitely movement toward those kinds of applications, especially for DevOps.

Gardner: When I heard you mention the complexity, it certainly sounds like automating and moving away from manual processes, standardizing processes across your development test-to-deploy continuum, would be really important steps to take.

Bittner: Absolutely. I would say more than important -- it's absolutely essential. Without automation and data-driven visibility into what's happening in the applications, there's almost no way to deliver these applications at speed. We find that many organizations are releasing quarterly now -- not necessarily the same app every quarter, but they have a quarterly release cycle. At quarterly rates of speed, through seat-of-the-pants effort and brute force, you can manage to get that release out. It's pretty painful, but you can survive.

If you turn up the clock rate faster than that and try to get down to monthly, those manual processes completely fall apart. We have organizations today that want to be delivering at weekly and daily intervals, especially in SaaS-based environments or cloud-based environments. Those kinds of delivery speeds are inconceivable with any kind of manual processes. As organizations move away from quarterly releases to faster releases, they have to adopt these techniques.

Gardner: Listening to you Kurt, it sounds like DevOps isn't another buzzword or another flashy marketing term. It really sounds inevitable, if you're going to succeed in software.

Bittner: It is inevitable, and over the next five years, what we’ll see is that the word itself will probably fade, because it will simply become the way that organizations work.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in: