Friday, January 6, 2017

How lastminute.com uses machine learning to improve travel bookings user experience

The next BriefingsDirect Voice of the Customer digital transformation case study highlights how online travel and events pioneer lastminute.com leverages big-data analytics with speed at scale to provide business advantages to online travel services.

We'll explore how lastminute.com manages massive volumes of data to support cutting-edge machine-learning algorithms to allow for speed and automation in the rapidly evolving global online travel research and bookings business.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn how a culture of IT innovation helps make highly dynamic customer interactions for online travel a major differentiator, we're joined by Filippo Onorato, Chief Information Officer at lastminute.com group in Chiasso, Switzerland. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Most people these days are trying to do more things more quickly amid higher complexity. What is it that you're trying to accomplish in terms of moving beyond disruption and being competitive in a highly complex area?
Onorato: The travel market -- and in particular the online travel market -- is a very fast-moving market, and the habits and behaviors of the customers are changing so rapidly that we have to move fast.

Disruption is coming every day from different actors ... [requiring] a different way of constructing the customer experience. In order to do that, you have to rely on very big amounts of data -- just to study the evolution of customers and their behaviors.

Gardner: And customers are more savvy; they really know how to use data and look for deals. They're expecting real-time advantages. How is the sophistication of the end user impacting how you work at the core, in your data center, and in your data analysis, to improve your competitive position?

Onorato: Once again, customers are normally looking for information, and providing the right information at the right time is a key to our success. The brands we came from were called Bravofly and, in Italy, Volagratis; that means "free flight." The competitive advantage we have is to provide a comparison among all the different airline tickets, as the market changes rapidly from standard airline behavior to the low-cost carriers. Customers are eager to find the best deal, the best price for their travel requirements.

So, the ability to construct their customer experience in order to find the right information at the right time, comparing hundreds of different airlines, was the competitive advantage we made our fortune on.

Gardner: Let’s edify our listeners and readers a bit about lastminute.com. You're global. Tell us about the company and perhaps your size, employees, and the number of customers you deal with each day.

Most famous brand

Onorato: We are 1,200 employees worldwide. Lastminute.com, the most famous brand worldwide, was acquired by the Bravofly Rumbo Group two years ago from Sabre. We own Bravofly, the original brand. We own Rumbo, which is very popular in Spanish-speaking markets. We own Volagratis, the original brand in Italy. And we own Jetcost, which is very popular in France; that is actually a metasearch, a combination of search and competitive comparison across all the online travel agencies (OTAs) in the market.

We span across 40 countries, we support 17 languages, and we help almost 10 million people fly every year.

Gardner: Let’s dig into the data issues here, because this is a really compelling use-case. There's so much data changing so quickly, and sifting through it is an immense task, but you want to bring the best information to the right end user at the right time. Tell us a little about your big-data architecture, and then we'll talk a little bit about bots, algorithms, and artificial intelligence.

Onorato: The architecture of our system is pretty complex. On one side, we have to react almost instantly to the searches that customers are doing. We have a real-time platform that grabs information from all the providers -- airlines, other OTAs, hotel providers, bed banks, or whatever.

We concentrate all this information in a huge real-time database, using a lot of caching mechanisms, because the speed of the search, the speed of giving results to the customer, is a competitive advantage. That's the real-time part of our development, and it constitutes the core business of our industry.
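
To make the caching idea concrete, here is a minimal sketch in Python of a time-to-live (TTL) cache sitting in front of slow provider lookups. All names here are hypothetical illustrations; lastminute.com's actual platform is not described at this level of detail.

```python
import time

class TTLCache:
    """Tiny time-to-live cache: fare results stay valid only briefly,
    so repeated searches are fast while stale prices expire."""
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (expiry_timestamp, value)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        expiry, value = entry
        if time.time() > expiry:  # quote too old -- drop it
            del self.store[key]
            return None
        return value

    def put(self, key, value):
        self.store[key] = (time.time() + self.ttl, value)

cache = TTLCache(ttl_seconds=60)

def search_fares(origin, dest, date, fetch_from_providers):
    """Serve from cache when possible; otherwise query the (slow)
    providers and cache the result for the next identical search."""
    key = (origin, dest, date)
    results = cache.get(key)
    if results is None:
        results = fetch_from_providers(origin, dest, date)
        cache.put(key, results)
    return results
```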

Gardner: And this core of yours, these are your own data centers? How have you constructed them and how do you manage them in terms of on-premises, cloud, or hybrid?

Onorato: It's all on-premises, and this is our core infrastructure. On the other hand, the data gathered from interactions with the customer is only partially captured today. This is the big challenge for the future -- having all that data stored in a data warehouse, where it's used to build our internal knowledge of the sales funnel.

So, we track the behavior of the customer and the percentage of conversion at each and every step the customer takes, from the search to the actual booking. That data is gathered together in a data warehouse based on HPE Vertica and then analyzed in order to find the best places to optimize conversion. That's the main usage of the data warehouse.

On the other hand, what we're adding on top of all this enormous amount of data is session-related data. You can imagine how much data a single customer interaction can generate. Right now, we're storing a short history of that data, but the goal is to have two years' worth of session data. That would be an enormous amount of data.
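
As a rough illustration of the funnel analysis described above, here is a Python sketch that computes the share of sessions reaching each step from search to booking. The step names are hypothetical; the actual warehouse schema isn't discussed in the interview.

```python
from collections import Counter

# Hypothetical ordered funnel steps, from first search to final booking.
FUNNEL_STEPS = ["search", "results_viewed", "flight_selected", "checkout", "booking"]

def funnel_conversion(sessions):
    """sessions: iterable of sets of event names, one set per user session.
    Returns the share of sessions reaching each step."""
    reached = Counter()
    total = 0
    for events in sessions:
        total += 1
        for step in FUNNEL_STEPS:
            if step in events:
                reached[step] += 1
            else:
                break  # funnel is ordered: stop at the first missing step
    return {step: reached[step] / total for step in FUNNEL_STEPS}

example = [
    {"search", "results_viewed", "flight_selected"},
    {"search", "results_viewed", "flight_selected", "checkout", "booking"},
    {"search"},
]
print(funnel_conversion(example))  # 'search': 1.0 ... 'booking': 0.33
```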

Gardner: And when we talk about data, often we're concerned about velocity and volume. You've just addressed volume, but velocity must be a real issue, because any weather event in Europe, for example, or a glitch in a computer system at one airline in North America, changes all of these travel data points instantly.

Unpredictable events

Onorato: That's also pretty typical of the tourism industry. It's a very delicate business, because we have to react to unpredictable events that are happening all over the world. In order to better optimize margin, search results, and so on, we're also applying some machine-learning algorithms, because a human can't react so fast to the ever-changing market or situation.

In those cases, we use optimization algorithms to fine-tune our search results, to better handle a customer's request, and to propose the best deal at the right time. In very simple terms, that's our core business right now.

Gardner: And Filippo, only your organization can do this, because the people with the data on the back side can’t apply the algorithm; they have only their own data. It’s not something the end user can do on the edge, because they need to receive the results of the analysis and the machine learning. So you're in a unique, important position. You're the only one who can really apply the intelligence, the AI, and the bots to make this happen. Tell us a little bit about how you approached that problem and solved it.
Onorato: I perfectly agree. We are the collector of an enormous amount of product-related information on one side. On the other side, what we're collecting are the customer behaviors. Matching the two is unique for our industry. It's definitely a competitive advantage to have that data.

Then, what you do with all that data is something that pushes us to continuous innovation and continuous analysis. By the way, I don't think anything like this can be implemented without a lot of training and a lot of understanding of the data.

Just to give you an example, we're implementing a machine-learning algorithm called the multi-armed bandit, a kind of parallel testing of different configurations of parameters that are presented to the final user. The algorithm reacts to a specific set of conditions and proposes the best combination of ordering, visibility, pricing, and so on to the customer in order to satisfy their search.
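
For readers unfamiliar with the technique, here is a minimal epsilon-greedy multi-armed bandit in Python. It is a generic sketch, not lastminute.com's production algorithm; the configuration names and the booking-based reward are illustrative assumptions.

```python
import random

class EpsilonGreedyBandit:
    """Epsilon-greedy multi-armed bandit: mostly exploit the
    best-performing configuration, occasionally explore the others."""
    def __init__(self, arms, epsilon=0.1):
        self.arms = arms                # e.g. result-ordering configurations
        self.epsilon = epsilon
        self.pulls = {a: 0 for a in arms}
        self.rewards = {a: 0.0 for a in arms}

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(self.arms)          # explore
        return max(self.arms, key=self.mean_reward)  # exploit

    def update(self, arm, reward):
        """reward: 1.0 if the user booked, 0.0 otherwise."""
        self.pulls[arm] += 1
        self.rewards[arm] += reward

    def mean_reward(self, arm):
        return self.rewards[arm] / self.pulls[arm] if self.pulls[arm] else 0.0

bandit = EpsilonGreedyBandit(["sort_by_price", "sort_by_duration", "sort_blended"])
config = bandit.choose()           # pick a configuration for this search
bandit.update(config, reward=1.0)  # the user converted -> reinforce that arm
```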

What we really do in that case is to grab information, build our experience into the algorithm, and then optimize this algorithm every day, by changing parameters, by also changing the type of data that we're inputting into the algorithm itself.

So, it's an ongoing experience; it's an ongoing study. It's endless, because the market conditions are changing and the actors in the market are changing as well -- from the traditional operators of the past, the airlines, to the OTAs of today. We're also a metasearch, aggregating products from different OTAs. So, there are new players coming in, and they're always coming closer and closer to the customer in order to grab information on customer behavior.

Gardner: It sounds like you have a really intense culture of innovation, and that's super important these days, of course. As we heard at the HPE Big Data Conference 2016, the feedback-loop element of big data is now really taking precedence. We have the ability to manage the data, to find the data, and to put the data in a useful form, and we're finding new ways to use it. It seems to me that the more people use your websites, the better that algorithm gets, the better the insight to the end user, and therefore the better the result and user experience. And it never ends; it always improves.

How does this extend? Do you take it to now beyond hotels, to events or transportation? It seems to me that this would be highly extensible and the data and insights would be very valuable.

Core business

Onorato: Correct. The core business was initially the flight business. We were born by selling flight tickets. Hotels and pre-packaged holidays were the second step. Then, we provided information about lifestyle. For example, in London we have an extensive offering of theater, events, and shows that are aggregated.

Also, we have a smaller brand for restaurants. We offer car rental. We're also giving value-added services to the customer, because the journey of the customer doesn't end with the booking. It continues throughout the trip, and we provide information regarding check-in; web check-in is a service that we provide. There are a lot of ancillary businesses that are making the overall travel experience better, and that's the goal for the future.

Gardner: I can even envision where you play a real-time concierge, where you're able to follow the person through the trip and be available to them as a bot or a chat. This edge-to-core capability is so important, and that big data feedback, analysis, and algorithms are all coming together very powerfully.

Tell us a bit about metrics of success. How can you measure this? Obviously a lot of it is going to be qualitative. If I'm a traveler and I get what I want, when I want it, at the right price, that's a success story, but you're also filling every seat on the aircraft or you're filling more rooms in the hotels. How do we measure the success of this across your ecosystem?

Onorato: In that sense, we're probably a little bit farther away from the real product, because we're an aggregator. We don’t have the risk of running a physical hotel, and that's where we're actually very flexible. We can jump from one location to another very easily, and that's one of the competitive advantages of being an OTA.

But the success overall right now is giving the best information at the right time to the final customer. What we're measuring right now is definitely the voice of the customer, the voice of the final customer, who is asking for more and more information, more and more flexibility, and the ability to live an experience in the best way possible.
So, we're also providing a brand that is associated with wonderful holidays, having fun, etc.

Gardner: The last question, for those who are still working on building out their big-data infrastructure, trying to attain this cutting-edge capability and start to take advantage of machine learning, artificial intelligence, and so forth: If you could do it all over again, what would you tell them? What would be your advice to somebody who is in the earlier stages of their big-data journey?

Onorato: It is definitely based on two factors -- having the best technology and not always trying to build your own technology, because there are a lot of products in the market that can speed up your development.

And also, it's having the best people. The best people are one of the competitive advantages of any company that runs this kind of business. You have to rely on fast learners, because market conditions are changing, technology is changing, and people need to train themselves very fast. So, you have to invest in people and invest in the best technology available.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Monday, December 19, 2016

Veikkaus digitally transforms as it emerges as new combined Finnish national gaming company

The next BriefingsDirect Voice of the Customer digital transformation case study highlights how the newly combined Finnish national gaming company, Veikkaus, is managing a complex merger process while also bringing more of a digital advantage to both its operations and business model. We'll now explore how Veikkaus uses a powerful big-data analytics platform to respond rapidly to the challenges of digitization.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.
 
To learn how a culture of IT innovation is helping to establish a single wholly nationally-owned company to operate gaming and gambling in Finland, we're joined by Harri Räsänen, Information and Communications Technology Architect at Veikkaus in Helsinki. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Why has Veikkaus reinvented its data infrastructure technology?

Räsänen: Our data warehouse solution was a traditional data warehouse, and had been around for 10 years. Different things had gone wrong. One of the key issues we faced was that our data wasn’t real-time. It was far from real time -- it was data that was one or two days old.

We decided that we needed to get data quicker and in more detail, because until then we had only aggregate data.

Gardner: What were some of your top requirements technically in order to accomplish that?

Real-time data

Räsänen: As I said, we had quite an old-fashioned data warehouse. Initially, we needed our game-service provider to feed us data closer to real time. They needed to build up a mechanism to deliver that data, and we needed to build out capabilities to gather it. We needed to rethink the information structure -- totally from scratch.

Gardner: When we think about gambling, gaming, or lotteries, in many cases this is an awful lot of data, a very big undertaking. Give us a sense of the size of the data and the disparity of the three organizations that came together in the Finnish national gaming reorganization.

Räsänen: I'll talk about our current situation, looking ahead to the new combined company we are starting up in 2017.

We have a big company from a customer point of view. We have 1.8 million consumers, and Finland has a population of 5.5 million. So, we have quite a lot of Finnish consumers. When it comes to transactions, we get one to three million transactions per day. So it's quite large, if you think about the transactional data.

In addition to that, we gather different kinds of information about our web store; it’s one of the biggest retail web stores in Finland.

Gardner: It’s one thing to put in a new platform, but it’s another to then change the culture and the organization -- and transform into a digital business. How is the implementation of your new data environment aiding in your cultural shift?

Räsänen: Luckily, Veikkaus has a background of doing things quite analytically. If you think about a web store, there is a culture of monitoring what we're doing -- if we're running some changes in our web store, whether they work or not. That's a good thing.
But we are redoing our whole data technology. We added the Apache Kafka integration point, and then Cloudera, the Hadoop system. Then we added a new ETL tool for us, Pentaho, and last but not least, HPE Vertica. It's been really challenging for us, with lots of different things to consider and learn.
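
As a sketch of how such a pipeline hangs together, the snippet below publishes a wager event to Kafka with the kafka-python client; downstream consumers (the Hadoop archive, the Pentaho ETL into Vertica) would read the same topic independently. The topic and field names are hypothetical, since Veikkaus's actual schemas are not described in the interview.

```python
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=["kafka:9092"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# The game-service provider publishes each wager as an event. Downstream
# systems consume the same topic independently instead of being wired
# together with point-to-point integrations.
producer.send("wager-events", {
    "consumer_id": "123456",
    "game": "lotto",
    "stake_eur": 2.00,
    "placed_at": "2016-12-19T18:45:00Z",
})
producer.flush()
```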

Luckily, we've been able to use good external consultants to help us out, but as you said, we can always make the technology work better. In transforming the culture of doing things, we're still definitely in the middle of our journey.

Gardner: I imagine you'll want to better analyze what takes place within your organization so it’s not just serving the data and managing the transactions. There's an opportunity to have a secondary benefit, which is more control of your data. The more insight you have allows you to adapt and improve your customer experience and customer service. Have you been able to start down that path of that secondary analysis of what goes on internally?

New level of data

Räsänen: Some of our key data was even out of our hands, in our service provider's environments. We wanted to get all the relevant data to us, and now we've been working on that new level of data access. We have analysts working on that, both IT and business people, browsing the data. They already have some findings on things that previously they couldn't even have asked or thought about. So, we have been getting our information up to date.

Gardner: Can you give us more specific examples of how you've been able to benefit from this new digital environment?

Räsänen: Yeah, consumer communication and CRM is one of the key successes -- something we needed to have in place. We've been able to constantly improve on that. Before, we had data that was too old, but now we have near-real-time data. We get one-minute-old data, so we can communicate with the consumers better. We know whether they've been playing their lotteries or betting on football matches.

We can say, "It’s time for football today, and you haven’t yet placed a bet." We can communicate, and on the other hand, we can avoid disturbing customers by sending out e-mails or SMS messages about things they've already done.
Gardner: Yes, less spam, but more help. It’s important, of course, with any organization like this in a government environment, for trust and safety to be involved. I should think that there's some analysis to help keep people from overdoing it and managing the gaming for a positive outcome.

Räsänen: Definitely. That's one of the key metrics we're measuring with our consumers, so that gaming stays responsible. We need to see that all the things they do remain healthy because, as you said, we're a national company, it's a very regulated market, and that kind of thing.

Gardner: But a great deal of good comes from this. I understand that more than 1 billion euros a year go to the common good of people living in Finland. So, there are a lot of benefits when this is done properly.

Now that you've gone quite a ways into this, and you'll be moving to the new combined organization at the first of 2017, what advice would you give to someone who is beginning a big-data consolidation and modernization journey? What lessons have you learned that you might share?

Out of the box

Räsänen: If you're experimenting, you need to start to think a little bit out of the box. Integration is one crucial part; avoid direct point-to-point integration as much as possible.

We're utilizing Apache Kafka as an integration point, and that's one of the crucial things, because then you can "appify" everything. You provide an application interface for integrating systems, and that will help those of us in gaming companies.

Gardner: A lot a services-orientation?

Räsänen: That's one of the components of our data architecture. We have been using our Cloudera Hadoop system for archiving, and we're building our capabilities on top of that. In addition, of course, we have HPE Vertica. It's one of the most crucial things in our data ecosystem, because it's traditional enterprise data warehousing in the sense that it's a SQL database. Users can take benefit from that, and it's lightning-fast. You need to design all the components and make each work in the role it's best suited for.
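
To illustrate the point about plain SQL, here is a hedged example using the open-source vertica-python client to run an ordinary aggregate query. The connection details and the table and column names are hypothetical.

```python
import vertica_python

conn_info = {"host": "vertica.example.local", "port": 5433,
             "user": "analyst", "password": "...", "database": "dw"}

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    # Standard SQL: yesterday-and-today turnover per game, biggest first.
    cur.execute("""
        SELECT game, COUNT(*) AS transactions, SUM(stake_eur) AS turnover
        FROM wager_events
        WHERE placed_at >= CURRENT_DATE - 1
        GROUP BY game
        ORDER BY turnover DESC
    """)
    for row in cur.fetchall():
        print(row)
```
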
Gardner: And of course SQL is very commonly understood as the query language. There's no great change there, but it's really putting it into the hands of more people.

Räsänen: I've been writing or talking in SQL since the beginning of the '90s, and it's actually a pretty easy language to communicate in, even between business and IT, because at least at some level it's self-explanatory. That's where the communication matters.

Gardner: Just a much better engine under the hood, right?

Räsänen: Yeah, exactly.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Wednesday, December 14, 2016

WWT took an enterprise Tower of Babel and delivered comprehensive intelligent search

The next BriefingsDirect Voice of the Customer digital transformation case study highlights how World Wide Technology, known as WWT, in St. Louis, found itself with a very serious yet somehow very common problem -- users simply couldn’t find relevant company content.

We'll explore how WWT reached deep into its applications, data, and content to rapidly and efficiently create a powerful Google-like, pan-enterprise search capability. Not only does it search better and empower users, but the powerful internal index also sets the stage for expanded capabilities, using advanced analytics to engender a more productive and proactive digital business culture.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to describe how WWT took an enterprise Tower of Babel and delivered cross-applications intelligent search are James Nippert, Enterprise Search Project Manager, and Susan Crincoli, Manager of Enterprise Content, both at World Wide Technology in St. Louis. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: It seems pretty evident that the better search you have in an organization, the better people are going to find what they need as they need it. What holds companies back from delivering results like people are used to getting on the web?

Nippert: It's the way things have always been. You just had to drill down from the top level. You go to your Exchange, your email, and start there. Did you save a file here? "No, I think I saved it on my SharePoint site," and so you try to find it there, or maybe it was in a file directory.

Those are the steps that people have been used to, because it's how they've been doing it their entire lives, and it's the nature of the beast as we bring more and more enterprise applications into the fold. You have enterprises with 100 or 200 applications, and each of those has its own unique data silos. So, users have to juggle all of these different content sources where stuff could be saved. They're just used to having to dig through each one of those to try to find whatever they're looking for.

Gardner: And we’ve all become accustomed to instant gratification. If we want something, we want it right away. So, if you have to tag something, or you have to jump through some hoops, it doesn’t seem to be part of what people want. Susan, are there any other behavioral parts of this?

Find the world

Crincoli: We, as consumers, are getting used to the Google-like searching. We want to go to one place and find the world. In the information age, we want to go to one place and be able to find whatever it is we’re looking for. That easily transfers into business problems. As we store data in myriad different places, the business user also wants the same kind of an interface.

Gardner: Certain tools that can only look at a certain format or can only deal with certain tags or taxonomy are strong, but we want to be comprehensive. We don’t want to leave any potentially powerful crumbs out there not brought to bear on a problem. What’s been the challenge when it comes to getting at all the data, structured, unstructured, in various formats?

Nippert: Traditional search tools are built off of document metadata. It’s those tags that go along with records, whether it’s the user who uploaded it, the title, or the date it was uploaded. Companies have tried for a long time to get users to tag with additional metadata that will make documents easier to search for. Maybe it’s by department, so you can look for everything in the HR Department.

At the same time, users don't want to spend half an hour tagging a document; they just want to load it and move on with their day. Take pictures, for example. Most enterprises have hundreds of thousands of stored pictures, but they're all named with whatever number the camera assigned, like DC0001. If you have 1,000 pictures named that way, you can't have a successful search, because no search engine will be able to tell just by that title -- and nothing else -- what users want to find.

Gardner: So, we have a situation where the need is large and the paybacks could be large, but the task and the challenge are daunting. Tell us about your journey. What did you do in order to find a solution?

Nippert: We originally recognized a problem with our on-premises Microsoft SharePoint environment. We were using an older version of SharePoint whose search relied mostly on metadata, and our users weren't uploading any metadata along with their intranet content.

We originally set out to solve that issue, but then, as we began interviewing business users, we understood very quickly that this is an enterprise-scale problem. Scaling out even further, we found out it’s been reported that as much as 10 percent of staffing costs can be lost directly to employees not being able to find what they're looking for. Your average employee can spend over an entire work week per year searching for information or documentation that they need to get their job done.

So it's a very real problem. WWT noticed it over the last couple of years, but as the velocity and volume of data increase, it's only going to become more apparent. With that in mind, we set out to start an RFI process with all the enterprise search leaders. We used the Gartner Magic Quadrants and started talks with all of the Magic Quadrant leaders. Then, through a down-selection process, we eventually landed on HPE.

We have a wonderful strategic partnership with them. We went with the HPE IDOL tool, which has been one of the leaders in enterprise search, as well as big-data analytics, for well over a decade now, because it's a very extensible platform, something that you can really scale out, customize, and build on top of. It doesn't just do one thing.
Gardner: And it's one solution to let people find what they're looking for, but when you're comprehensive and you can get at all kinds of data in all sorts of apps, silos, and nooks and crannies, you can deliver results that the searching party didn't even know were there. The results can be perhaps more powerful than they were originally expecting.

Susan, any thoughts about a culture, a digital transformation benefit, when you can provide that democratization of search capability, but maybe extended into almost analytics or some larger big-data type of benefit?

Multiple departments

Crincoli: We're working across multiple departments and we have a lot of different internal customers that we need to serve. We have a sales team, business development practices, and professional services. We have all these different departments that are searching for different things to help them satisfy our customers’ needs.

With HPE being a partner, where their customers are our customers, we have this great relationship with them. It helps us to see the value across all the different things that we can bring to bear to get all this data and then, as we move forward, how we can help people get more relevant results.

If something is searched for one time, versus 100 times, then that’s going to bubble up to the top. That means that we're getting the best information to the right people in the right amount of time. I'm looking forward to extending this platform and to looking at analytics and into other platforms.

Gardner: That’s why they call it "intelligent search." It learns as you go.

Nippert: The concept behind intelligent search is really two-fold. It first focuses on business empowerment, which is letting your users find whatever it is specifically that they're looking for, but then, when you talk about business enablement, it’s also giving users the intelligent conceptual search experience to find information that they didn’t even know they should be looking for.

If I'm a sales representative and I'm searching for company "X," I need to find any of the Salesforce data on that, but maybe I also need to find the account manager, maybe I need to find professional services’ engineers who have worked on that, or maybe I'm looking for documentation on a past project. As Susan said, that Google-like experience is bringing that all under one roof for someone, so they don’t have to go around to all these different places; it's presented right to them.
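
As a loose sketch of bringing multiple silos "under one roof," the following Python fans a query out to several sources in parallel and merges the hits into one list. The source names and scoring are invented for illustration; this is not how IDOL's actual connectors work.

```python
from concurrent.futures import ThreadPoolExecutor

def federated_search(query, sources):
    """sources: dict of name -> callable(query) returning scored hits
    as (score, title) tuples. Returns one merged, ranked list."""
    with ThreadPoolExecutor(max_workers=len(sources)) as pool:
        futures = {name: pool.submit(fn, query) for name, fn in sources.items()}
        hits = []
        for name, future in futures.items():
            for score, title in future.result():
                hits.append((score, name, title))
    return sorted(hits, reverse=True)  # best matches first, source-agnostic

results = federated_search("company X", {
    "salesforce": lambda q: [(0.92, "Account: company X")],
    "sharepoint": lambda q: [(0.88, "Project plan - company X")],
    "people":     lambda q: [(0.75, "Account manager for company X")],
})
```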

Gardner: Tell us about World Wide Technology, so we understand why having this capability is going to be beneficial to your large, complex organization?
Crincoli: We're a $7-billion organization, and we have strategic partnerships with Cisco, HPE, EMC, NetApp, and others. We have a lot of solutions that we bring to market. We're a solution integrator, and we're also a reseller. So, when you're an account manager and you're looking across all of the various solutions that we can provide to solve the customer's problems, you need to be able to find all of the relevant information.

You probably need to find people as well. Not only do I need to find how we can solve this customer’s problem, but also who has helped us to solve this customer’s problem before. So, let me find the right person, the right pre-sales engineer or the right post-sales engineer. Or maybe there's somebody in professional services. Maybe I want the person who implemented it the last time. All these different people, as well as solutions that we can bring in help give that sales team the information they need right at their fingertips.

It's very powerful for us to think about the struggles that a sales manager might have, because we have so many different ways that we can help our customers solve those problems. We're giving them that data at their fingertips, whether that's from Salesforce all the way through to SharePoint, or something in an email from last year that they can't find. They know they've talked to somebody about this before, or they want to know who helped them. Pulling all of that information together is so powerful.

We don’t want them to waste their time when they're sitting in front of a customer trying to remember what it was that they wanted to talk about.

Gardner: It really amounts to customer service benefits in a big way, but I'm also thinking this is a great example of how, when you architect and deploy and integrate properly on the core, on the back end, that you can get great benefits delivered to the edge. What is the interface that people tend to use? Is there anything we can discuss about ease of use in terms of that front-end query?

Simple and intelligent

Nippert: As far as ease of use goes, it’s simplicity. If you're a sales rep or an engineer in the field, you need to be able to pull something up quickly. You don’t want to have to go through layers and layers of filtering and drilling down to find what you're looking for. It needs to be intelligent enough that, even if you can’t remember the name of a document or the title of a document, you ought to be able to search for a string of text inside the document and it still comes back to the top. That’s part of the intelligent search; that’s one of the features of HPE IDOL.

Whenever you're talking about the front end, it should be something light and something fast. Again, it's synonymous with what users are used to on the consumer side, which is Google. There are very few search platforms out there that can do it better. Look at the Google home page. It's a search bar and two buttons; that's all it is. When users are used to that at home and they come to work, they don't want a cluttered, clumsy, heavy interface. They just need to be able to find what they're looking for as quickly and simply as possible.

Gardner: Do you have any examples where you can qualify or quantify the benefit of this technology and this approach that will illustrate why it’s important?

Nippert: We actually did a couple of surveys, pre- and post-implementation. As I mentioned earlier, it was very well known that our search demands weren't being met. The feedback that we heard over and over again was "search sucks." People would say that all the time. So, we tried to get a little more quantification around that, with some surveys before and after the implementation of IDOL search for the enterprise. We got a couple of really great numbers out of it. We saw that overall satisfaction with search went up by about 30 percent. Before, it was right in the middle: half of them were happy, half of them weren't.

Now, we're well over 80 percent overall satisfaction with search. It's gotten better at finding everything from documents to records to web pages across the board; it's improving on all of those. As far as specifics go, the thing we really cared about going into this was, "Can I find it on the first page?" How often do you ever go to the second page of search results?

With our pre-surveys, we found that under five percent of people were finding it on the first page. They had to go to the second or third page, or pages four through 10. Most users just gave up if it wasn't on the first page. Now, over 50 percent of users are able to find what they're looking for on the very first page and, if not, then definitely on the second or third page.

We've gone from a completely unsuccessful search experience to a valid, successful search experience that we can continue to enhance.

Crincoli: I agree with James. When I came to the company, I felt that way, too -- search sucks. I couldn’t find what I was looking for. What’s really cool with what we've been able to do is also review what people are searching for. Then, as we go back and look at those analytics, we can make those the best bets.

If we see hundreds of people are searching for the same thing or through different contexts, then we can make those the best bets. They're at the top and you can separate those things out. These are things like the handbook or PTO request forms that people are always searching for.
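
A simple sketch of that "best bets" idea in Python: count query frequencies from the search log and pin a curated result whenever a query crosses a popularity threshold. Everything here is illustrative, not WWT's implementation.

```python
from collections import Counter

query_log = ["pto request form", "handbook", "pto request form",
             "expense report", "handbook", "pto request form"]

counts = Counter(q.lower() for q in query_log)
BEST_BET_THRESHOLD = 2  # arbitrary popularity cutoff

best_bets = {q for q, n in counts.items() if n >= BEST_BET_THRESHOLD}

def rank(results, query):
    """Pin a curated best-bet answer first when the query is popular."""
    if query.lower() in best_bets:
        return [("BEST BET", query)] + results
    return results
```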

Gardner: I'm going to imagine that if I were in the healthcare, pharma, or financial sectors, I'd want to give my employees this capability, but I'd also be concerned about proprietary information and protection of data assets. Maybe you're not doing this, but I wonder what you know about allowing for the best of search, but also with protection, warnings, and some sort of governance and oversight.

Governance suite

Nippert: There is a full governance suite built in, and it comes through a couple of different features. One of the main ones is Eduction, where, as IDOL scans through every single line of a document, a PowerPoint slide, or a spreadsheet, whatever it is, it can recognize credit card numbers, Social Security numbers, anything that's personally identifiable information (PII), and either pull that out, delete it, send alerts, whatever.

You have that full governance suite built into anything that you've indexed. It also has a mapped-security engine built in, so it can map the security of any content source. For example, in SharePoint, if you have access to a file and I don't, and we each ran a search, you would see it come back in the results and I wouldn't. So, it can honor any content source's security.
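
The following Python sketch shows the general kind of PII scan being described. It is emphatically not the IDOL Eduction API -- just regular expressions that flag credit-card-like and Social Security-like strings.

```python
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_for_pii(text):
    """Return (kind, match) hits so compliance can redact or alert."""
    hits = []
    for kind, pattern in PII_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((kind, match))
    return hits

print(scan_for_pii("Card 4111 1111 1111 1111 on file; SSN 123-45-6789."))
```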

Gardner: Your policies and your rules are what’s implemented, and that’s how it goes?

Nippert: Exactly. It's up to us as the search team, working with the compliance or governance teams, to make sure that happens.

Gardner: As we think about the future and other datasets that might be brought in, search is a great tool for access to more than just corporate data, enterprise data, and content -- maybe also the front end for advanced querying, analytics, and business intelligence (BI). Has there been any talk about how to take what you're doing in enterprise search and munge that, for lack of a better word, with analytics, BI, and some of the other big-data capabilities?

Nippert: Absolutely. HPE has just recently released BI for Human Information (BIFHI), their new front end for IDOL, and it has a ton of analytics capabilities built into it. We're really excited to start looking at a lot of the rich text and rich media analytics, which can pull the words right off an MP4 raw video and transcribe it at the same time. But more than that, it's going to be something that we can continue to build on top of, to come up with our own unique analytic solutions.

Gardner: So talk about empowering your employees. Everybody can become a data scientist eventually, right, Susan?

Crincoli: That's right. If you think about all of the various contexts, we started out with just a few sources, but there's also some excitement because we've built custom applications, both for our customers and for our internal work. We're taking that to the next level by building an API and pulling that data into the enterprise search, which makes it even more extensible for our enterprise.

Gardner: I suppose the next step might be the natural language audio request where you would talk to your PC, your handheld device, and say, "World Wide Technology feed me this," and it will come back, right?

Nippert: Absolutely. You won’t even have to lift a finger anymore.

Cool things

Crincoli: It would be interesting to loop in what they are doing with Cortana at Microsoft and some of the machine learning and some of the different analytics behind Cortana. I'd love to see how we could loop that together. But those are all really cool things that we would love to explore.

Gardner: But you can’t get there until you solve the initial blocking and tackling around content and unstructured data synthesized into a usable format and capability.
Nippert: Absolutely. The flip side of controlling your data sources, as we're learning, is that there are a lot of important data sources out there that aren’t good candidates for enterprise search whatsoever. When you look at a couple of terabytes or petabytes of MongoDB data that’s completely unstructured and it’s just binaries, that’s enterprise data, but it’s not something that anyone is looking for.

So even though the original knee-jerk reaction is to index everything -- get everything into search, so you can search across everything -- you also have to take it with a grain of salt. A new content source could add hundreds or thousands of results that potentially clutter the accuracy of results. Sometimes, it's actually knowing when not to search something.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Thursday, December 1, 2016

HPE takes aim at customer needs for speed and agility in age of IoT, hybrid everything

A leaner, more streamlined Hewlett Packard Enterprise (HPE) advanced across several fronts at HPE Discover 2016 in London, making inroads into hybrid IT, Internet of Things (IoT), and on to the latest advances in memory-based computer architecture. All the innovations are designed to help customers address the age of digital disruption with speed, agility, and efficiency.

Addressing a Discover audience for the first time since HPE announced spinning off many software lines to Micro Focus, Meg Whitman, HPE President and CEO, said that the company is not only committed to those assets, becoming a major owner of Micro Focus in the deal, but is building its software investments.

"HPE is not getting out of software but doubling-down on the software that powers the apps and data workloads of hybrid IT," she said Tuesday at London's ExCel exhibit center.

"Massive compute resources need to be brought to the edge, powering the Internet of Things (IoT). ... We are in a world now where everything computes, and that changes everything," said Whitman, who has now been at the helm of HPE and HP for five years.

HPE's new vision: To be the leading provider of hybrid IT, to run today's data centers, and then bridge the move to multi-cloud and empower the intelligent edge, said Whitman. "Our goal is to make hybrid IT simple and to harness the intelligent edge for real-time decisions" to allow enterprises of all kinds to win in the marketplace, she said.

Hyper-converged systems

To that end, the company this week announced an extension of HPE Synergy's fully programmable infrastructure to HPE's multi-cloud platform and hyper-converged systems, enabling IT operators to deliver software-defined infrastructure as quickly as customers' businesses demand. The new solutions include:
  • HPE Synergy with HPE Helion CloudSystem 10 -- This brings full composability across compute, storage and fabric to HPE's OpenStack technology-based hybrid cloud platform to enable customers to run bare metal, virtualized, containerized and cloud-native applications on a single infrastructure and dynamically compose and recompose resources for unmatched agility and efficiency.
  • HPE Hyper Converged Operating Environment -- The software update leverages composable technologies to deliver new capabilities to the HPE Hyper Converged 380, including new workspace controls that allow IT managers to compose and recompose virtualized resources for different lines of business, making it easier and more efficient for IT to act as an internal service provider to their organization.
This move delivers a full-purpose composable infrastructure platform, treating infrastructure as code, enabling developers to accelerate application delivery, says HPE. HPE Synergy has nearly 100 early access customers across a variety of industries, and is now broadly available. [Disclosure: HPE is a sponsor of BriefingsDirect podcasts.]

This year's HPE Discover was strong on showcasing the ecosystem approach to creating and maintaining hybrid IT. Heavy hitters from Microsoft Azure, Arista, and Docker joined Whitman on stage to show their allegiance to HPE's offerings -- along with their own -- as essential ingredients to Platform 3.0 efficiency.

See more on my HPE Discover analysis on The Cube.

HPE also announced plans to expand Cloud28+, an open community of commercial and public sector organizations with the common goal of removing barriers to cloud adoption. Supported by HPE's channel program, Cloud28+ unites service providers, solution providers, ISVs, system integrators, and government entities to share knowledge, resources and services aimed at helping customers build and consume the right mix of cloud solutions for their needs.

Internet of Things

Discover 2016 also saw new innovations designed to help organizations rapidly, securely, and cost-effectively deploy IoT devices in wide-area, enterprise, and industrial deployments.
"Cost-prohibitive economics and the lack of a holistic solution are key barriers for mass adoption of IoT," said Keerti Melkote, Senior Vice President and General Manager, HPE. "By approaching IoT with innovations to expand our comprehensive framework built on edge infrastructure solutions, software platforms, and technology ecosystem partners, HPE is addressing the cost, complexity and security concerns of organizations looking to enable a new class of services that will transform workplace and operational experiences."

As organizations integrate IoT into mainstream operations, the onboarding and management of IoT devices remains costly and inefficient, particularly at large scale. Concurrently, the diverse variations of IoT connectivity, protocols, and security prevent organizations from easily aggregating data across a heterogeneous fabric of connected things.

To improve the economies of scale for massive IoT deployments over wide area networks, HPE announced the new HPE Mobile Virtual Network Enabler (MVNE) and enhancements to the HPE Universal IoT (UIoT) Platform.

As the amount of data generated from smart “things” grows and the frequency at which it is collected increases, so will the need for systems that can acquire and analyze the data in real-time. Real-time analysis is enabled through edge computing and the close convergence of data capture and control systems in the same box.

HPE Edgeline Converged Edge Systems converge real-time analog data acquisition with data center-level computing and manageability, all within the same rugged open standards chassis. Benefits include higher performance, lower energy, reduced space, and faster deployment times.
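
As a rough illustration of analytics converging with data capture at the edge, here is a toy Python monitor that keeps a rolling window of sensor readings and flags outliers locally, rather than shipping raw analog data to a distant data center. The thresholds and names are invented, not drawn from HPE Edgeline.

```python
from collections import deque

class EdgeMonitor:
    """Rolling-window outlier detection running on the edge device itself."""
    def __init__(self, window=100, tolerance=3.0):
        self.readings = deque(maxlen=window)
        self.tolerance = tolerance  # how many standard deviations is "abnormal"

    def ingest(self, value):
        """Return True if the reading deviates sharply from the recent mean."""
        if len(self.readings) >= 10:  # need some history before judging
            mean = sum(self.readings) / len(self.readings)
            var = sum((x - mean) ** 2 for x in self.readings) / len(self.readings)
            std = var ** 0.5
            if std > 0 and abs(value - mean) > self.tolerance * std:
                self.readings.append(value)
                return True            # anomaly -> act locally, in real time
        self.readings.append(value)
        return False
```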

"The intelligent edge is the new frontier of the hybrid computing world," said Whitman. "The edge of the network is becoming a very crowded place, but these devices need to be made more useful."

This means that the equivalent of a big data crunching data center needs to be brought to the edge affordably.

Biggest of big data

"IoT is the biggest of big data," said Tom Bradicich, HPE Vice President and General Manager, Servers and IoT Systems. "HPE EdgeLine and [partner company] PTC help bridge the digital and physical worlds for IoT and augmented reality (AR) for fully automated assembly lines."

IoT and data analysis at the edge help companies finally predict the future and head off failures and maintenance needs in advance. And the ROI on edge computing will be easy to prove when factory downtime can be largely eliminated using IoT, data analysis, and AR at the edge everywhere.

Along these lines, Citrix, together with HPE, has developed a new architecture around HPE Edgeline EL4000 with XenApp, XenDesktop and XenServer to allow graphically rich, high-performance applications to be deployed right at the edge.  They're now working together on next-generation IoT solutions that bring together the HPE Edge IT and Citrix Workspace IoT strategies.

In related news, SUSE has entered into an agreement with HPE to acquire technology and talent that will expand SUSE's OpenStack infrastructure-as-a-service (IaaS) solution and accelerate SUSE's entry into the growing Cloud Foundry platform-as-a-service (PaaS) market.

The acquired OpenStack assets will be integrated into SUSE OpenStack Cloud, and the acquired Cloud Foundry and PaaS assets will enable SUSE to bring to market a certified, enterprise-ready SUSE Cloud Foundry PaaS solution for all customers and partners in the SUSE ecosystem.

As part of the transaction, HPE has named SUSE as its preferred open source partner for Linux, OpenStack IaaS, and Cloud Foundry PaaS.

HPE also put force behind its drive to make high-performance computing (HPC) a growing part of enterprise data centers and private clouds. Hot on the heels of buying SGI, HPE has recognized that public clouds leave little room for those workloads that do not perform best in virtual machines.

Indeed, if all companies buy their IT from public clouds, they have little performance advantage over one another. But many companies want to gain the best systems with the best performance for the workloads that give them advantage, and which run the most complex -- and perhaps value-creating -- applications. I predict that HPC will be a big driver for HPE, both in private cloud implementations and in supporting technical differentiation for HPE customers and partners.

Memory-driven computing

Computer architecture took a giant leap forward with the announcement that HPE has successfully demonstrated memory-driven computing, a concept that puts memory, not processing, at the center of the computing platform to realize performance and efficiency gains not possible today.

Developed as part of The Machine research program, HPE's proof-of-concept prototype represents a major milestone in the company's efforts to transform the fundamental architecture on which all computers have been built for the past 60 years.

Gartner predicts that by 2020, the number of connected devices will reach 20.8 billion and generate an unprecedented volume of data, which is growing at a faster rate than the ability to process, store, manage, and secure it with existing computing architectures.

"We have achieved a major milestone with The Machine research project -- one of the largest and most complex research projects in our company's history," said Antonio Neri, Executive Vice President and General Manager of the Enterprise Group at HPE. "With this prototype, we have demonstrated the potential of memory-driven computing and also opened the door to immediate innovation. Our customers and the industry as a whole can expect to benefit from these advancements as we continue our pursuit of game-changing technologies."

The proof-of-concept prototype, which was brought online in October, shows the fundamental building blocks of the new architecture working together, just as they had been designed by researchers at HPE and its research arm, Hewlett Packard Labs. HPE has demonstrated:
  • Compute nodes accessing a shared pool of fabric-attached memory
  • An optimized Linux-based operating system (OS) running on a customized system on a chip (SOC)
  • Photonics/Optical communication links, including the new X1 photonics module, are online and operational
  • New software programming tools designed to take advantage of abundant persistent memory.
During the design phase of the prototype, simulations predicted the speed of this architecture would improve current computing by multiple orders of magnitude. The company has run new software programming tools on existing products, illustrating improved execution speeds of up to 8,000 times on a variety of workloads. HPE expects to achieve similar results as it expands the capacity of the prototype with more nodes and memory.

In addition to bringing added capacity online, The Machine research project will increase focus on exascale computing. Exascale is a developing area of HPC that aims to create computers several orders of magnitude more powerful than any system online today. HPE's memory-driven computing architecture is incredibly scalable, from tiny IoT devices to the exascale, making it an ideal foundation for a wide range of emerging high-performance compute and data intensive workloads, including big data analytics.

Commercialization

HPE says it is committed to rapidly commercializing the technologies developed under The Machine research project into new and existing products. These technologies currently fall into four categories: Non-volatile memory, fabric (including photonics), ecosystem enablement and security.

Martin Banks, writing in Diginomica, questions whether these new technologies and new architectures represent a new beginning or a last hurrah for HPE. He poses the question to David Chalmers, HPE's Chief Technologist in EMEA, and Chalmers explains HPE's roadmap.

The conclusion? Banks feels that the in-memory architecture has the potential to be the next big step that IT takes. If all the pieces fall into place, Banks says, "There could soon be available a wide range of machines at price points that make fast, high-throughput systems the next obvious choice. . . . this could be the foundation for a whole range of new software innovations."

Storage initiative

Lastly, HPE announced a new initiative to address demand for flexible storage consumption models, accelerate all-flash data center adoption, assure the right level of resiliency, and help customers transform to a hybrid IT infrastructure.

Over the past several years, the industry has seen flash storage rapidly evolve from niche application performance accelerator to the default media for critical workloads. During this time, HPE's 3PAR StoreServ Storage platform has emerged as a leader in all-flash array market share growth, performance, and economics. The new HPE 3PAR Flash Now initiative gives customers a way to acquire this leading all-flash technology on-premises starting at $0.03 per usable Gigabyte per month, a fraction of the cost of public cloud solutions.

"Capitalizing on digital disruption requires that customers be able to flexibly consume new technologies," said Bill Philbin, vice president and general manager, Storage, Hewlett Packard Enterprise. "Helping customers benefit from both technology and consumption flexibility is at the heart of HPE's innovation agenda."

Whitman's HPE, given all of the news at HPE Discover, has assembled the right business path to place HPE and its ecosystems of partners and alliances squarely at the center of the major IT trends of the next five years.

Indeed, I've been at HPE Discover conferences for more than 10 years now, and this keynote address and the news make more sense, as they pertain to the current and future IT market, than any I've seen.

You may also be interested in: