Tuesday, September 19, 2017

How Nokia refactors the video delivery business with new time-managed IT financing models

The next BriefingsDirect IT financing and technology acquisition strategies interview examines how Nokia is refactoring the video delivery business. Learn both about new video delivery architectures and the creative ways media companies are paying for the technology that supports them.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to describe new models of Internet Protocol (IP) video and time-managed IT financing is Paul Larbey, Head of the Video Business Unit at Nokia, based in Cambridge, UK. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: It seems that the video-delivery business is in upheaval. How are video delivery trends coming together to make it necessary for rethinking architectures? How are pricing models and business models changing, too? 

Larbey: We sit here in 2017, but let’s look back 10 years to 2007. There were a couple key events in 2007 that dramatically shaped how we all consume video today and how, as a company, we use technology to go to market.

It's been 10 years since the creation of the Apple iPhone. The iPhone sparked whole new device types, leading eventually to the iPad. Not only that, underneath it Apple developed a lot of technology for streaming video and protecting video over IP, which we still use today. So Apple not only created a new type of device and a new avenue for watching video, it also created new underlying protocols.

It was also 10 years ago that Netflix began to first offer a video streaming service. So if you look back, I see one year in which how we all consume our video today was dramatically changed by a couple of events.

If we fast-forward and look at where that goes in the future, there are two trends we see today that will create challenges tomorrow. First, video has become truly mobile. By mobile video, most people today mean watching films on an iPad or an iPhone -- not on a big TV screen.


The future is personalized 

When your video is mobile, you want to take all your content with you -- and you can't do that today. When you are on an airplane, for example, your content doesn't come with you. Connectivity needs to extend so that your content is available no matter where you are.

Take the simple example of a driverless car. Today, you are driving along, watching the satellite-navigation feed, watching the traffic, and keeping the kids quiet in the back. When driverless cars come, what are you going to be doing? You will still be keeping the kids quiet, but there is a void, a space that needs to be filled with activity -- and clearly extending the content into the car is the natural next step.

The other trend is personalization. TV will become a lot more personalized. Today we all get the same user experience. If we are on the same service provider, it looks the same -- the same colors, the same grid. There is no reason why that should be. There is no reason why my kids shouldn't have a different user interface.

There is no reason why I should have 10 pages of channels that I have to go through to find something that I want to watch.
The user interface presented to me in the morning may be different than the user interface presented to me in the evening. There is no reason why I should have 10 pages of channels that I have to go through to find something that I want to watch. Why aren’t all those channels specifically curated for me? That’s what we mean by personalization. So if you put those all together and extrapolate those 10 years into the future, then 2027 will be a very different place for video.

Gardner: It sounds like a few things need to change between the original content’s location and those mobile screens and those customized user scenarios you just described. What underlying architecture needs to change in order to get us to 2027 safely?

Larbey: It’s a journey; this is not a step-change. This is something that’s going to happen gradually.

But if you step back and look at the fundamental changes -- all video will be streamed. Today, the majority of what we view is broadcast, from cable TV or from a satellite: a signal that goes to everybody at the same time.

If you think about the mobile video concept, if you think about personalization, that is not going to be the case. Today we watch a portion of our video streamed over IP. In the future, it will all be streamed over IP.

And that clearly creates challenges for operators in terms of how to architect the network, how to optimize the delivery, and how to recreate that broadcast experience using streaming video. This is where a lot of our innovation is focused today.

Gardner: You also mentioned in the case of an airplane, where it's not just streaming but also bringing a video object down to the device. What will be different in terms of the boundary between the stream and a download?

It's all about intelligence

Larbey: It's all about intelligence. Firstly, connectivity has to extend and become truly ubiquitous, via technologies such as 5G and the growth of fiber -- something we don't really have today. That will resolve some of the problems, but not all.

And because television will be personalized, the network will know what's in my schedule. If I have an upcoming flight, machine learning can automatically predict what I'm going to do and make sure it suggests the right content in context. It may download the content because it knows I am going to be sitting on a flight for the next 12 hours.
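
The kind of context-aware prefetch described here can be sketched as a simple rule: given predicted offline periods from the viewer's schedule, download likely content in advance. Everything below -- the function name, the two-hours-per-title budget -- is a hypothetical illustration, not Nokia's actual algorithm.

```python
from datetime import datetime, timedelta

def plan_prefetch(offline_periods, watchlist):
    """Pick titles to download ahead of periods without connectivity.

    offline_periods: list of (start, end) datetimes with no connectivity
    watchlist: titles the viewer is predicted to want, most likely first
    Assumes, very roughly, one title per two offline hours.
    """
    downloads = []
    for start, end in offline_periods:
        offline_hours = (end - start).total_seconds() / 3600
        budget = max(1, int(offline_hours // 2))
        downloads.extend(watchlist[:budget])
    return downloads

# A 12-hour flight detected in the viewer's schedule:
now = datetime(2017, 9, 19, 8, 0)
flight = [(now + timedelta(hours=4), now + timedelta(hours=16))]
films = ["Film A", "Film B", "Film C"]
print(plan_prefetch(flight, films))
```

In a real deployment the watchlist would itself come from the machine-learning prediction the interview mentions; here it is just a hand-written list.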

Gardner: We are putting intelligence into the network to be beneficial to the user experience. But it sounds like it’s also going to give you the opportunity to be more efficient, with just-in-time utilization -- minimal viable streaming, if you will.

How does the network becoming more intelligent also benefit the carriers, the deliverers of the content, and even the content creators and owners? There must be an increased benefit for them in utility as well as in the user experience.

Larbey: Absolutely. We think everything moves into the network, and the intelligence becomes the network. What does that do immediately? It means the operators don't have to buy set-top boxes, which are expensive, costly to maintain, and stay in the network a long time. Devices can have a much lighter client capability, which basically just renders the user interface.

The first obvious example of all this, which we are heavily focused on, is storage: taking the hard drive out of the set-top box and putting that storage back into the network. Some huge deployments are going on at the moment in collaboration with Hewlett Packard Enterprise (HPE), using the HPE Apollo platform to deploy high-density storage systems that remove the need to ship a set-top box with a hard drive in it.


Now, what are the advantages of that? Everybody thinks of cost first -- you've taken the hard drive out and put the storage in the network, and that's clearly one element. But if you talk to any operator, their biggest cause of subscriber churn is when somebody's set-top box fails and they lose their personal recordings.

The personal connection you had with your service isn’t there any longer. It’s a lot easier to then look at competing services. So if that content is in the network, then clearly you don’t have that churn issue. Not only can you access your content from any mobile device, it’s protected and it will always be with you.

Taking the CDN private

Gardner: For the past few decades, part of the solution to this problem was to employ a content delivery network (CDN) and use that in a variety of ways. It started with web pages and the downloading of flat graphic files. Now that's extended into all sorts of objects and content. Are we going to do away with the CDN? Are we going to refactor it, is it going to evolve? How does that pan out over the next decade?

Larbey: The CDN will still exist. That still becomes the key way of optimizing video delivery -- but it changes. If you go back 10 years, the only CDNs available were CDNs in the Internet. So it was a shared service, you bought capacity on the shared service.

Even today that's how a lot of video from the content owners and broadcasters is streamed. For the past seven years, we have been taking that technology and deploying it in private networks -- with both telcos and cable operators -- so they can have their own private CDN, and there are a lot of advantages to having your own private CDN.

You get complete control of the roadmap. You can start to introduce advanced features such as targeted ad insertion, blackout, and features like that to generate more revenue. You have complete control over the quality of experience, which you don't if you outsource to a shared service.
There are a lot of advantages to having your own private CDN. You have complete control over the quality of experience, which you don't if you outsource to a shared service.

What we’re seeing now is both the programmers and broadcasters taking an interest in that private CDN because they want the control. Video is their business, so the quality they deliver is even more important to them. We’re seeing a lot of the programmers and broadcasters starting to look at adopting the private CDN model as well.

The challenge is how do you build that? You have to build for peak. Peak is generally driven by live sporting events and one-off news events. So that leaves you with a lot of capacity that’s sitting idle a lot of the time. With cloud and orchestration, we have solved that technically -- we can add servers in very quickly, we can take them out very quickly, react to the traffic demands and we can technically move things around.
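
The scale-out/scale-in decision described above can be sketched as simple capacity arithmetic. The per-server stream capacity and utilization target below are made-up illustrative numbers, not real CDN figures.

```python
import math

def scaling_decision(active_streams, servers, streams_per_server=1000,
                     target_utilization=0.7):
    """Suggest how many CDN edge servers to add (+) or release (-).

    Sizes the fleet so the current load sits at the target utilization.
    The per-server capacity and utilization target are illustrative
    assumptions.
    """
    capacity_per_server = streams_per_server * target_utilization
    needed = max(1, math.ceil(active_streams / capacity_per_server))
    return needed - servers

# Kick-off of a big live sporting event: demand spikes...
print(scaling_decision(560_000, servers=400))   # positive: scale out
# ...and after the final whistle the extra capacity can be released.
print(scaling_decision(80_000, servers=800))    # negative: scale in
```

The interview's point is that orchestration makes this loop easy technically; the harder part was getting a commercial model that only charges for the servers while they are in service.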

But the commercial model has lagged behind. So we have been working with HPE Financial Services to understand how we can innovate on that commercial model as well and get that flexibility -- not just from an IT perspective, but also from a commercial perspective.

Gardner: Tell me about your private CDN technology. Is that a Nokia product? Tell us about your business unit and the commercial models.

Larbey: As a business unit, we help anyone who has content -- be that broadcasters or programmers, who pay the operators to stream their content over IP -- to launch new services. We have a product focused on video networking: how to optimize video, how it's delivered, how it's streamed, and how it's personalized.

It can be a private CDN product, which we have deployed for the last seven years, and we have a cloud digital video recorder (DVR) product, which is all about moving the storage capacity into the network. We also have a systems integration part, which brings a lot of technology together and allows operators to combine vendors and partners from the ecosystem into a complete end-to-end solution.


Gardner: With HPE being a major supplier for a lot of the hardware and infrastructure, how does the new cost model change from the old model of pay up-front?

Flexible financial formats

Larbey: I would not classify HPE as a supplier; I think they are our partner. We work very closely together. We use HPE ProLiant DL380 Gen9 Servers, the HPE Apollo platform, and the HPE Moonshot platform, which are, as you know, world-leading compute-storage platforms that deliver these services cost-effectively. We have had a long-term technical relationship.

We are now moving toward how we advance the commercial relationship. We are working with the HPE Financial Services team to look at how we can get additional flexibility. There are a lot of pay-as-you-go-type financial IT models that have been in existence for some time -- but these don’t necessarily work for my applications from a financial perspective.
Our goal is to use 100 percent of the storage all of the time to maximize the cache hit-rate.

In the private CDN and the video applications, our goal is to use 100 percent of the storage all of the time to maximize the cache hit-rate. With the traditional IT payment model for storage, my application fundamentally breaks that. So having a partner like HPE that was flexible and could understand the application is really important.

We also needed flexibility of compute scaling. We needed to be able to deploy for the peak, but not pay for that peak at all times. That’s easy from the software technology side, but we needed it from the commercial side as well.

And thirdly, we have been trying to enter a new market, focused on the programmers and broadcasters, which is not our traditional segment. We have been deploying our CDN to the largest telcos and cable operators in the world, but the programmers and broadcasters are used to buying a service on the Internet; they work in a different way and have different requirements.

So we needed a financial model that allowed us to address that, but also a partner who would take some of the risk, too, because we didn’t know if it was going to be successful. Thankfully it has, and we have grown incredibly well, but it was a risk at the start. Finding a partner like HPE Financial Services who could share some of that risk was really important. 

Gardner: These video delivery organizations are increasingly operating on a subscription basis, so they would like their costs to be incurred on a similar basis, so it all makes sense across the services ecosystem.
Our tolerance for buffering just doesn't exist anymore; we demand and expect the highest-quality video.

Larbey: Yes, absolutely. That is becoming more and more important. If you go back to the very first Internet video you watched -- a cat falling off a chair on YouTube -- it didn't matter if it was buffering; that wasn't relevant. Now, our tolerance for buffering just doesn't exist anymore, and we demand and expect the highest-quality video.

If TV in 2027 is going to be purely IP, then clearly it has to deliver exactly the same quality of experience as the broadcast technologies. And that creates challenges. The most obvious example: go to any IPTV operator and compare a live streamed channel with the same channel on broadcast -- there is a big delay.

So there is a lag of 30 to 40 seconds between the live event and what you are seeing on your IP stream. If you are in an apartment block, watching a live sporting event, and your neighbor sees it 30 to 40 seconds before you, that creates a big issue. A lot of the innovation we're now doing in streaming technologies is to deliver that same broadcast experience.


Gardner: We now also have to think about 4K resolution, the intelligent edge, no latency, and all with managed costs. Fortunately at this time HPE is also working on a lot of edge technologies, like Edgeline and Universal IoT, and so forth. There’s a lot more technology being driven to the edge for storage, for large memory processing, and so forth. How are these advances affecting your organization? 

Optimal edge: functionality and storage

Larbey: There are two elements. Compute at the edge is absolutely critical. We are going to move all the intelligence into the network; clearly you need to reduce the latency, and you need to be able to scale that functionality. This functionality used to be scaled across millions of households, and now it has to be done in the network. The only way you can effectively build the network to handle that scale is to put as much functionality as you can at the edge of the network.

The HPE platforms allow you to deploy that compute and storage deep into the network, and they are absolutely critical for our success. We will run our CDN, our ad insertion, and all that capability as deeply into the network as an operator wants to go -- and certainly the deeper, the better.

The other thing we try to optimize all of the time is storage. One of the challenges with network-based recording -- especially in the US, due to content-use regulations -- is that you have to store a copy per user. If, for example, both of us record the same program, there are two versions of that program in the cloud. That's clearly very inefficient.

The question is how you optimize that, and also support the just-in-time transcoding techniques that have been talked about for some time. Just-in-time transcoding creates the right bitrate on the fly, so you don't have to store all the different formats, which would dramatically reduce storage costs.
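
As a rough back-of-the-envelope illustration of why transcoding on the fly cuts storage -- all figures here are hypothetical, not from the interview:

```python
def storage_tb(recordings, avg_hours, formats, gb_per_hour):
    """Total storage (TB) if every format of every recording is kept."""
    return recordings * avg_hours * gb_per_hour * formats / 1000

# Hypothetical library: 1 million copy-per-user recordings, 1.5 h each,
# ~2 GB/h per copy, normally kept in 4 bitrate/format variants.
all_formats = storage_tb(1_000_000, 1.5, formats=4, gb_per_hour=2)
jit_only = storage_tb(1_000_000, 1.5, formats=1, gb_per_hour=2)
print(all_formats, jit_only)  # keeping one mezzanine copy is 4x smaller
```

Under these made-up numbers, storing only a single source copy and generating the other renditions on demand trades a fourfold storage reduction for extra CPU at playback time -- which is exactly the trade-off the Moonshot discussion below is about.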

The challenge has always been the central processing unit (CPU) capacity needed to do that, and that's where HPE and the Moonshot platform, with its great compute density, come in. We use the Intel media library for the transcoding. It's a really nice compute platform. But we still wanted to get even more out of it, so at our Bell Labs research facility we developed a capability called skim storage, which, for a slight increase in storage, allows us to double the number of transcodes we can do on a single CPU.

That approach takes a really, really efficient hardware platform with nice technology and doubles the density we can get from it -- and that’s a big change for the business case.

Gardner: It’s astonishing to think that that much encoding would need to happen on the fly for a mass market; that’s a tremendous amount of compute, and an intense compute requirement. 

Content popularity

Larbey: Absolutely, and you have to be intelligent about it. At the end of the day, human behavior works in our favor. If people do not watch a recording within the first seven days, they are probably never going to watch it. That content in particular can then be optimized from a storage perspective. You still need the ability to recreate it on the fly, but it improves the scale model.
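
That seven-day rule of thumb could be expressed as a simple tiering policy. The field names and data structure below are illustrative only:

```python
from datetime import datetime, timedelta

SEVEN_DAYS = timedelta(days=7)

def tier_recordings(recordings, now):
    """Split recordings into hot (likely to be watched) and cold.

    Follows the rule of thumb from the interview: a recording not
    watched within seven days probably never will be, so its storage
    can be optimized while keeping the ability to recreate it on the fly.
    """
    hot, cold = [], []
    for rec in recordings:
        stale = now - rec["recorded_at"] > SEVEN_DAYS and not rec["watched"]
        (cold if stale else hot).append(rec["title"])
    return hot, cold

now = datetime(2017, 9, 19)
recs = [
    {"title": "news", "recorded_at": now - timedelta(days=1), "watched": False},
    {"title": "drama", "recorded_at": now - timedelta(days=30), "watched": False},
]
print(tier_recordings(recs, now))  # (['news'], ['drama'])
```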

Gardner: So the more intelligent you can be about users' behavior and usage patterns, the more efficient you can be. Intelligence seems to be the real key here.

Larbey: Yes, even within the CDN itself today we have a number of algorithms that predict content popularity. We want to maximize the disk usage and keep the popular content on the disk -- what's the point of deleting a piece of popular content just because a piece of long-tail content has been requested? We run a lot of algorithms to predict content popularity so that we can make sure we are optimizing the hardware platform accordingly.
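
A minimal sketch of popularity-aware cache admission and eviction -- a stand-in for the kind of prediction-driven management described, not the actual CDN algorithm:

```python
def evict_for(new_item, new_score, cache, capacity):
    """Admit new_item only if it beats the least popular cached item.

    cache maps item -> predicted popularity score. Returns the item
    that was kept off (or pushed out of) the disk, or None if the
    cache simply had room.
    """
    if len(cache) < capacity:
        cache[new_item] = new_score
        return None
    coldest = min(cache, key=cache.get)
    if new_score <= cache[coldest]:
        return new_item        # long-tail request: don't churn the disk
    del cache[coldest]
    cache[new_item] = new_score
    return coldest             # evicted in favor of more popular content

cache = {"hit-show": 0.9, "match-replay": 0.6}
print(evict_for("obscure-doc", 0.1, cache, capacity=2))      # rejected
print(evict_for("new-blockbuster", 0.8, cache, capacity=2))  # evicts coldest
```

Unlike plain LRU, a single long-tail request here never displaces content the predictor still rates as popular, which is the behavior Larbey describes.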

Gardner: Perhaps we can deepen our knowledge of all this through some examples. Do you have some examples that demonstrate how your clients and customers are taking these new technologies and making better business decisions that help their cost structure -- but also deliver a far better user experience?

In-house control

Larbey: One of our largest customers is Liberty Global, which has a large number of cable operators across a variety of countries in Europe. They were enhancing an IP service; they started out delivering it over an Internet-based CDN. But recognizing the importance of gaining more control over costs and the quality of experience, they wanted to take that in-house and put the content on a private CDN.

We worked with them to deliver that technology. One of the things they noticed very quickly, which I don't think they were expecting, was a dramatic reduction in the number of people calling in to complain because the stream had stopped or buffered. They saw a big decrease in call-center calls as soon as they switched on our new CDN technology, which is quite an interesting use-case benefit.

When they deployed a private CDN, they reached cost payback in less than 12 months.
We do a lot with Sky in the UK, which was also looking to migrate away from an Internet-based CDN service into something in-house so they could take more control over it and improve the users’ quality of experience. 

One of our customers in Canada, TELUS, reached cost payback within 12 months of deploying a private CDN, in terms of both the network savings and the Internet CDN cost savings.

Gardner: Before we close out, perhaps a look to the future and thinking about some of the requirements on business models as we leverage edge intelligence. What about personalization services, or even inserting ads in different ways? Can there be more of a two-way relationship, or a one-to-one interaction with the end consumers? What are the increased benefits from that high-performing, high-efficiency edge architecture? 

VR vision and beyond

Larbey: All of that generates more traffic -- moving from standard-definition to high-definition to 4K, to beyond 4K -- it all generates more network traffic. You then take into account a 360-degree-video capability and virtual reality (VR) services, which is a focus for Nokia with our Ozo camera, and it’s clear that the data is just going to explode.

So being able to optimize, and continue to optimize, in terms of new codec technology and new streaming technologies -- to constrain the growth of video demands on the network -- is essential; otherwise the traffic would just explode.

There is a lot of innovation going on to optimize the content experience. People may not want to watch all their TV through VR headsets; that may not become the way you want to watch the latest episode of Game of Thrones. However, maybe there will be a uniquely created piece of 360-degree content as an add-on, and the really serious fans can go and look for it. I think we will see new types of content being created to address these different use cases.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Tuesday, September 5, 2017

IoT capabilities open new doors for Miami telecoms platform provider Identidad IoT

The next BriefingsDirect Internet of Things (IoT) strategies insights interview focuses on how a Miami telecommunications products provider has developed new breeds of services to help manage complex edge and data scenarios.

We will now learn how IoT platforms and services help to improve network services, operations, and business goals -- for carriers and end users alike.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us explore what is needed to build an efficient IoT support business is Andres Sanchez, CEO of Identidad IoT in Miami. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: How has your business changed in the telecoms support industry and why is IoT such a big opportunity for you?

Sanchez: With the arrival of over-the-top (OTT) content technology, which took over part of the whole communications value chain, the telecoms business is basically getting very tough. When we began evaluating what IoT can do and seeing the possibilities, we realized this is a new wave. We understand that it's not about connectivity -- that's only 10 percent of the value chain -- it's about the solutions.

We saw a very good opportunity to start something new: to take the experience and technology we have in telecoms, bring in new people and new developers, and start building solutions. That's what we are doing right now.

Gardner: So as the voice telecoms business trails off, there is a new opportunity at the edge for data and networks to extend to a variety of use cases. What are some of the use cases you are seeing now in IoT that are growth opportunities for your business?

Sanchez: IoT is everywhere. The beauty of IoT is that you can find solutions everywhere you look. What we have found is that when people think about IoT, they think about the connected home, the connected car, or smart parking, where a light simply turns green or red depending on whether a space is occupied. But IoT is more than that.

There are two ways to generate revenue in IoT. One is by creating new products. The second is understanding what can be done better at the operational level. It's in this way that we are putting in sensors, measuring things, and analyzing things. You can reduce your operational cost, or be more effective in the way you do business. It's not only getting the information; it's using that information to automate processes so that your company runs better.

Gardner: As organizations recognize that there are new technologies coming in that are enabling this smart edge, smart network, what is it that’s preventing them from being able to take advantage of this?


Sanchez: Companies think that they just have to connect the sensors, that they only have to digitize their information. They haven't realized that they really have to go through a digital transformation. It's not about connecting the sensors that are already there; it's building a solution using that information. They have to reorganize and reinvent their organizations.

For example, it's not about putting a sensor on a machine and just watching the readings on a screen. It's taking the information and being able to spot patterns -- to predict when a machine is going to break, or when a machine starts to work better or worse at certain temperatures. It's being able to be more productive without having to do more work, letting the machines do the work by themselves.

Gardner: A big part of that is bringing more of an IT mentality to the edge, creating a standard network and standard platforms that can take advantage of the underlying technologies that are now off-the-shelf.

Sanchez: Definitely. The approach that Identidad IoT takes is we are not building solutions based on what we think is good for the customer. What we are doing is building proof of concepts (PoCs) and tailored solutions for companies that need digital transformation.

I don’t think there are two companies doing the same thing that have the same problems. One manufacturer may have one problem, and another manufacturer using the same technology has another completely different problem. So the approach we are taking is that we generate a PoC, check exactly what the problems are, and then develop that application and solution.
This is not just a change of process. This is not purely putting in new software. This is trying to solve a problem when you may not even know the problem is there. It's really digital transformation.

But it's important to understand that IoT is not an IT thing. When we go to a customer, we don’t just go to an IT person, we go to the CEO, because this is a change of mentality. This is not just a change of process. This is not purely putting in new software. This is trying to solve a problem when you may not even know the problem is there. It's really digital transformation.

Gardner: Where is this being successful? Where are you finding that people really understand it and are willing to take the leap, change their culture, rethink things to gain advantages?

One solution at a time

Sanchez: Unfortunately, people are afraid of what is coming, because they don't understand what IoT is, and everybody thinks it's really complicated. It does need expertise. It does need security -- that is a very big topic right now. But it's not impossible.

When we approach a company and that CEO, CIO, or CTO understands that the benefits of IoT will be shown once the solution is built -- and that the initial solution probably won't be the final one, but will evolve through iterations -- that's when it starts working.

If people think it’s just an out-of-the-box solution, it's not going to work. That's the challenge we are having right now. The opportunity is when the head of the company understands that they need to go through a digital transformation.


Gardner: When you work with a partner like Hewlett Packard Enterprise (HPE), they have made big investments and developments in edge computing, such as the Universal IoT Platform and Edgeline systems. How does that help you as a solutions provider make that difficult transition easier for your customers, and encourage them to understand that it's not impossible -- that there are a lot of solutions already designed for their needs?

Sanchez: Our relationship with HPE has been a huge success for Identidad IoT. When we started this company and began looking at platforms, we couldn't find the right one to fulfill our needs. We were looking for a platform on which we could build solutions, combine that data with other data, and then build further solutions on top of those.

When we approached HPE, we saw that they have a unique platform that allows us to generate whatever applications, for whatever verticals, for whatever organizations -- whether a city or a company. Even if you wanted to create a product just for end users, they have the ability to do it.

Also, it's a platform that is so robust that you know it’s going to work, it’s reliable, and it’s very secure. You can build security from the device right on up to the platform and the applications. Other platforms, they don't have that.

We think that IoT is about relationships and partnerships -- it's about an ecosystem.
Our business model correlates a lot with the HPE business model. We think that IoT is about relationships and partnerships -- it’s about an ecosystem. The approach that HPE has to IoT and to ecosystem is exactly the same approach that we have. They are building this big ecosystem of partners. They are helping each other to build relationships and in that way, they build a better and more robust platform.

Gardner: For companies and network providers looking to take advantage of IoT, what would you suggest that they do in preparation? Is there a typical on-ramp to an IoT project? 

A leap of faith

Sanchez: There's no time to be prepared right now. I think they have to take a leap of faith and start building the IoT applications. The pace of the technology transformation is incredible.

When you look at the technology right now, today -- probably in four months it's going to be obsolete. You will have even better technology, a better sensor. So if you wait -- most likely the competition is not going to wait, and they will have a very big advantage.

Our approach at Identidad IoT is platform-as-a-service (PaaS). We are helping companies take that leap without creating big financial struggles. And the companies know that, because we use the HPE platform, they are using a state-of-the-art platform -- not just a mom-and-pop platform built in a garage. It's a robust PaaS -- so why not take that leap of faith and start building? Now is the time.

Gardner: Once you pick up that success, perhaps via a PoC, that gives you ammunition to show economic and productivity benefits that then would lead to even more investment. It seems like there is a virtuous adoption cycle potential here.

Sanchez: Definitely! Once we start a new solution, the people who see it usually start noticing things that they are not used to seeing. They can pinpoint problems that they have been having for years -- but they didn't understand why.

For example, there's one manufacturer of T-shirts in Colombia. They were having issues with one specific machine. That machine used to break after two or three weeks. There was just this small piece that was broken. When we installed the sensor and we started gathering their information, after two or three breaks, we understood that it was not the amount of work -- it was the temperature at which the machine was working.

So once the temperature reached a certain point, the system automatically started some fans to normalize the temperature, and they haven't had any broken pieces for months. It was a simple solution, but it took a lot of study and gathering of information to be able to understand that break point -- and that's the beauty of IoT.
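This kind of threshold-based automation can be sketched as a small decision function. The break-point temperature, the hysteresis band, and the function names are all illustrative assumptions, not details from the actual deployment:

```python
# Toy sketch of the break-point automation described above: run the fans once
# the temperature crosses a learned threshold, and stop them only after it has
# dropped back below a hysteresis band (to avoid rapid on/off toggling).
TEMP_THRESHOLD_C = 45.0   # assumed break point, learned from sensor history
HYSTERESIS_C = 5.0        # cool-down margin below the threshold

def fan_should_run(temp_c: float, fan_on: bool) -> bool:
    if temp_c >= TEMP_THRESHOLD_C:
        return True                                   # too hot: run the fans
    if temp_c < TEMP_THRESHOLD_C - HYSTERESIS_C:
        return False                                  # safely cool: stop them
    return fan_on                                     # in the band: no change
```

A real deployment would call this in a polling loop fed by the machine's sensor and drive an actuator with the result; the point is that the "fix without human intervention" is just a rule derived from the gathered data.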

Gardner: It's data-driven, it's empirical, it’s understood, but you can't know what you don't know until you start measuring things, right?

Listen to things

Sanchez: Exactly! I always say that the “things” are trying to say something, and we are not listening. IoT enables people, companies, and organizations to start listening to the things -- and not only to listen, but to make the things work for us. We need the applications to be able to trigger something to fix the problem without any human intervention -- and that's also the beauty of IoT.

Gardner: And that IoT philosophy even extends to healthcare, manufacturing, transportation, any place where you have complexity, it is pertinent.


Sanchez: Yes, the solutions for IoT are everywhere. You can think about healthcare, tracking people, tracking guns, or building solutions for cities so that a city can understand what is triggering certain pollution levels and fix them. Or it can be in manufacturing, or even a small thing like finding your cellphone.

It’s everything that you can measure. Everything that you can put a sensor on, you can measure -- that's IoT. The idea is that IoT will help people live better lives without having to take care of the “thing;” things will have to take care of themselves.

Gardner: You seem quite confident that this is a growth industry. You are betting a significant amount of your future growth on it. How do you see it increasing over the next couple of years? Is this a modest change or do you really see some potential for a much larger market?

Sanchez: That's a really good question. I do see that IoT is the next wave of technology. There are several studies that say that by 2020 there are going to be 50 billion devices connected. I am not that futuristic, but I do see that IoT will start working now and probably within the next two or three years we are going to start seeing an incremental growth of the solutions. Once people understand the capability of IoT, there's going to be an explosion of solutions. And I think the moment to start doing it is now. I think that next year it’s going to be too late.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Wednesday, August 30, 2017

Inside story on developing the ultimate SDN-enabled hybrid cloud object storage environment

The next BriefingsDirect inside story interview explores how a software-defined data center (SDDC)-focused systems integrator developed an ultimate open-source object storage environment.

We’re now going to learn how Key Information Systems crafted a storage capability that may have broad extensibility into such realms as hybrid cloud and multi-cloud support. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

Here to help us better understand a new approach to open-source object storage is Clayton Weise, Director of Cloud Services at Key Information Systems in Agoura Hills, California. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What prompted you to improve on the way that object storage is being offered as a service? How might this become a new business opportunity for you?

Weise: About a year ago, at Hewlett Packard Enterprise (HPE) Discover, I was wandering the event floor. We had just gotten out of a meeting with SwitchNAP, which is a major data center in Las Vegas. We had been talking to them about some preferred concepts and deployments for storage for their clients.

That discussion evolved into realizing that there are a number of clients inside of Switch and their ecosystem that could make use of storage that was more locally based, that needed to be closer at hand. There were cost savings that could be gained if you have a connection within the same data center, or within the same fiber network.

Pulling data in and out of a cloud

Under this model, there would be significantly less expensive ways of pulling data in and out of a cloud, since you wouldn't have transfer fees as you normally would. There would also be advantages in privacy and in cutting latency, among other benefits, because of a private network all run by Switch through their fiber network. So we looked at this and thought it might be interesting.

In discussions with a number of groups within HPE while wandering the floor at Discover, we found that there were some pretty interesting ways that we could play games with the network to allow clients to not have to uproot the way they do things, or force them to do things, for lack of a better term, “our way.”

If you go to Amazon Web Services or you go to Microsoft Azure, you do it the Microsoft way, or you do it the Amazon way. You don’t really have a choice, since you have to follow their guidelines.

Where we saw value is in the mid-market space -- clients ranging from a couple of hundred million dollars up to maybe a couple of billion dollars in annual revenue. They generally use object storage as kind of an inexpensive way to store archival, or less-frequently accessed, data. So [the cloud storage] became an alternative to tape and long-term storage.

We've had this massive explosion of unstructured data, files, and all sorts of things. We have a number of clients in medical and finance, and they have just seen this huge spike in data.

The challenge is: To deploy your own object storage is a fairly complex operation, and it requires a minimum number of petabytes to get started. In the mid-market, they are not typically measuring their storage at the petabyte level.

These customers are more typically in the tens to hundreds of terabytes range, and so they need an inexpensive way to offload that data and put it somewhere where it makes sense. In the medical industry particularly, there's a lot of concern about putting any kind of patient data up in a public cloud environment -- even with encryption.

We thought that if we are in the same data center, and it is a completely private operation that exists within these facilities, that will fulfill the total need -- and we can encrypt the data.

But we needed a way to support such private-cloud object storage that would be multitenant. Also, we have just had better luck working with open standards. The challenge with dealing with proprietary systems is that you end up locked into a standard, and if you pick wrong, you find yourself having to reinvent everything later on.

I come from a networking background; I was an Internet plumber for many years. We saw the transition then on our side when routing protocols first got introduced. There were proprietary routing protocols, and there were open standards, and that’s what we still use today.


So we took a similar approach in object storage as a private-cloud service. We went down the open source path in terms of how we handled the provisioning. We needed something that integrated well with that. We needed a system that had the multitenancy, that understood the tenancy, and that is provided by OpenStack. We found a solution from HPE called Distributed Cloud Networking (DCN) that allows us to carve up the network in all sorts of interesting ways, and that way we don't have to dictate to the client how to run it.

Many clients are still running traditional networks. The adoption of Virtual Extensible LAN (VXLAN) and other types of SDDC within the network is still pretty low, especially in the mid-market space. So to go to a client and dictate that they have to change how they run the network is not going to work.

And we wanted it to be as simple as possible. We wanted to treat this as much as we could as a flat network. By using a combination of DCN, Altoline switches from HPE, and some of other software, we were able to give clients a complete network carrying regular Virtual Local Area Networks (VLANs) across it. We then could tie this together in a hybrid fashion, whereby the customers can actually treat our cloud environment as a natural extension of their existing networks, of their existing data centers.

Gardner: You are calling this hybrid storage as a service. It’s focused on object storage at this point, and you can take this into different data center environments. What are some of the sweet spots in the market?

Weise: The areas where we are seeing the most interest have been backup and archive. It’s an alternative to tape. The object service becomes a very inexpensive way to store large amounts of data, and unlike tape -- where it's inconvenient to access the data -- with object as a service everything is accessible very, very easily.

For customers whose backup software cannot directly integrate with that object service, we can make use of object gateways to provide a method that's more like traditional access. It looks like a file share, and whatever is written to the file share is written to the object storage, so it acts as a go-between. For backup and archive, it makes a really, really great solution.
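As a hedged sketch of how a client application might write backups to such a private object store: the transcript doesn't name the API, but many private object stores expose S3 compatibility, which boto3 can target via its `endpoint_url` parameter. The endpoint, bucket, and tenant names below are invented for illustration:

```python
# Sketch: per-tenant backup uploads to a private, S3-compatible object store.
from pathlib import PurePosixPath

def object_key(client_id: str, local_path: str) -> str:
    """Map a backup file to a per-tenant object key (keeps tenants separated)."""
    name = PurePosixPath(local_path).name
    return f"{client_id}/backups/{name}"

def upload_backup(s3_client, bucket: str, client_id: str, local_path: str) -> str:
    key = object_key(client_id, local_path)
    s3_client.upload_file(local_path, bucket, key)  # standard boto3 upload call
    return key

# Typical wiring (not executed here; endpoint and credentials are placeholders):
# import boto3
# s3 = boto3.client("s3",
#                   endpoint_url="https://objects.example-datacenter.net",
#                   aws_access_key_id="...", aws_secret_access_key="...")
# upload_backup(s3, "tenant-backups", "acme", "/var/backups/db.tar.gz")
```

Because the endpoint is a private cross-connect inside the same data center rather than a public cloud, there are no egress fees on reads -- which is what makes this attractive as a tape replacement.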

The other two areas where we've seen the most interest have been the medical space and media and entertainment. In medical, it's specifically large medical image files and archival. We're working now to build that type of solution with HIPAA compliance; we have gone through the audits and compliance verification.

The second use-case has been in the media and entertainment industry. In fact, they are the very first to consume this new system and put in hundreds of terabytes worth of storage -- they are an entertainment industry client in Burbank, California. A lot of these guys are just shuffling along on external drives.

For them it’s often external arrays, and it's a lot more Mac OS users. They needed something that was better, and so hybrid object storage as a service has created a great opportunity for them and allows them to collaborate.

They have a location in Burbank, and then they brought up another office in the UK. There is yet another office for them coming up in Europe. The object storage approach allows a kind of central repository, an inexpensive place to place the data -- but it also allows them to be more collaborative as well.

Gardner: We have had a weak link in cloud computing storage, which has been the network -- and you solved some of those issues. You found a prime use-case with backup and archival, but it seems to me that, given the storage capabilities we've seen, this has extensibility. So where might it go next in terms of a storage-as-a-service that hybrid cloud providers would use? Where can this go?

Carving up the network 

Weise: It’s an interesting question because one of the challenges we have all faced in the world of cloud is we have virtualized servers and virtualized storage, meaning there is disaggregation; there is a separation between the workload that’s running and the actual hardware it’s running on.

In many cases, and for almost all clients in the mid-market, that level of virtualization has not occurred at the network level. We are still nailed to things. We are all tied down to the cable, to the switch port, and to the human that can figure those things out. It’s not as flexible or as extensible as some of the other solutions that are out there.

In our case, when we build this out, the real magic is with the network. That improved connection might be a cost savings for a client -- especially from a bandwidth standpoint. But as you get a private cross-connect into that environment to make use of, in this case, storage as a service, we can now carve that up in a number of different ways and allow the client to use it for other things.

For example, if they want to have burst capability within the environments, they can have it -- and it’s on the same network as their existing system. So that’s where it gets really interesting: Instead of having to have complex virtual guest package configurations, and tiny networks, and dealing with some the routing of other pieces, you can literally treat our cloud environment as if it's a network cable thrown over the wall -- and it becomes just an extension of the existing network.

That opens up some additional possibilities. Some things to work on eventually would be block storage, file storage, right there existing on the same network. We can secure that traffic and ensure that there is high-performance, low-latency and complete separation of tenancy. So if you have Coke and Pepsi as clients, they will never see each other.

Gardner: Very cool. You can take this object storage benefit -- and by the way, the cost of that can be significantly lower because you don’t have egress charges and some of the other unfriendly aspects of economics of public cloud providers. But you also have an avenue into a true hybrid cloud environment, where you can move data but also burst workloads and manage that accordingly. Now, what about making this work toward a multi-cloud capability?


Weise: Right. So this is where HPE's DCN software-defined networking (SDN) really starts to shine and separates itself from the pack. We can tie environments together regardless of where they are. Whether it's a virtual endpoint or a physical appliance, it can be deployed at a remote location to act as a gateway that links everything together.

We can take a client network that's going from their environment into our environment, deploy a small virtual machine inside of a public cloud, and it will tie the networks together and allow them to treat it all as the same. The same policy-enforcement engine they use to segregate traffic -- microsegmentation and service chaining -- can be applied just as easily in the public cloud environment.

One of the reasons we went to Switch was because they have multiple locations. In the case of our object storage, we deployed the objects across all three of their data center sites, so a single repository's data is distributed among three different regions. This protects against a regional outage that could make data inaccessible -- the kind of outage we in the US have recently seen, where clients were down anywhere from 6 to 16 hours.
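The resilience model can be illustrated with a toy replication sketch. The region names and the in-memory dictionaries stand in for the real object system, whose site names and replication mechanism the transcript doesn't specify:

```python
# Toy model of three-site replication: every write fans out to all regions,
# and a read succeeds as long as any one region is still reachable.
from typing import Dict

REGIONS = ["las-vegas", "reno", "grand-rapids"]  # illustrative site names

def replicate_put(stores: Dict[str, dict], key: str, value: bytes) -> None:
    for region in REGIONS:
        stores[region][key] = value        # same object lands at all three sites

def resilient_get(stores: Dict[str, dict], key: str,
                  down: frozenset = frozenset()) -> bytes:
    for region in REGIONS:
        if region in down:
            continue                       # simulate a regional outage
        if key in stores[region]:
            return stores[region][key]
    raise KeyError(key)                    # only fails if every region is out
```

With this layout, losing an entire region costs nothing but a failover on reads, which is exactly the 6-to-16-hour outage scenario the design is meant to avoid.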

One big network, wherever you are

This eliminates that. But the nice thing is, because of the network technology they were using from HPE, it allowed us to treat that all as one big network -- and we can carve that up and virtualize it. So for clients inside of the data center -- maybe they need resources for disaster recovery or for additional backups -- it's all part of that. We can tie in from a network standpoint regardless of where you want to exist: if you are in Vegas, you may want to recover in Reno, or you may want to recover in Grand Rapids. We can make that network look exactly the same in your location.

You want to recover in AWS? You want to recover in Azure? We can tie it in that way, too. So it opens up these great possibilities that allows this true hybrid cloud -- and not as a completely separate entity.

Gardner: Very cool. Now there’s nothing wrong, of course, with Switch, but there are other fiber and data center folks out there. Some names that begin with “E” come to mind that you might want to drop in this and that should even increase the opportunity for distribution.

Weise: That’s right. This initial deployment is focused on Switch, but we do have a grand scheme to work this into other data centers. There are a handful of major data center operators out there, including the one that starts with an “E” along with another that starts with a “D.” We do have plans to expand this, using this as a success use-case.

As this continues to grow, and we get some additional momentum and some good feedback, and really refine the offering to make sure we know exactly what everything needs to be, then we can work with those other data center providers.

From the data center operators’ perspective, if you're one of those facilities, you are at war with AWS or with Azure. Because whenever clients deploy their workloads in those public clouds, that means there is equipment that has not been collocated inside one of your facilities.

So they have a vested interest in doing this, and there is a benefit to the clients inside of those facilities too because they get to live inside of the ecosystem that exists within those data centers, and the private networks that they carry in there deliver the same benefits to all in that ecosystem.

We do plan to use this hybrid cloud object storage as a service capability as a model to deploy in several other data center environments. Beyond the multitenant private cloud, a dedicated private cloud could make sense for clients with a large enough need. Once you are talking multi-petabyte scale, or thousands of virtual machines, it becomes a question of whether you should do a private cloud deployment just for you. The same technology, fulfilling the same requirements, and the same solutions could still be used.

Partners in time

Gardner: It sounds like it makes sense, on the back of a napkin basis, for you and HPE to get together and brand something along these lines and go to market together with it.

Weise: It certainly does. We've had some great discussions with them. Actually, there is a group that was popular in Europe and is now starting to grow here in the US, called Cloud28+.

We are going to be joining that, and it’s a great thing as well.

The goal is building out this sort of partner network, and HPE has been extremely supportive in working with us to do that. In addition to these crazy ideas, I also had a really crazy timeline for deployment. When we initially met with HPE and talked about what we wanted to do, they estimated that I should reserve about 6 to 8 weeks for planning and then another 1.5 months for deployment.


I said, “Great we have 3 weeks to do the whole thing,” and everyone thought we were crazy. But we actually had it completed in a little over 2.5 weeks. So we have a huge amount of thanks to HPE, and to their technical services group who were able to assist us in getting this going extremely quickly.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

