Thursday, February 26, 2015

RealTime Medicare Data delivers caregiver trend insights by taming its healthcare data

The next edition of the HP Discover Podcast Series highlights how RealTime Medicare Data analyzes huge volumes of Medicare claims data and provides analysis to its many customers on the caregiver side of the healthcare sector.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to explain how the company manages such large data requirements for quality, speed, and volume, we're joined by Scott Hannon, CIO of RealTime Medicare Data, based in Birmingham, Alabama. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about your organization and some of the major requirements you have from an IT perspective.

Hannon: RealTime Medicare Data has full census Medicare, which includes Part A and Part B, and we do analysis on this data. We provide reports that are in a web-based tool to our customers who are typically acute care organizations, such as hospitals. We also do have a product that provides analysis specific to physicians and their billing practices.

Gardner:  And, of course, Medicare is a very large US government program to provide health insurance to the elderly and other qualifying individuals.

Hannon: Yes, that’s true.

Gardner: So what sorts of data requirements have you had? Is this a volume, a velocity, or a variety type of problem, or all of the above?

Volume problem

Hannon: It’s been mostly a volume problem, because we're actually a very small company. There are only three of us in the IT department now, but it was just me as the IT department back when I started in 2007.

At that time, we had one state, Alabama, and then we began to grow. We grew to seven states, the South region: Florida, Georgia, Tennessee, Alabama, Louisiana, Arkansas, and Mississippi. We found that Microsoft SQL Server was not really going to handle the type of queries that we run against that volume of data.

Currently we have 18 states. We're loading about a terabyte of data per year, which is about 630 million claims, and our database currently houses about 3.7 billion claims.

Gardner: That is some serious volume of data. From the analytics side, what sort of reporting do you do on that data, who gets it, and what are some of their requirements in terms of how they like to get strategic benefit from this analysis?

Hannon: Currently, most of our customers are general acute-care hospitals. We have a web-based tool that has reports in it. We provide reports that start at the physician level. We have reports that start at the provider level. We have reports that you can look at by state.

The other great thing about our product is that typically providers have data on themselves, but they can't really compare themselves to the providers in their market or state or region. So this allows them to look not only at themselves, but to compare themselves to other places, like their market, the region, and the state.

Gardner: I should think that’s hugely important, given that Medicare is a very large portion of funding for many of these organizations in terms of their revenue. Knowing what the market does and how they compare to it is essential.

Hannon: Typically, for a hospital, about 40 to 45 percent of revenue depends on Medicare. The other thing that we've found is that most physicians don't change how they practice medicine based on whether it’s a Medicare patient, a Blue Cross patient, or a patient with some other private insurance.

So the insights that they gain by looking at our reports are pretty much 90 to 95 percent of how their business is going to be running.

Gardner: It's definitely mission-critical data then. So you started with a relational database, using standard off-the-shelf products. You grew rapidly, and your volume issues grew. Tell us what the problems were and what requirements you had that led you to seek an alternative.

Exponential increase

Hannon: There were a couple of problems. One, obviously, was the volume. We found that we had to increase the indexes exponentially, because we're talking about 95 percent reads on this database. As I said, Microsoft SQL Server really was not able to handle that volume as we expanded.

The first thing we tried was to move to a SQL Server Analysis Services back end. For that project, we got an outside party to help us, because we would need to completely redesign our front end to be able to query Analysis Services.

It just so happened that that project was taking way too long to implement. I started looking at other alternatives and, through pure research, happened to find Vertica. As I read about it, I thought, "I'm not sure how this is even possible with this amount of data."

So we got a trial of it. I started using it and was impressed that it actually could do what it said it could do.
Gardner: As I understand it, Vertica has a column-store architecture. Was that something you understood at the time? What is it about the Vertica approach to data that caught your attention at first, and how has that worked out for you?

Hannon: To me, the biggest advantage was the fact that it uses the standard SQL query language, so I wouldn't have to learn MDX, which is required with Analysis Services. I don’t understand the complete technical details of column storage, but I understand that it's much faster because it doesn't have to look at every single row. It can build the actual data set much faster, which gives you much better performance on the front end.

Gardner: And what sort of performance have you had?

Hannon: Typically, we've seen about a tenfold reduction in query time. Before, when we would run reports, it would take about 20 minutes. Now, they take roughly two minutes. We're very happy about that.

Gardner: How long has it been since you implemented HP Vertica, and what is some of the supporting infrastructure that you've relied on?

Hannon: We implemented Vertica back in 2010. We ended up still using Microsoft SQL Server as a querying agent, because it was much easier to keep the interface to SQL Server Reporting Services, which is what our web-based product uses, and to keep the stored-procedure functionality and the OPENQUERY feature.

So we just pull the data directly from Vertica and then send it through Microsoft SQL Server to the reporting services engine.

New tools

Gardner: I've heard from many organizations that not only has this been a speed and volume issue, but there's been an ability to bring new tools to the process. Have you changed any of the tooling that you've used for analysis? How have you gone about creating your custom reports?

Hannon: We really haven't changed the reports themselves. It's just that I know when I design a query to pull a specific set of data that I don’t have to worry that it's going to take me 20 minutes to get some data back. I'm not saying that in Vertica every query is 30 seconds, but the majority of the queries that I do use don’t take that long to bring the data back. It’s much improved over the previous solution that we were using.

Gardner: Are there any other quality issues, other than just raw speeds and feeds issues, that you've encountered? What are some of the paybacks you've gotten as a result of this architecture?

Hannon: First of all, I want to say that I didn’t have a lot of experience with Unix or Linux on the back end and I was a little bit rusty on what experience I did have. But I will tell people to not be afraid of Linux, because Vertica runs on Linux and it’s easy. Most of the time, I don’t even have to mess with it.

So now that that's out of the way: one of the biggest advantages of Vertica is that you can expand to multiple nodes to handle the load if you've got a larger client base. It’s very simple. You basically just install it on commodity hardware, with whatever flavor of Linux you prefer, as long as it’s compatible, and the installer does all the rest for you once you tell it you're doing multiple nodes.

The other thing is the fact that you have multiple nodes that allow for fault tolerance. That was something that we really didn't have with our previous solution. Now we have fault tolerance and load balancing.

Gardner: Any lessons learned, as you made this transition from a SQL database to a Vertica columnar store database? You even moved the platform from Windows to Linux. What might you tell others who are pursuing a shift in their data strategy because they're heading somewhere else?

Jump right in

Hannon: As I said before, don’t be afraid of Linux. If you're a Microsoft or a Mac shop, just don’t be afraid to jump in. Go get the free community edition or talk to a salesperson and try it out. You won't be disappointed. Since the time we started using it, they have made multiple improvements to the product.

The other thing that I learned was that with OPENQUERY, there are specific ways that you have to write the stored procedures. I like to call it "single-quote hell," because when you write OPENQUERY and you have to quote something, there are a lot of additional single quotes that you have to put in there. I learned that there was a second way of doing it that lessened that impact.
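As a rough illustration of the quoting issue, here is a sketch in T-SQL. The linked-server name (VERTICA) and the table and column names are invented for the example, and this is only one possible reading of the "second way" Hannon mentions; the general pattern of doubling single quotes inside OPENQUERY, and of using EXEC ... AT with parameter placeholders to avoid the nesting, is standard SQL Server behavior for pass-through queries.

```sql
-- OPENQUERY takes the remote query as one string literal, so every single
-- quote inside it must be doubled (the "single-quote hell" described above).
SELECT *
FROM OPENQUERY(VERTICA,
    'SELECT provider_id, SUM(paid_amount) AS total_paid
       FROM claims
      WHERE state = ''AL''   -- doubled quotes around the literal
      GROUP BY provider_id');

-- EXEC ... AT sends pass-through SQL to the same linked server but accepts
-- ? parameter placeholders, so the literal no longer needs nested quoting.
EXEC ('SELECT provider_id, SUM(paid_amount) AS total_paid
         FROM claims
        WHERE state = ?
        GROUP BY provider_id', 'AL') AT VERTICA;
```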

Gardner: Okay, good. And we're here at HP Discover. What's interesting for you to learn here at the show and how does that align with what your next steps are in your evolution?

Hannon:  I'm definitely interested in seeing all the other capabilities that Vertica has and seeing how other people are using it in their industry and for their customers.

Gardner: In terms of your deployment, are you strictly on-premises for the foreseeable future? Do you have any interest in pursuing hybrid or cloud-based deployments for any of your data services?

Hannon: We actually use a private cloud, which is hosted at TekLinks in Birmingham. We've been that way ever since we started, and that seems to work well for us, because we basically just rent rack space and provide our own equipment. They have the battery backup, power backup generators, and cooling.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Tuesday, February 24, 2015

Columbia Sportswear sets torrid pace for reaping global business benefits from software-defined data center

The next BriefingsDirect innovator case study interview shines a light on how Columbia Sportswear has made a successful journey to the software-defined data center (SDDC).

Through our panel discussion at the recent VMworld 2014 Conference in San Francisco, we explore how retailer Columbia Sportswear has made great strides in improving their business results through modernized IT, and where they expect to go next with their software-defined strategy.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the new wave of IT, we sat down with Suzan Pickett, Manager of Global Infrastructure Services at Columbia Sportswear in Portland, Oregon; Tim Melvin, Director of Global Technology Infrastructure at Columbia; and Carlos Tronco, Lead Systems Engineer at Columbia Sportswear. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: People are familiar with your brand, but they might not be familiar with your global breadth. Tell us a little bit about the company, so we appreciate the task ahead of you as IT practitioners.

Pickett: Columbia Sportswear is in its 75th year. We're a leader in global manufacturing of apparel, outdoor accessories, and equipment. We're distributed worldwide and we have infrastructure in 46 locations around the world that we manage today. We're very happy to say that we're 100 percent virtualized on VMware products.

Gardner: And those 46 locations, those aren't your retail outlets. That's just the infrastructure that supports your retail. Is that correct?

Pickett: Exactly. Our retail footprint in North America is around 110 retail stores today. We're looking to expand that over the next few years through our joint venture in China with Swire, a distributor of Columbia Sportswear products.

Gardner: You're clearly a fast-growing organization, and retail itself is a fast-changing industry. There’s lots going on, lots of data to crunch -- gaining more inference about buyer preferences --  and bringing that back into a feedback loop. It’s a very exciting time.

Tell me about the business requirements that you've had that have led you to reinvest and re-energize IT. What are the business issues that are behind that?

Global transformation

Pickett: Columbia Sportswear has been going through a global business transformation. We've been refreshing our enterprise resource planning (ERP). We had a green-field implementation of SAP. We just went live with North America in April of this year, and it was a very successful go-live. We're 100 percent virtualized on VMware products and we're looking to expand that into Asia and Europe as well.

So, with our global business transformation, also comes our consumer experience, on the retail side as well as wholesale. IT is looking to deliver service to the business, so they can become more agile and focused on engineering better products and better design and get that out to the consumer.

Gardner: To be clear, your retail efforts are not just brick and mortar. You're also doing it online and perhaps even now extending into the mobile tier. Any business requirements there that have changed your challenges?

Pickett: Absolutely. We're really pleased to announce, as of summer 2014, that Columbia Sportswear is an AirWatch customer as well. So we get to expand our end-user computing and our VMware Horizon footprint as well as some of our SDDC strategies.

We're looking at expanding not only our e-commerce and brick-and-mortar, but being able to deliver more mobile platform-agnostic solutions for Columbia Sportswear, and extend that out to not only Columbia employees, but our consumer experience.

Gardner: Let’s hear from Tim about your data center requirements. How does what Suzan told us about your business challenges translate into IT challenges?

Melvin: With our business changing and growing as quickly as it is, and with us doing business and selling directly to consumers in more than 100 countries around the world, our data centers have to be adaptable. Our data and our applications have to be secure and available, no matter where we are in the world, whether you're on network or off-premises.

The SDDC has been a game-changer for us. It’s allowed us to take those technologies, host them where we need them with whatever cost configuration makes sense, whether it’s in the cloud or on-premises, and deliver the solutions that our business needs.

Gardner: Let's do a quick fact-check in terms of where you are in this journey to SDDC. It includes a lot. There are management aspects, network aspects, software-defined storage, and then of course mobile. Does anybody want to give me the report card on where you are in terms of this journey?

100 percent virtualized

Pickett: We're 100 percent virtualized with our compute workloads today. We also have our storage well-defined with virtualized storage. We're working on an early adoption proof of concept (POC) with VMware's NSX for software-defined networking.

It really sets up our next step in defining our SDDC: being able to leverage all of our virtual workloads, extend them into the vCloud Air hybrid cloud, and burst our workloads to expand our data centers and our toolsets. So we're looking forward to the next step of our journey, which is software-defined networking via NSX.

Gardner: Taking that network plunge, what about the public-cloud options for your hybrid cloud? Do you use multiple public clouds, and what's behind your choice on which public clouds to use?

Melvin: When you look at infrastructure and the choice between on-premise solutions, hybrid clouds, public and private clouds, I don't think it's a choice necessarily of which answer you choose. There isn't one right answer. What’s important for infrastructure professionals is to understand the whole portfolio and understand where to apply your high-power, on-premises equipment and where to use your lower-cost public cloud, because there are trade-offs in each case.

When we look at our workloads, we try to present the correct tool for the correct job. For instance, for our completely virtualized SAP environment we run that on internal, on-premises equipment. We start to talk about development in a sandbox, and those cases are probably best served in a public cloud, as long as we can secure and automate, just like we can on-site.

Gardner: As you're progressing through SDDC and you're exploring these different options and what works best both technically and economically in a hybrid cloud environment, what are you doing in terms of your data lifecycle? Is there a disaster recovery (DR) element to this? Are you doing warehousing in a different way and distributing that, or are you centralizing it? I know that analysis of data is super important for retail organizations. Any thoughts about the data component of this overall architecture?

Pickett: Data is really becoming a primary concern for Columbia Sportswear, especially as we get into more analytical situations. Today, we have our two primary data centers in North America, which we do protect with VMware’s vCenter Site Recovery Manager (SRM), a very robust DR solution.

We're very excited to work with an enterprise-class cloud like vCloud Air that has not only the services that we need to host our systems, but also DR as a service, which we're very interested in pursuing, especially around our remote branch office scenarios. In some of those remote countries, we don't have that protection today, and it will give a little more business continuity or disaster avoidance, as needed.

As we look at data in our data centers, our primary data centers with big data, if you will, and/or enterprise data warehouse strategies, we've started looking at how we're replicating the data where that data lives. We've started getting into active data center scenarios -- active, active.

We're really excited around some of the announcements we've heard recently at VMworld around virtual volumes (VVOLs) and where that’s going to take us in the next couple of years, specifically around vMotion over long-distance. Hopefully, we'll follow the sun, and maybe five years from now, we'll able to move our workloads from North America to Asia and be able to take those workloads and have them follow where the people are using them.

Geographic element

Gardner: That’s really interesting about that geographic element if you're a global company. I haven't heard that from too many other organizations. That’s an interesting concept about moving data and workloads around the world throughout the day.

We've seen some recent VMware news around different types of cloud data offerings, Cloud Object Store for example, and moving to a virtual private cloud on demand. Where do you see the next challenges in terms of your organization, and how do you feel that VMware is setting the goal posts for you?

Tronco: The vCloud Air offerings that we've heard so much about are an exciting innovation.

Public clouds have been available for a long time. There are a lot of places where they make sense, but vCloud Air, being an enterprise-class offering, gives us the management capability and allows us to use the same tools that we would use on-site.

It gives us the control that we need in order to provide a consistent experience to our end-users. I think there is a lot of power there, a lot of capability, and I'm really excited to see where that goes.

Gardner: How about some of the automation issues with the vRealize Suite, such as vRealize Air Automation? Where do you see the component of managing all this? It becomes more complex when you go hybrid. It becomes, in one sense, more standardized and automated when you go software-defined, but you also have to have your hands on the dials and be able to move things.

Tronco: One of the things that we really like about vCloud Air is the fact that we'll be able to use the same tools on-premises and off-premises, and won't have to switch between tools or dashboards. We can manage that infrastructure whether it's on-premises or in the public cloud, and we'll be able to leverage the efficiencies we have on-premises in vCloud Air as well.

We also can take advantage of some of those new services, like ObjectStore, that might be coming down the road, or even continuous integration (CI) as a service for some of our development teams as we start to get more into a DevOps world.

Customer reactions

Gardner: Let’s tie this back to the business. It's one thing to have a smooth-running, agile IT infrastructure machine. It's great to have an architecture that you feel is ready to take on your tasks, but how do you translate that back to the business? What does it get for you in business terms, and how are you seeing reactions from your business customers?

Pickett: We're really excited to be partnering with the business today. As IT comes out from underground a little bit and starts working more with the business and understanding their requirements -- especially with tools like VMware vRealize Automation, part of the vCloud Suite -- we're now partnering with our development teams to become more agile and help them deliver faster services to the business.

We're working on one of our e-commerce order-confirmation toolsets with vRealize Automation, part of the vCloud Suite. The development team can now package and replicate the work they're doing, rather than reinventing the wheel every time we build out an environment or they need to run a test or a development script.

By partnering with them and enabling them to be more agile, IT wins. We become more services-oriented. Our development teams are winning, because they're delivering faster to the business and the business wins, because now they're able to focus more on the core strategies for Columbia Sportswear.

Gardner: Do you have any examples that you can point to where there's been a time-to-market benefit, a time-to-value faster upgrade of an application, or even a data service that illustrates what you've been able to deliver as a result of your modernization?

Pickett: Just going back to the toolset that I just mentioned. That was an upgrade process, and we took that opportunity to sit down with our development team and start socializing some of the ideas around VMware vRealize Automation and vCloud Air and being able to extend some of our services to them.

At the same time, our e-commerce teams are going through an upgrade process. So rather than taking weeks or months to deliver this technology to them, we were able to sit down, start working through the process, automate some of those services that they're doing, and start delivering. So, we started with development, worked through the process, and now we have quality assurance and staging and we're delivering product. All this is happening within a week.

So we're really delivering and we're being more agile and more flexible. That’s a very good use case for us internally from an IT standpoint. It's a big win for us, and now we're going to take it the next time we go through an upgrade process.

We've had this big win and now we're going to be looking at other technologies -- Java, .NET, or other solutions -- so that we can deliver and continue the success story that we're having with the business. This is the start of something pretty amazing, bringing development and infrastructure together and mobilizing what Columbia Sportswear is doing internally.

Gardner: Of course, we call it SDDC, but it leads to a much more comprehensive integrated IT function, as you say, extending from development, test, build, operations, cloud, and then sourcing things as required for a data warehouse and applications sets. So finally, in IT, after 30 or 40 years, we really have a unified vision, if you will.

Any thoughts, Tim, on where that unification will lead to even more benefits? Are there ancillary benefits from a virtuous adoption cycle that come to mind from that more holistic whole-greater-than-the-sum-of-the-parts IT approach?

Flexibility and power

Melvin: The closer we get to a complete software-defined infrastructure, the more flexibility and power we have to remove the manual components, the things that we all do a little differently and we can't do consistently.

We have a chance to automate more. We have the chance to provide integrations into other tools, which is actually a big part of why we chose VMware as our platform. They allow such open integration with partners that, as we start to move our workloads more actively into the cloud, we know that we won't get stuck with a particular product or a particular configuration.

The openness will allow us to adapt and change, and that’s just something you don't get with hardware. If it's software-defined, it means that you can control it and you can morph your infrastructure in order to meet your needs, rather than needing to re-buy every time something changes with the business.

Gardner: Of course, we think about not just technology, but people and process. How has all of this impacted your internal IT organization? Are you, in effect, moving people around, changing organizational charts, perhaps getting people doing things that they enjoy more than those manual tasks? Carlos, any thought about the internal impact of this on your human resources issues?

Tronco: Organizationally, we haven’t changed much, but the use of something like vRealize Automation allows us to let development teams do some of those tasks that they used to require us to do.

Now, we can do it in an automated fashion. We get consistency. We get the security that we need. We get the audit trail. But we don’t have to have somebody around on a Saturday for two minutes of work spread across eight hours. It also lets those application teams be more agile and do things when they're ready to do them.

Having that time free lets us do a better job with engineering, look down the road better with a little more clarity, maybe try some other things, and have more time to look at different options for the next thing down the road.

Melvin: Another point there is that, in a fully software-defined infrastructure, while it may not directly translate into organizational changes, it allows you to break down silos. Today, we have operations, system storage, and database teams working together on a common platform that they're all familiar with and they all understand.

We can all leverage the tools and configurations. That's really powerful. When you don't have the network guys sitting off doing things different from what the server guys are doing, you can focus more on comprehensive solutions, and that extends right into the development space, as Carlos mentioned. The next step is to work just as closely with our developers as we do with our peers and infrastructure.

Gardner: It sounds as if you're now also in a position to be more fleet. We all have higher expectations as consumers. When I go to a website or use an application, I expect that I'll see the product that I want, that I can order it, that it gets paid for, and then track it. There is a higher expectation from consumers now.

Is that part of your business payback that you tie into IT? Is there some way that we can define the relationship between that user experience for speed and what you're able to do from a software-defined perspective?

Preventing 'black ops'

Pickett: As an internal service provider for Columbia Sportswear, we can do it better, faster, and cheaper on-premises and with our toolsets from our partners at VMware. This helps prevent "black ops" situations, for example, where someone goes out to another cloud provider outside the parameters and guidelines of IT.

Today, we're partnering with the business. We're delivering that service. We're doing it at the speed of thought. We're not in a position where we're saying "no," "not yet," or "maybe in a couple of weeks," but "Yes, we can do that for you." So it's a very exciting position to be in that if someone comes to us or if we're reaching out, having conversations about tools, features, or functionality, we're getting a lot of momentum around utilizing those toolsets and then being able to expand our services to the business.

Tronco: Using those tools also allows us to turn around things faster within our development teams, to iterate faster, or to try and experiment on things without a lot of work on our part. They can try some of it, and if it doesn’t work, they can just tear it down.

Gardner: So you've gone through this journey and you're going to be plunging in deeper with software-defined networking. You have some early-adopter chops here. You guys have been bold and brave.

What advice might you offer to some other organizations that are looking at their data-center architecture and strategy, thinking about the benefits of hybrid cloud, software-defined, and maybe trying to figure out in which order to go about it?

Pickett: I'd recommend that, if you haven’t virtualized your workloads, you get them virtualized. We're in a no-limits situation. There are no longer restrictions or boundaries around virtualizing your mission-critical or tier-one workloads. Get it done, so you can start leveraging the portability and flexibility that come with it.

Start looking at the next steps, which will be automation, orchestration, provisioning, and service catalogs, and extend that into a hybrid-cloud situation, so that you can focus more on what your core offerings and core strategies are going to be. And not necessarily offload, but take advantage of some of the capabilities that you can get in VMware vCloud Air, for example, so that you can focus on what's really core to your business.

Gardner: Tim, any words of advice from your perspective?

Melvin: When it comes to solutions in IT, the important thing is to find the value and tie it back to the business. So look for those problems that your business has today, whether it's reducing capital expense through heavy virtualization, whether it's improving security within the data center through NSX and micro-segmentation, or whether it's just providing more flexible infrastructure through the cloud for temporary environments, like sandbox and software-development environments.

Find those opportunities and tie it back to a value that the business understands. It’s important to do something with software-defined data centers. It's not a trend and it's not really even a question anymore. It's where we're going. So get moving down that path in whatever way you need to in order to get started. And find those partners, like VMware, that will support you and build those relationships and just get moving.

20/20 hindsight

Gardner: Carlos, advice, thoughts about 20/20 hindsight?

Tronco: As Suzan said, it's focusing on virtualizing the workloads and then being able to leverage some of those other tools, like vRealize Automation. Then you're able to free up staff to pursue activities that add more value to the environment and the business, because you're not doing repeatable things manually. You get more consistency, and people have time because they're not tied up doing all these day-two, day-three operations, the things that wear and grate on you.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.

You may also be interested in:

Monday, February 23, 2015

How Tunisian IT service provider Tunisie Electronique uses cloud for improved IT service management capabilities

The next edition of the HP Discover Podcast Series explores how a Tunisian IT services provider improves their IT service management (ITSM) offerings and capabilities leveraging cloud-based services.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about better IT control and efficiency using the latest ITSM tools and services, we are joined by Fadoua Ouerdiane, IT Projects Director at SMS and Tunisie Electronique in Tunis, Tunisia. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us a little bit about Tunisie Electronique.

Ouerdiane: Tunisie Electronique has been a systems integrator for multiple vendors, including HP, for more than 40 years. We serve customers of all sizes, covering almost every sector.

Gardner: Tell us a little bit about the challenges that you're facing. What problems are you trying to solve for your customers?

Continuous development

Ouerdiane: Support activity is the pillar of our company. We're in a continuous development process to fulfill our customer expectations. As a solution integrator for HP and others, we are the first interface toward our end customers.

Ouerdiane
We're asked to be as reactive as possible to all kinds of customer requests: incidents, claims, and service support. The number of such requests grows daily.

Gardner: There are an awful lot of IT challenges nowadays. People are doing more on mobile devices. They're doing more services from the cloud. Things are changing very rapidly. Therefore, they also have higher expectations about speed for solutions. Tell us about what you're putting in place in order to better serve these very complex needs.

Ouerdiane: Knowing how to manage those requests, consolidating, delegating to relevant resources, escalating, following up, and making sure that service-level agreements (SLAs) are respected are all crucial for our support department.
In the past, we tried to manage those needs using in-house development tools and then open-source solutions. However, in each case we were confronted by various limitations. Finally, we decided to use the Service Anywhere solution from HP in software-as-a-service (SaaS) mode, installed in a cloud environment.

Gardner: Why has the cloud environment delivery model been so important? What are the benefits for you in going to cloud rather than on-premises?

Ouerdiane: Our motivation was that Service Anywhere not only offers functionality that perfectly matches our needs, it also has other advantages. The first is easy deployment: my team made the solution available in less than one month. No extra infrastructure is needed, which means no administration effort, and we get high availability. This helps us reduce costs effectively.

Gardner: Do you have any sense of what this brings? What do you get in return for this in terms of metrics of success and business benefits? How have you been able to measure how well this is performing for you?

Ouerdiane: Today, using HP Service Anywhere, the support department is much better managed. It's bringing a lot of added value for the support team as well as our end customers.

Information is systematically shared with the relevant people, thanks to the Service Anywhere notification functionality. There's better access using any device, at any time, from anywhere, and better tracking of each incident or support request. The main benefit is the improved customer satisfaction that we have felt and experienced.

Customer reaction

Gardner: Have you gotten any feedback? Do you have examples of what people tell you they like about it? How are your customers actually reacting to this new approach?

Ouerdiane: The customer no longer needs to send emails, to make calls, to get updated about the status and progress of its requests. Reports and dashboards are provided on a regular basis. Customer satisfaction is our main target and daily concern. Service Anywhere is bringing us closer.
Gardner: What do you think you will be doing in the future to provide even better IT services and support?

Ouerdiane: The next HP ITSM cloud release will be available soon with very important features, such as a multi-tenant feature, which we need. We'll work on the platform to add more content, to add all our customers’ content and support contacts.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Thursday, February 19, 2015

Kony executive Burley Kawasaki on best tips for attaining speed in enterprise mobile apps delivery

The next BriefingsDirect enterprise mobile strategy discussion comes to you directly from the Kony World 2015 Conference on Feb. 4 in Orlando.

This five-part series of penetrating discussions on the latest in enterprise mobility explores advancements in applications design and deployment technologies across the full spectrum of edge devices and operating environments.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

For our next interview we welcome Burley Kawasaki, Senior Vice President of Products at Kony, Inc. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Before we explore the Kony World news, what's going on in the enterprise mobility marketplace? What are enterprises looking for in their mobility strategy?

Kawasaki: Obviously, mobility has proven that it’s not just a passing fad. It's really evolved over the last four or five-plus years. Initially, most companies were just trying to get one or two apps out in the public app store.

Kawasaki
Many started with some type of branded consumer app, what are called business-to-consumer (B2C) applications, and they were willing to make the investment to give them a fantastic user experience. They would try to make this a way for customers to experience and engage the brand. A lot of times you saw these being built and launched by the marketing organization inside an enterprise.

Now, what we're seeing is a shift. As people look for the next set of ways to exploit mobility, they're looking internally, inside their enterprise. They're looking at what I refer to as business-to-employee (B2E) applications.

But instead of one or two apps, there are literally dozens or hundreds of mobilized processes and applications that most larger enterprises are looking to build as they examine all their internal processes. It could be mobilizing sales, supporting field technicians out in the field, or providing self-service access to vacation requests.

There are a number of challenges this creates. One is lack of skills. If you're building one or two, you can probably muster the technical expertise or you can outsource and hire an agency or someone to build it. If you're looking to supply dozens -- some larger enterprises are looking at hundreds of internal-facing mobile apps -- that really highlights the imbalance between the demand from the business stakeholders and the supply of IT skills, resources, and technical talent.

Build applications quickly

Kony, since day one, has focused on how to drive faster and faster acceleration of the full development process. That's part of our core value proposition of rapidly delivering great mobile apps by providing tools and platforms to help build applications more quickly.

When we talk about building anything custom, there is a certain amount of time, typically three to six months that you spend, not just for the development, but to map out the requirements to do all the testing and final deployment. And with any custom software development, you can only compress it so far, and there's a certain amount of skills and expertise that you need.

To answer your question, we think that there needs to be other types of models for ultimately creating these internal mobile applications. The trend that you're starting to see, and that we believe is really going to take off, is a move away from custom, bespoke development of each and every app, to much more of an assembly and configuration model.

If you look at building a home, for example, there was a time where you had to custom build all of the parts to your home. You would go out, cut down the trees, and do everything from scratch, but that was a hugely inefficient process.

Now, essentially, homes are componentized. You can find standard-size lumber. Large parts of your home may be prefabricated, and it's just a matter of assembling and configuring them to meet your needs.

We've seen the same assembly across a number of industries, like the auto industry. Many industries have realized the benefits of moving to assembly and configuration, as opposed to custom built.

We're seeing this in software as well. There was a day where everyone used to build their own enterprise resource planning (ERP) system or their own sales automation system. Now, people have moved to the configuration of packaged software. Mobile applications are now at the tipping point where they need to have a different way that will address the explosion in demand that I was describing.

There are a couple of things that we think are required to create this new model. One is that you need to have an ecosystem that provides pre-built components. Obviously, you can't assemble things if there is nothing to assemble from. So there needs to be an ecosystem of components.

Then, there needs to be some type of tooling that allows you to assemble the components without having to be a developer, but more of a visual drag and drop type of composition experience.

And then once you have done that, it can't just be a pretty picture. It needs to actually somehow run and make its way down to your phone or to your device. So there has to be some type of execution or dynamic run capability behind the description of what you have created.

Those are the three requirements. Of course, we have just announced this week some software that addresses each of those categories.

Major announcements

Gardner: Well, let's delve into them a little bit. There were three major announcements around your Marketplace, your Modeler, and also an example of how these come together in your first prepackaged application called the Kony Sales App.

Kawasaki: I'll talk about each of these. I'll start with the Marketplace. As I said, to make this practical and useful for our customers, we need to create a way to find and discover pre-built components. Some of these components Kony may build ourselves, but we're also working with a number of very talented, leading-edge partners of ours, independent software vendors (ISVs) and systems integrators (SIs), who are also contributing prebuilt components.

This week, Feb. 4, we launched Marketplace. If you go out to community.kony.com/marketplace, you can browse. We're adding partners on an ongoing basis, but you'll see some of the early solutions that are available in the Marketplace. That’s the first part of the announcement.

The second piece is around how to assemble these into an actual application, with a new product called Kony Modeler. Unlike some of our prior products, which are developer tools, it does not require a development background.

The typical profile of a Kony Modeler user would be a business analyst or someone closer to the business: someone who knows how to drag and drop to define what the end-user experience should be for the mobile app, how to describe the process or workflow that has to occur, and how to take the forms they've painted and map them to back-end business data coming from a system like SAP or Salesforce.

As long as you can do that, you don’t have to be a developer and drop into code. You can describe this visually. You can drag and drop. Then, when you're done, the important thing is that it’s not just a picture that you print out and you throw over the wall to your developer. This description of your application then gets pushed out instantaneously to our cloud run time.

We've extended our backend-as-a-service, which we call Kony MobileFabric, so that it takes this model, this description of the mobile app, downloads it to your device, and runs it. Then, the next time you use one of these apps as an end-user, you automatically get whatever changes or updates have been made. You don't have to go out to an app store and find a new app; the changes are just automatically part of your app.

As an analogy, it's the same as using any software-as-a-service (SaaS) application: I don't have to install a new app on my laptop. I just go to my web browser, and the next time I log in, it's always up to date.
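The publish-and-refresh flow Kawasaki describes, where the cloud runtime holds the app description and the device picks up a newer version on its next use, can be sketched roughly as follows. This is a hypothetical illustration, not Kony MobileFabric's actual API; the endpoint, payload shape, and `version` field are all assumptions:

```python
import json
import urllib.request

def fetch_model(url):
    """Download the latest app-model description from the cloud runtime.
    (URL and payload shape are illustrative, not a real Kony endpoint.)"""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def needs_update(cached, current):
    """True on first run, or whenever the server's model is newer."""
    return cached is None or current["version"] > cached["version"]

def choose_model(cached, current):
    """Return the model description the app should render from."""
    return current if needs_update(cached, current) else cached

# On launch, a client would do roughly:
#   current = fetch_model("https://fabric.example.com/apps/sales/model")
#   model = choose_model(load_local_cache(), current)
```

The end-user never installs anything: the app shell simply re-renders from whichever description `choose_model` returns.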

Gardner: It sounds as if this has some of the greater elements of platform-as-a-service (PaaS), but the tooling is designed for that business-analyst level. It also gives you some of those benefits of rapid iterations. You can change and adjust. You can customize to different types of user within the group that you're targeting. And all of this, I assume, is at also low cost, given that it's a SaaS based approach. Tell us a little bit about why this is like PaaS, but PaaS-plus.

Non-developer experience

Kawasaki: PaaS typically has been targeted primarily toward developers. And it’s maybe a higher level productivity for developers, but you still have to write code against software development kits (SDKs) or other application programming interfaces (APIs). Kony Modeler provides a non-developer experience.

The other big thing, and you pointed it out, is that it really does lower all of the infrastructure, hardware, and software costs that are required, because it’s purely cloud-based. It makes it not only lower cost from a total cost of ownership (TCO) standpoint, but it also accelerates the whole development cycle.

I think about this as a shift away from a classic waterfall-type model, to much more of an agile model. In the old model, you spend three to six months trying to go through and nail the requirements and hand it off to your dev team. Then, they go off, and you find out, only when it's in final QA, that it doesn't look right on the device, or it comes back and the business has changed their mind. That never happens, right?

Modeler allows you to very quickly iterate a working application to release in a matter of days and be able to do testing with your end-users. Based on their feedback, I can make updates on an agile basis and continuously iterate on functionality or enhancements to the application.

Gardner: Burley, it also sounds like you're able to bring A/B testing type activities to a different class of user, where you don't always know what your requirements are precisely, but you can throw things on the wall, try them out, see what works, and iterate on that. I don’t recall too much of that capability being available to a business analyst type of user.

Kawasaki: You're correct. Usually, there is this very extended process, where a business analyst has to document everything in some thick specification, and even if you have it wrong or you are uncertain, whatever you communicate out to the dev team is what they go off and build.

So it’s not that this does away with requirements, but it does allow more flexibility to change or to test. And I'd agree. I think the responsiveness will allow much more experimentation and innovation. It's better to fail fast. If you have tried something out and it's not delivering the results, you haven't invested a huge amount of time and cost to learn that.

Gardner: And another appealing aspect of this for IT and operations is that this isn't shadow IT. This is under the auspices of IT. They can bring in governance. They can audit as necessary and make sure the right backend sources are being accessed in the right way, with the right privilege and access controls. They can monitor security. We talked about how it's better than PaaS, but it's also better than shadow IT for a lot of reasons.

Lack of skills

Kawasaki: It is. We were talking earlier about the skills shortage, and if you look at the stats or the data, most industry analysts predict that up to 60 percent or 70 percent or more of mobile development is outsourced today, to either an interactive agency, a systems integrator, or someone else, because of lack of skills.

So it has been outsourced to some third party, and who knows what technologies they are using to build the app. It's outside the typical controls or governance of IT. So it's not only shadow; it's dark matter. You don't even know it exists; it’s completely hidden.

Yet, at some point, those apps that you may have outsourced inevitably have to come back into IT. It's not just a first-version release; you want to update an app, sometimes monthly, and it's connecting and talking to enterprise data in the back end and to other IT-controlled systems. So there's a huge amount of risk and cost associated with these things being completely hidden, off the grid.

Gardner: Let's take this from the abstract to the concrete. We actually have an application now in play called the Kony Sales App. Who is that targeted to, how does it work, and what do you expect to be some of the proof-point metrics of this in usage, compared to how organizations conduct themselves with customer relationship management (CRM), especially if there are multiple CRMs in play in an organization?

Kawasaki: That's a great point. First of all, this is the first of a series of what we call ready-to-run applications. And the reason we call it ready-to-run is that it's a packaged app. This isn't a custom or bespoke app; it's pre-connected and pre-integrated with the common back ends that most companies are using for CRM, something like Salesforce or SAP.

So it comes ready to run, but like packaged software or SaaS software, it allows you the ability to configure and customize it, because everyone’s sales processes or their user base is going to be different. That's where the Modeler tool allows you to configure it.

So when you purchase Kony Sales, you get not only the application, but the use of Kony Modeler to be able to customize and configure it. And then, as you make changes, you push it live, and again, it deploys using the SaaS model you were describing.

To talk a little bit more about Kony Sales: we think it's a new style of mobile app, what I refer to as a micro app. Historically, people thought of CRM software (and I'm overgeneralizing) as big, somewhat monolithic applications.

One of the historical challenges with CRM usage is that you had to bring your laptop with you, and sales reps are notorious for not completing data in a timely fashion. It takes a lot of top-down mandates from sales leadership to get data into the system so you can get accurate reporting. It's one of the age-old problems.

We believe that instead of trying to cram the whole CRM application down onto a four-inch screen, with all the complexity that requires, you should target very specific, action-oriented micro apps that a sales rep can use very quickly on the go, without a lot of training and without a lot of thought. They can very quickly look up and see their accounts, or they can very quickly log a call they have made.

So we've taken a task-oriented approach and created a modular micro app approach that really is meant to be very easy and engaging for the end-user, which in this case is a sales rep.

User experience

Gardner: And again, for the understanding of how this all works across multiple endpoints, regardless of what your sales force is using for their mobile device, this is going to come down. They are going to get that user experience and that interface that the craftsmen behind the app demanded and designed.

Kawasaki: That's right. Kony Sales is multi-channel. It works across phones, tablets, iOS, Android, and importantly, it does not replace your existing CRM data. It extends the CRM systems you already have, but makes them much, much easier to very quickly get access to.

We also use mobile-first types of approaches. By that I mean that if you're a sales rep, very likely you're on the road or in an airplane. How many people have tried to use a CRM client, even some of the mobile web experiences, to get data into Salesforce or SAP? It's all web-based, HTML5-based, and it doesn't work if you're not online.

One of the things we designed in from day one was that you have to be able to operate in an "occasionally connected" mode. So if you're offline, either because you're out in the field talking to your customer or because you're in an airplane, you can still have the same easy access. Then, when you're connected again, it will synchronize and handle updating SAP or Salesforce in the background.
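The "occasionally connected" behavior described here is essentially a local write queue that drains when connectivity returns. A minimal sketch, with `push_to_crm` standing in for whatever call would update Salesforce or SAP (the names and shapes are assumptions, not Kony's implementation):

```python
from collections import deque

class OfflineQueue:
    """Record CRM changes locally while offline; flush when connected."""

    def __init__(self, push_to_crm):
        self._pending = deque()   # changes not yet acknowledged by the back end
        self._push = push_to_crm  # e.g. a function that POSTs to the CRM
        self.online = False

    def record(self, change):
        """Always succeeds locally, even with no connection."""
        self._pending.append(change)
        if self.online:
            self.flush()

    def set_online(self, online):
        """Called by connectivity monitoring; going online triggers a sync."""
        self.online = online
        if online:
            self.flush()

    def flush(self):
        """Replay pending changes in order; stop if the network drops again."""
        while self._pending:
            try:
                self._push(self._pending[0])
            except OSError:
                self.online = False   # retry on the next reconnect
                return
            self._pending.popleft()
```

The key property is that `record` never fails for the user; synchronization is deferred and retried in the background, matching the behavior Kawasaki describes.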

Gardner: Now that we have the model of the Modeler, the Marketplace and these ready-to-run apps, what comes next -- more apps, bigger marketplace, or is there another technology shoe to drop?

Kawasaki: It's more apps certainly, and not just from Kony, but from our partners. When we did some of our initial planning and research, the most commonly mobilized processes were ones that were customer facing or customer impacting, just because of the benefits and the ROI.

So we started with sales. We're going to release our next one, which will be around field service. It really helps engage at the point that you're supporting and serving your customer.

There are a set of these that we are working on, but I think also importantly, we're working on really making our partner ecosystem trained, ready to use Modeler, and to build very unique and differentiated applications to publish to the marketplace.

We have a couple of examples of these ready-to-run apps that are compelling from our partners that you will hear more about, and that list will continue to grow over the coming weeks and months.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

 You may also be interested in:

Wednesday, February 18, 2015

Mexican ISP Telum gains operational advantages via better monitoring across vast network elements

The next edition of the HP Discover Podcast Series delves into how Telum in Northeast Mexico improves their ISP services delivery reliability through quality assurance and higher availability using advanced monitoring software.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about how they have matured their process, technology and IT culture, we are joined by Max Garza O'Ward, Head of IT Operations at Telum, an ISP based in Monterrey, Mexico. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Why are reliability and high performance so important to your business?

O'Ward
O'Ward: The telecommunications industry is very competitive, and we're not the top dog in Mexico or even in Northeast Mexico. Everything we put into our customer effort goes toward bringing in customers that we can keep. So reliability is a key part of our commitment to our customers.

Gardner: And you're not just using technology. You're delivering technology. So it seems essential to have a handle on what you have, what it's doing, and maybe even get out in front and have predictive capabilities when problems might arise.

O'Ward: That's very true. Prediction is where we need to focus. To ensure good services, we need to make sure that all those systems are up and running, and we use software to do precisely that.

Gardner: For our listeners who might not be familiar with your organization, tell us about your size, how many subscribers, how many services. Just give us a description of your organization, both in terms of the breadth of services and the size of your audience?

O'Ward: We're part of the Northeast Mexico market, basically Monterrey, which is the biggest city up north. We have three different customer markets. The residential market is roughly over 500,000 customers. We have small business or SOHO businesses, with between 3,000 and 6,000 customers. And we also have a large enterprise market, around 1,500 large enterprises.

Unwieldy network

Gardner: Let’s dig a little bit into the problems that you face. Several years ago, you were looking at a network that was perhaps a bit unwieldy, maybe not well-defined. You had some difficulties predicting how certain things that you did on your network would impact your customers. Perhaps you can walk us through your problem set, your challenges, and then how you started to solve them.

O'Ward: That's a very good approach. In terms of the network, we started noticing that we were experiencing a lot of unplanned outages, or unplanned downtime. So we started to reinforce our monitoring solutions, based on HP software, which gave us a better view of the network from a network-element point of view.

Based on that, we refurbished our inventory and made sure that all of our network elements were replaced promptly, based on events. So prediction was key to our better service-level agreement (SLA) offerings.

Gardner: Max, was this a function of changing just the software or was there a cultural component to this? Did you have to change the way you were thinking about monitoring and quality assurance in addition to employing some new technology?

O'Ward: Yes, it was a cultural change. As a matter of fact, just two years ago we revamped the way the operations department is composed. A big gap was closed because of culture; the culture needed to change.
Previously, we had all these disparate teams, each working only on its own solution. Once we came under one head of operations, we decided that service was the only thing that matters. So we bridged that gap, and now we have all these cross-functional teams working toward the same result, which is service offerings.

Gardner: So IT service management (ITSM) has led to the ability to maintain your quality and performance. Are there any indicators of how much -- perhaps the number of failures from one period to more recent failures?

O'Ward: There are a lot of numbers. I will give you top figures. IT is the department that I head, and most of these departments are based on different engineering groups.

When we started working toward service and focusing only on services (video services, for example), we had over 10 percent failures globally, not every month but throughout the year. Once we got under this new management and began using our new HP tools, we have been bringing that number down consistently.

Now, it's a combination of culture, teamwork, and understanding where the failures are. Sometimes software tells us where the problem is and sometimes software is needed to understand where the problem is.

In this particular case, we soon understood what the problem was, and we decided to change out equipment that was failing, due either to obsolescence or to defective parts.

Transparency and visibility

Gardner: In addition to changing culture, putting in some better processes and better tools, it seems to me that for a lot of companies that I speak to, a lot of the process involves getting to know yourself better, providing transparency and visibility.

Then, it's dashboarding that information so that people can access it, regardless of whether it’s firefighting or just ongoing maintenance. Tell me about this journey from having a lot of elements, perhaps not always visible, to getting this new-found ability to have greater inventory control.

O'Ward: To start off, transparency is key. Once you have an approach of letting the upper management know where your failures are, that creates concern. And in order for us to create business, we need to have a reputation to uphold.

We started by monitoring the basic elements. We created awareness of where our failures were, and at the same time we asked for more budget to address all these defective parts.
That, in turn, made management very aware of what the engineering departments were actually doing -- either as an IT department or as engineering by itself, which is basically hardware.

Once we had all these components, and they were publicly scrutinized or shown in a quarterly meeting, that helped create a dashboard. Now, dashboards are really fun if you know what you're talking about, but if you give upper management the wrong information, wrong decisions are going to be made. So that's key.

We're working on creating a huge dashboard. Maybe this year is going to be the year. We have the elements and we're providing that information for the dashboard to be built, but we are waiting to do the next step.

Right now, we're focused on getting the elements straightened out, monitoring all of our key systems, and we have done just that in the last year. So we've upheld our end of the bargain, which is service, quality, and capacity. The next step is going to be providing automatic dashboards. Right now, dashboards are manual.
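The step from manual to automatic dashboards usually begins with rolling raw element statuses up into the few figures a panel displays. A minimal sketch, assuming the monitored elements export a simple name/status record (the data shape is an assumption for illustration, not Telum's actual feed):

```python
from collections import Counter

def dashboard_summary(elements):
    """Roll raw element statuses up into the figures a dashboard panel shows.

    `elements` is a list of dicts like {"name": ..., "status": "up"/"down"},
    an assumed shape for whatever the monitoring tools export.
    """
    counts = Counter(e["status"] for e in elements)
    total = len(elements)
    return {
        "total": total,
        # Percentage of elements currently up, or None if nothing is monitored
        "availability_pct": round(100 * counts["up"] / total, 1) if total else None,
        # Named list of failing elements, so the dashboard can drill down
        "down_elements": [e["name"] for e in elements if e["status"] == "down"],
    }
```

A scheduled job could run this against the monitoring export and publish the result, replacing the manual quarterly roll-up O'Ward describes.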

Gardner: So the good news is that you're getting much more reliable information about what's going on. The bad news is that you now have a whole lot of information coming in.

O'Ward: That's correct.

Aligning data

Gardner: Big data is a big topic here at HP Discover. What are your thoughts about how to get that data, be it structured or unstructured, into an alignment so that you can improve on your situation, know more about it, get better predictions, and better analysis? I suppose the capstone for this is how important will big data become for you to maintain and improve on your reliability over time.

O'Ward: Big data is a big name, it's a big trend, and everybody is talking about it. A lot of people, especially people who aren't technology-oriented, talk about it as if they know it. The way big data is coming into our shop is focused more on customers.

If we're talking about big data, the unstructured data is coming in from our traps, alerts, and the like. Yes, we need to go into that particular scenario. We're looking at two different projects.

We're going to look into a big-data project that actually brings capacity and quality for our services. At the same time, there's going to be another effort from big data that is a customer-facing effort. So yes, it’s going to be a reality in the next year.

Gardner: So it’s safe to say that big data is going to have an impact on your IT operations, but perhaps also in your marketing, to understand what’s going on in the field very quickly and then be able to react to it. Big data sort of ties together business and technology.

O'Ward: That's correct. That’s the way we're looking at it. As I said, there are two different teams of people working on it. We're going to be working on the operations part first and then at the marketing part as well.

Gardner: We're here at the beginning of HP Discover. Is there anything in particular that you're going to be looking for in terms of how to accomplish your goals over the next several years? What would you like to see HP doing?

O’Ward: Very much what they have been doing in the past. The software is awesome, just great software, and if you have the right people and the right potential, that software can bring you very good benefits.

Our head of operations for the whole company is here with us this week. I'm going to make sure he attends all these meetings in which we can talk about big data and how we can map out all of the strengths and all of the key performance indicators (KPIs) that he needs. I hope that HP continues to be an innovative software company. I have really enjoyed working with them for the last five or six years.
Gardner: Okay, last question. Going from a failure rate of 5-10 percent down to less than 1 percent is enviable. A lot of people want to make those kinds of strides. Now that you have had experience in doing this, do you have any 20/20 hindsight? What would you suggest to other organizations that are also trying to get a better handle on their systems and their network, get to know their inventory, and gain visibility? What have you learned that you might share now that you have been through it?

O'Ward: It doesn't matter how much we monitor things or how many green lights or red lights we see on any given dashboard. If we're not focused on business processes and business outcomes, this isn't going to work.

My take would be to focus on a business process that you know is critical and start from that. Go top-down from there. That would be the best approach. It has worked for us. It bridges the gap between management and the engineering departments. It also provided us with sound budgeting information. Once you understand what the problem really is, the budget gets approved more easily.

So look at business processes first, get to know your business outcomes, and work on that toward your infrastructure.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in: