Monday, July 22, 2013

HP Vertica architecture gives massive performance boost to toughest BI queries for Infinity Insurance

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

The next edition of the HP Discover Performance Podcast Series highlights how Infinity Insurance Companies in Birmingham, Alabama, has been deploying a new data architecture -- native column store databases -- to improve productivity for their analysis and business intelligence (BI) queries.

To learn more about how Infinity has improved their performance and their results for their business analytics, BriefingsDirect interviewed Barry Ralston, Assistant Vice President for Data Management at Infinity Insurance Companies. The discussion, which took place at the recent HP Discover 2013 Conference in Las Vegas, is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Learn more about the upcoming Vertica conference in Boston Aug. 5.]

Among other findings, Ralston and his team have seen a 100-times improvement in their top 12 worst-performing, longest-running queries when moving from a row-store-based Oracle Exadata implementation to a column-store-based HP Vertica deployment. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: What was it that you've been doing with your BI and data warehousing that prompted you to seek an alternative?

Ralston: Like many companies, we have constructed an enterprise data warehouse deployed to a row-store technology. In our case, it was initially Oracle RAC and then, eventually, the Oracle Exadata engineered hardware/software appliance.

Ralston
We were noticing that analysis that typically occurs in our space wasn’t really optimized for execution via that row store. Based on my experience with Vertica, we did a proof of concept with a couple of other alternative and analytic store-type databases. We specifically chose Vertica to achieve higher productivity and to allow us to focus on optimizing queries and extracting value out of the data.

Gardner: What does Infinity Insurance Companies do? How big are you, and how important is data and analysis to you?

Ralston: We are a billion-dollar property and casualty company, headquartered in Birmingham, Alabama. Like any insurance carrier, data is key to what we do. But one of the things that drew me to Infinity, after years of being in a consulting role, was their determination to use data as a strategic weapon -- not just IT as a whole, but data specifically, within that larger IT, as a strategic or competitive advantage.

Vertica environment

Gardner: You have quite a bit of internal and structured data. Tell me a bit what happened when you moved into a Vertica environment, first in the proof of concept phase and then into production?

Ralston: For the proof of concept, we took the most difficult, worst-performing queries from our Exadata implementation and moved that entire enterprise data warehouse set into a Vertica deployment on three dual-hex-core, DL380-type machines, running at the same scale, with the same data and the same queries.

We took the top 12 worst-performing queries or longest-running queries from the Exadata implementation, and not one of the proof of concept queries ran less than 100 times faster. It was an easy decision to make in terms of the analytic workload, versus trying to use the Oracle row-store technology.

Gardner: Let’s dig into that a bit. I'm not a computer scientist and I don’t claim to fully understand the difference between row store, relational, and the column-based approach for Vertica. Give us the quick "Data Architecture 101" explanation of why this improvement is so impressive? [Learn more about the upcoming Vertica conference in Boston Aug. 5.]

Ralston: The original family of relational databases -- the current big three being Oracle, SQL Server, and DB2 -- is based on what we call row-storage technologies. They store information in blocks on disk, writing an entire row at a time.

If you had a record for an insured, you might have the insured's name, the date the policy went into effect, the date the policy next shows a payment, and so on. All those attributes are written at the same time, in series, to a row, which is combined into a block.
It’s an optimal way of storing data for transaction processing.

So storage has to be allocated in a particular fashion to facilitate things like updates. It's an optimal way of storing data for transaction processing. For now, it's probably the state of the art for that. If I'm running an accounting system or a quote system, that's the way to go.

Analytic queries are fundamentally different than transaction-processing queries. Think of the transaction processing as a cash register. You ring up a sale with a series of line items. Those get written to that row store database and that works well.

But when I want to know the top 10 products sold to my most profitable 20 percent of customers in a certain set of regions in the country, those set-based queries don’t perform well without major indexing. Often, that relates back to additional physical storage in a row-storage architecture.
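A set-based analytic query of the kind Ralston describes can be sketched in plain Python. The table and the customer, product, and region values here are invented for illustration, not drawn from any real schema; the point is that every step scans the whole fact table:

```python
from collections import defaultdict

# Toy fact table: (customer, product, region, amount) -- illustrative data only.
sales = [
    ("alice", "widget", "west", 500.0),
    ("alice", "gadget", "west", 300.0),
    ("bob",   "widget", "east", 120.0),
    ("carol", "gizmo",  "west", 900.0),
    ("dave",  "widget", "east", 80.0),
]

# Step 1: rank customers by total spend and keep the top 20 percent.
spend = defaultdict(float)
for cust, _, _, amount in sales:
    spend[cust] += amount
ranked = sorted(spend, key=spend.get, reverse=True)
top = set(ranked[: max(1, len(ranked) // 5)])

# Step 2: within a region set, total product sales for those customers.
regions = {"west"}
by_product = defaultdict(float)
for cust, prod, region, amount in sales:
    if cust in top and region in regions:
        by_product[prod] += amount

# Top 10 products for the most profitable customers in the chosen regions.
top_products = sorted(by_product.items(), key=lambda kv: kv[1], reverse=True)[:10]
```

Both passes touch every row; on a row store, each of those rows drags all of its columns off disk unless an index happens to cover the query, which is the penalty being described.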

Column store databases -- Vertica is a native column store database -- store data fundamentally differently than those row stores. We break a record down into its individual columns and store each column distinctly. This allows me to do a couple of different things at an architectural level.

Sort, compress, organize

First and foremost, I can sort, compress, and organize the data on disk much more efficiently. Compression has recently been added to row-storage architectures, but in a row-storage database, you largely have to compress the entirety of a row.

I can’t choose an optimal compression algorithm for just a date, because in that row, I'll have text, numbers, and dates. In a column store, I can apply a specific compression algorithm to the data in each column. So a date gets one algorithm; a monotone increasing key, like a surrogate key you might have in a dimensional data warehouse, gets a different encoding algorithm; and so on.
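The per-column encoding idea can be sketched like this. These two functions are a minimal illustration of why different column types want different encodings -- they are not Vertica's actual internals:

```python
# Hypothetical per-column encodings -- illustrative, not Vertica's internals.

def delta_encode(keys):
    """A monotone increasing key column compresses to a start value plus tiny deltas."""
    return [keys[0]] + [b - a for a, b in zip(keys, keys[1:])]

def dict_encode(values):
    """A low-cardinality text column compresses to a dictionary plus small codes."""
    symbols = sorted(set(values))
    index = {s: i for i, s in enumerate(symbols)}
    return symbols, [index[v] for v in values]

surrogate_keys = [1001, 1002, 1003, 1004, 1005]
states = ["AL", "GA", "AL", "FL", "GA"]

print(delta_encode(surrogate_keys))  # [1001, 1, 1, 1, 1]
print(dict_encode(states))           # (['AL', 'FL', 'GA'], [0, 2, 0, 1, 2])
```

Neither encoding would make sense applied to a whole row of mixed text, numbers, and dates, which is exactly the constraint a row store faces.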

That's sorting. How data gets retrieved is fundamentally different as well -- another pain point for row-storage databases at query time. I could say, "Tell me all the customers that bought a product in California, but I only want to know their last name."

If I have 20 different attributes, a row-storage database actually has to read all the attributes off disk. The query engine eliminates the ones I didn’t ask for from the eventual results, but I've already incurred the penalty of the input/output (I/O). This has a huge impact when you think of things like call detail records in telecom, which have 144-some-odd columns.

If I ask a column store database, "Give me all the people who bought a product in California, but only their last names," I'm essentially asking the database to read two columns off disk, and that’s all that happens. My I/O is improved by an order of magnitude -- or, in the case of the CDR, I'm reading 2 of 144 columns.
The great question is what ends up being the business value.
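The I/O arithmetic here can be made concrete with a toy byte count. The 144-column figure mirrors the CDR example above; the row count and value size are invented for illustration:

```python
# Toy I/O model: bytes read to answer "last names of California buyers"
# from a 144-column table. Row count and value size are invented.
NUM_ROWS = 1_000_000
NUM_COLS = 144
BYTES_PER_VALUE = 8

# Row store: every row drags all 144 columns off disk.
row_store_bytes = NUM_ROWS * NUM_COLS * BYTES_PER_VALUE

# Column store: only the two referenced columns (last_name, state) are read.
col_store_bytes = NUM_ROWS * 2 * BYTES_PER_VALUE

print(row_store_bytes // col_store_bytes)  # prints 72 -- 72x fewer bytes read
```

And that ratio is before per-column compression, which widens the gap further.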

Gardner: You can’t just go back and increase your I/O improvements in those relational environments by making it in-memory or cutting down on the distance between the data and the processing? That only gets you so far, and you can only throw hardware at it so much. So fundamentally, it’s all about the architecture.

Ralston: Absolutely correct. You've seen a lot of these -- I think one of the fun terms around this is "unnatural acts with data," as to how data gets either scattered or put into a cache or other things. Every time you introduce one of these mechanisms, you're putting another bottleneck between near real-time analytics and getting the data from a source system into a user’s hands for analytics. Think of a cache. If you’re going to cache, you’ve got to warm that cache up to get an effect.

If I'm streaming data in from a sensor, real-time location servers, or something like that, I don’t get a whole lot of value out of the cache to start until it gets warmed up. I totally agree with your point there, Dana, that it’s all about the architecture.

In short, in leveraging Vertica, the underlying architecture allows me to create a playing field, if you will, for business analysts. They don’t have to be data scientists to use it and to relate items that have a business relationship to each other, even if that relationship isn't reflected in the data model, for whatever reason.
Performance suffers

Obviously, in a row-storage architecture, and specifically within dimensional data warehouses, if there is no index between a pair of columns, your performance begins to suffer. Vertica creates no indexes; it self-indexes the data via sorting and encoding.
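The "self-indexing" effect of sorted storage can be sketched with a binary search over a sorted column -- a deliberate simplification of what a column store does with sorted, encoded data, using an invented policy-date column:

```python
import bisect

# A sorted column acts like its own index: no separate B-tree is needed.
# Hypothetical column of policy effective dates, stored sorted (ISO strings).
effective_dates = sorted(["2013-01-15", "2013-02-01", "2013-02-20",
                          "2013-03-05", "2013-04-11", "2013-05-30"])

def rows_in_range(column, lo, hi):
    """Locate the contiguous slice of row positions with lo <= value < hi."""
    start = bisect.bisect_left(column, lo)
    stop = bisect.bisect_left(column, hi)
    return range(start, stop)

# All policies effective in February 2013, found with no index structure.
hits = rows_in_range(effective_dates, "2013-02-01", "2013-03-01")
print(list(hits))  # [1, 2]
```

Because the column is stored in sorted order, a range predicate resolves to a contiguous slice of row positions in logarithmic time, which is why no separate index has to be built or maintained.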

So if I have an end user who wants to analyze something that’s never been analyzed before, but has a semantic relationship between those items, I don’t have to re-architect the data storage for them to get information back at the speed of their decision.

Gardner: What about opening this up to some new types of data and/or giving your users, the folks in the insurance company, the opportunity to run external types of queries and learn more about markets, where they can apply new insurance products and grow the top line?

Ralston: That's definitely part of our strategic plan. Right now, 100 percent of the data being leveraged at Infinity is structured. We're leveraging Vertica to manage all that structured data, but we have a plan to leverage Hadoop and the Vertica Hadoop connectors, based on what I'm seeing around HAVEn -- the idea of being able to seamlessly work with structured and unstructured data from one point.
Then, I’ve delivered what my CIO is asking me in terms of data as a competitive advantage.

Insurance is an interesting business in that, as my product and pricing people look for the next great indicator of risk, we essentially get to ride a wave of that competitive advantage for as long a period of time as it takes us to report that new rate to a state. The state shares that with our competitors, and then our competitors have to see if they want to bake into their systems what we’ve just found.

So we can use Vertica as a competitive hammer, Vertica plus Hadoop to do things that our competitors aren’t able to do. Then, I’ve delivered what my CIO is asking me in terms of data as a competitive advantage.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Tuesday, July 16, 2013

Hackett research points to big need for spot buying automation amid general B2B procurement efficiency drive

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ariba, an SAP Company.

This latest BriefingsDirect podcast, from the recent 2013 Ariba LIVE Conference in Washington, D.C., explores the rapid adoption of better means for companies to conduct so-called spot buying -- a more ad-hoc and agile, yet managed, approach to buying products and services.

We'll examine new spot-buying research from The Hackett Group on the latest and greatest around agile procurement of low-volume purchases, and we'll learn how two companies are benefiting from making spot buying a new competency.

The panel consists of Kurt Albertson, Associate Principal Advisor at The Hackett Group in Atlanta; Ian Thomson, Koozoo’s Head of Business Development, based in San Francisco; and Cal Miller, Vice President of Business Development for Blue Marble Media in Atlanta. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Ariba, an SAP company, is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: How did we get to the need for tactical sourcing, and how did we actually begin dividing tactical and strategic sourcing at all?

Albertson: When you look at enterprises out there, our Key Issues Study for 2013 identified the top priority as profitability. So companies are continuing to focus on the profitability objective.

Customer satisfaction

The second slot was customer satisfaction, and you can view customer satisfaction as external customers, but also internal customers and the satisfaction around that.

Albertson
With that as the overlay in terms of the two most important objectives for the enterprise --  the third, by the way, is revenue growth -- let’s cascade down to why tactical sourcing or spot buying is important.

The importance comes from those two topics. Companies are continuing to drive profitability, which means continuing to take cost out. Most mature organizations have very robust and mature strategic-sourcing processes in place. They've hired very seasoned category managers to run those processes, and they want them focused on the most valuable categories of spend, where you want to align your most strategic assets.

On the other side of that equation, you have this transactional stuff. Someone puts through a purchase order, where procurement has very little involvement. The requisitioners make the decision on what to buy and they go out and get pricing. Purchasing’s role is to issue a purchase order, and there is no kind of category management or expense management practice in place.

That’s been the traditional approach by organizations, this two-tiered approach to procurement. The issue, however, comes when you have your category managers trying to get involved in spend where it’s not necessarily strategic, but you still want some level of spend management applied to it. So you've got these very seasoned resources focused on categories of spend that aren’t necessarily where they can add the biggest bang for the buck.
It's putting in place a better model to support that type of spend, so your category managers can go off and do what you hired them to do.

That’s what caused this phenomenon around spot buy, or tactical buy -- taking this middle ground of spend, which our research shows is about 43 percent of spend on average. More importantly, sometimes more than half the transactional activity comes through it. So it's putting in place a better model to support that type of spend, so your category managers can go off and do what you hired them to do.

Gardner: And that 43 percent, does that cut across large companies as well as smaller ones?

Albertson: The 43 percent is an average, and there are going to be variances in that, depending on the industry, spend profile, and scale of the company, as you noted. Companies need to look at their spend, get the spend analytics in place to understand what they're buying to nail down the value proposition around this.

Smaller companies generally aren't going to have the maturity in place in terms of managing their spend. They're not going to have the category-manager capabilities in place. In all likelihood, they could be handling a much higher percentage of their spend through a more transactional nature. So for them, the opportunity might even be greater.

Cycle time

When we think about the reasons for doing spot buying, profitability was one reason, but customer service was the other, and customer service translates into cycle time.

That’s usually the issue with this type of spend. You can’t afford to have a category manager take it through a strategic sourcing process, which can take anywhere from six to 30 weeks.

People need this tomorrow. They need it in a week, and so you need a mechanism in place to focus on shorter cycle times and meet the needs of the customers. If you can’t do that, they're just going to bypass procurement, go do their own thing, and apply no rigor of spend management against that.
If we think about the reasons for doing this, profitability was one, but customer service was the other, and customer service translates into cycle time.

It's a common misperception that the 43 percent of influenced spend we would consider tactical is all emergency buys. A lot of it isn’t. A large percentage of it is more category-specific types of purchases, but companies just don’t have the preferred suppliers or the category expertise in place to go out, identify suppliers, and manage that spend. And it falls under the standard thresholds that companies might have for sending something through strategic sourcing.

Gardner: Let’s go to some organizations that are grappling with these issues. First, Koozoo. Ian, tell us a little bit about Koozoo and how spot buying plays a role in your life.

Thomson: Koozoo is a technology startup based in San Francisco. We're venture-backed and we've made it very easy to share your view using an existing device. You take an old mobile phone, and we can convert that, using our software application, into a live-stream webcam.

Thomson
In terms of efficiency, we're like many organizations, but as a start-up, in particular, we're resource constrained. I'm also the procurement manager, as it turns out. It’s not in my job title, but we needed to find something fast. We were launching a product and we needed something to support it.

It wasn’t a catalog item, and it wasn’t something I could find on Amazon. So I looked for some suppliers online and found somebody that could meet our need within two weeks, which was super important, as we were looking at a launch date.

More developed need

I had gone to Alibaba and looked at who Alibaba’s competitors were. Ariba Discovery came up as one of them. So that’s pretty much how I ran into it.

I think I "spot buyed" Ariba in order to spot buy. I tested Alibaba, and to be fair, it was not a very clean approach. I got a lot of messy inbound responses when I asked for what I thought was a relatively simple request.

There were things that weren’t meeting my needs. The communication wasn’t very easy on Alibaba, maybe because of the international nature of the would-be suppliers.

Gardner: Let’s go to Cal Miller at Blue Marble Media. First, Cal, tell us a bit about Blue Marble and why this nature of buying is important for you?

Miller: Blue Marble is a very small company, but we develop high profile video, film, motion graphics, and animation. We came to be involved with Ariba about three years ago. We were selected as a supplier to help them with a marketing project. The relationship grew, and as we learned more about Ariba, someone said, "You guys need to be on the Discovery Network program." We did, and it was a very wise decision, very fortunate.

Miller
Gardner: Are you using the spot buying and Discovery as a way of buying goods or allowing others to buy your goods in that spot-buying mode or both?

Miller: Our involvement is almost totally as a seller. In our business, at least half of our clients are in a spot-buy scenario. It’s not something they do every month or even every year. We even have Fortune 500 companies that will say they need to do a series of videos and haven’t done one for three years. So for whoever gets assigned to start that project, it is a spot buy, and we're hopeful that they'll find us and we'll get that opportunity. So spot buying is a real strategy for us and for developing our revenue.

Gardner: You found therefore a channel in Ariba through which people who are in this ad-hoc need to execute quickly, but not with a lot of organization and history to it, can find you. How did that compare to other methods that you would typically use to be found?

Miller: Actually, there is very little comparison. The batting average, if you will, is excellent. The quality of people who are coming out to say, "We would like to meet you" is outstanding. Most generally, it’s a C-level contact. What we find is the interaction allows for a real relationship-development process. So even if we don’t get that particular opportunity, we're secure as one of their shortlisted go-to people, and that’s worth everything.

Gardner: Kurt Albertson, when you listen to both a buyer and a seller, it seems to me that there is a huge untapped potential for organizing and managing spot buying in the market.

Finding new customers

Albertson: Listening to Cal talk about Blue Marble’s experience, certainly from a business-development perspective, it’s another tool that I'm sure Cal appreciates in terms of going out and finding new customers.

Listening to Ian talk about it from the buy side is interesting. You have users like Ian who don’t have a mature procurement organization in place, and this is a tool they're using to go out and drive their procurement process.

But then, on the other end of that scale, you do have large global companies as well -- as I talked about, large global companies that haven’t done a good job of managing what we would consider tactical spend, which again is about 43 percent of influenced spend.

For them, while they have built out very robust procurement organizations to manage the more strategic spend, it’s this 43 percent of influence spend that’s sub-optimized. So it’s more of an evolution of their procurement strategy to start putting in place the capabilities to address that chunk of spend that’s been sub-optimized.
There is a very strong business case for going out and putting in place the capabilities to address the spend.

Gardner: Tell us a bit more about your research. Were there any other findings that would benefit us, as we try to understand what spot buying is and why it should be important to more buyers and sellers?

Albertson: The first question that everyone generally tends to ask when trying to build out a new type of capability is what’s the return on that. Why would we do this? We have already talked about the issue of longer cycle times that occur, if you try to manage the spend through a traditional kind of procurement process and the dissatisfaction that causes. But the other option is to just let the requesters do what they want, and you don’t drive any kind of spend management practices around it.

When we look at the numbers, Dana, typically going through a traditional strategic-sourcing process with highly skilled category managers, on average you'll drive just over 6 percent savings on that spend. Whereas, if you put in place more of a tactical spot-buy type process, the savings you will drive is less -- 4.3 percent on average, according to our research.

So there's a little bit of a delta by putting it through the more formal process. But the important thing is that if you look at the return, you're obviously not spending as much time, and you don't need resources as mature and experienced to support that spend. So the investment is less. The return on investment that you get from a tactical process, as opposed to the more strategic process, is actually higher.

There is a very strong business case for going out and putting in place the capabilities to address the spend. That’s the question that most organizations will ask -- what is the return on the investment?

Gardner: Are all the procurement providers, service providers jumping on this? Is Ariba in front of the game in any way?

Process challenges

Albertson: There are some challenges with this process. If you look at Ariba, they evolved from the front end of the sourcing process, built out capabilities to support it, and have a lot of maturity in that space.

The other thing that they have built out is the networked community. If you look at tactical buying and spot buying, both of these are extremely important. First of all, you want a front-end eRFx process that you can quickly enable, so you can quickly go out with a standard methodology and go to the market with standard requirements.

But the other component is that you need a network of a whole bunch of suppliers out there that you can then send that to. That’s where Ariba’s strength lies: they have built out a very large network, the largest network out there for suppliers and buyers to interact.

And that’s really the most significant advantage that Ariba has in this space -- that network of buyers and suppliers, so they can very quickly go out and implement a supplier discovery type of execution and identify particular suppliers.

We may call this tactical spend, but it’s still important to the people who are going out within the companies and looking for what they're trying to get, a product or service. There needs to be a level of due diligence against these suppliers. There needs to be a level of trust. Compare that to doing a Google search and going out there and just finding suppliers. The Ariba Network provides that additional level of comfort and trust and prequalification of suppliers to participate in this process.
For the larger organizations, the bigger bang for the buck for them is going after and getting control over the strategic spend.

You're going to find companies coming at it from both ends. The smaller, less mature organizations from a procurement perspective are going to come at it from a primary buying and sourcing channel, whereas for the larger organizations, the bigger bang for the buck for them is going after and getting control over the strategic spend.
Again, we're in an environment right now, particularly for the larger organizations, where everyone is trying to continue to evolve the value proposition. Strategic category managers are moving into supplier-relationship management and innovation -- how they collaborate with suppliers to drive innovation.
We all know that across the G&A functions, including procurement, significant new investments of resources are not being made. So the only way they're going to be able to do that is to extract themselves from this kind of tactical activity and build out a different type of capability internally, including leveraging solutions like Ariba and the Supplier Discovery capability to help facilitate that buy, so that those category managers can continue to evolve the value they provide to the business.

Cloud model

Gardner: It seems that the cloud model really suits this spot-buying and tactical-buying approach very well. You log on, the network can grow rapidly, and buyers and sellers can participate in this networked economy. Is this something that wouldn’t have happened 5 or 10 years ago, when we only looked at on-premise systems? Is the cloud a factor in why spot buying works now?

Albertson: Obviously, one of the drivers of this is how quickly you can get up to speed and start leveraging the technology, enabling the spot-buy tactical-sourcing capabilities that you're building.
One of the drivers of this is how quickly can you get up to speed and start leveraging the technology.

Then on the supply end, one of the driving forces is to enable as many suppliers and as many participants into this environment. That is going to be one of the key factors that determines success in this area, and certainly a software-as-a-service (SaaS) model works better for accomplishing that than an on-premise model does.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ariba, an SAP Company.

You may also be interested in:

Friday, July 12, 2013

The Open Group conference emphasizes healthcare as key sector for ecosystem-wide interactions improvement

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

This latest BriefingsDirect discussion, leading into The Open Group Conference on July 15 in Philadelphia, brings together a panel of experts to explore how new IT trends are empowering improvements, specifically in the area of healthcare. We'll learn how healthcare industry organizations are seeking large-scale transformation and what are some of the paths they're taking to realize that.

We'll see how improved cross-organizational collaboration and such trends as big data and cloud computing are helping to make healthcare more responsive and efficient.

The panel: Jason Uppal, Chief Architect and Acting CEO at clinicalMessage; Larry Schmidt, Chief Technologist at HP for the Health and Life Sciences Industries; and Jim Hietala, Vice President of Security at The Open Group. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

This special BriefingsDirect thought leadership interview comes in conjunction with The Open Group Conference, which is focused on enterprise transformation in the finance, government, and healthcare sectors. Registration to the conference remains open. Follow the conference on Twitter at #ogPHL. [Disclosure: The Open Group and HP are sponsors of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Let’s take a look at this very interesting and dynamic healthcare sector. What, in particular, is so special about healthcare and why do things like enterprise architecture and allowing for better interoperability and communication across organizational boundaries seem to be so relevant here?

Hietala: There’s general acknowledgement in the industry that, inside of healthcare and inside the healthcare ecosystem, information either doesn’t flow well or it only flows at a great cost in terms of custom integration projects and things like that.

Fertile ground

From The Open Group’s perspective, it seems that the healthcare industry and the ecosystem really is fertile ground for bringing to bear some of the enterprise architecture concepts that we work with at The Open Group in order to improve, not only how information flows, but ultimately, how patient care occurs.

Gardner: Larry Schmidt, similar question to you. What are some of the unique challenges that are facing the healthcare community as they try to improve on responsiveness, efficiency, and greater capabilities?

Schmidt: There are several things that have not really kept up with what technology is able to do today.

For example, the whole concept of personal observation comes into play in what we would call "value chains" that exist right now between a patient and a doctor. We look at things like mobile technologies and want to be able to leverage that to provide additional observation of an individual, so that the doctor can make a more complete diagnosis of some sickness or possibly some medication that a person is on.

We want to be able to see that observation in real life, as opposed to having to take that in at the office, which typically winds up happening. I don’t know about everybody else, but every time I go see my doctor, oftentimes I get what’s called white coat syndrome. My blood pressure will go up. But that’s not giving the doctor an accurate reading from the standpoint of providing great observations.

Technology has advanced to the point where we can do that in real time using mobile and other technologies, yet the communication flow, that information flow, doesn't exist today, or is at best, not easily communicated between doctor and patient.
There are plenty of places that additional collaboration and communication can improve the whole healthcare delivery model.

If you look at the ecosystem, as Jim offered, there are plenty of places that additional collaboration and communication can improve the whole healthcare delivery model.

That’s what we're about. We want to be able to find the places where the technology has advanced, where standards don’t exist today, and just fuel the idea of building common communication methods between those stakeholders and entities, allowing us to then further the flow of good information across the healthcare delivery model.

Gardner: Jason Uppal, let’s think about what, in addition to technology, architecture, and methodologies can bring to bear here? Is there also a lag in terms of process thinking in healthcare, as well as perhaps technology adoption?

Uppal: I'm going to refer to a presentation that I watched from a very well-known surgeon from Harvard, Dr. Atul Gawande. His point was that, in the last 50 years, the medical industry has made great strides in identifying diseases, drugs, procedures, and therapies, but one thing that he was alluding to was that medicine forgot about cost -- everything has a cost.

At what price?

Today, in his view, we can cure a lot of diseases and lot of issues, but at what price? Can anybody actually afford it?

His view is that if healthcare is going to change and improve, it has to come from outside the medical industry. The tools we have today, like the collaborative tools available for us to use, are better, and those are the ones he recommended we explore further.

That is where enterprise architecture is a powerful methodology to use and say, "Let’s take a look at it from a holistic point of view of all the stakeholders. See what their information needs are. Get that information to them in real time and let them make the right decisions."

Therefore, there is no reason for the health information to be stuck in organizations. It could go with where the patient and providers are, and let them make the best decision, based on the best practices that are available to them, as opposed to having siloed information.

So enterprise-architecture methods are most suited for developing a very collaborative environment. Dr. Gawande was pointing out that, if healthcare is going to improve, it has to think about it not as medicine, but as healthcare delivery.

Gardner: And it seems that there are challenges not only in terms of technology adoption, but even in operating more like an efficient business in some ways. We also have very different climates from country to country, jurisdiction to jurisdiction. There are regulations, compliance, and so forth.

Going back to you, Larry, how important of an issue is that? How complex does it get because we have such different approaches to healthcare and insurance from country to country?

Schmidt: There are definitely complexities that occur based on the different insurance models and how healthcare is delivered across and between countries, but some of the basic and fundamental activities in the past that happened as a result of delivering healthcare are consistent across countries.

As Jason has offered, enterprise architecture can provide us the means to explore what the art of the possible might be today. It could allow us to see how innovation can occur if we enable better communication flow among the stakeholders in any healthcare delivery model, in order to improve the overall health of the population.

After all, that’s what this is all about. We want to be able to enable a collaborative model throughout the stakeholders to improve the overall health of the population. I think that’s pretty consistent across any country that we might work in.

Ongoing work

Gardner: Jim Hietala, maybe you could help us better understand what’s going on within The Open Group and, even more specifically, at the conference in Philadelphia. There is the Population Health Working Group and there is work towards a vision of enabling the boundaryless information flow between the stakeholders. Any other information and detail you could offer would be great. [Registration to the conference remains open. Follow the conference on Twitter at #ogPHL.]

Hietala: On Tuesday of the conference, we have a healthcare focus day. The keynote that morning will be given by Dr. David Nash, Dean of the Jefferson School of Population Health. He'll give what’s sure to be a pretty interesting presentation, followed by a reactors' panel, where we've invited folks from different stakeholder constituencies.

We're going to have clinicians there. We're going to have some IT folks and some actual patients to give their reaction to Dr. Nash’s presentation. We think that will be an interesting and entertaining panel discussion.

For the balance of the day, in terms of the healthcare content, we have a workshop. Larry Schmidt is giving one of the presentations there, and Jason, myself, and some other folks from our working group are involved in helping to facilitate and carry out the workshop.

The goal of it is to look into healthcare challenges, desired outcomes, the extended healthcare enterprise, and the extended healthcare IT enterprise, gather the pain points that are out there around things like interoperability, surface those, and develop a work program coming out of this.

So we expect it to be an interesting day. If you're in the healthcare IT field, or just the healthcare field generally, it would definitely be a day well spent to check it out.

Gardner: Larry, you're going to be talking on Tuesday. Without giving too much away, maybe you can help us understand the emphasis that you're taking, the area that you're going to be exploring.

Schmidt: I've titled the presentation "Remixing Healthcare through Enterprise Architecture." Jason offered some thoughts as to why we want to bring the discipline of enterprise architecture to healthcare. My thoughts are that we want to be able to make sure we understand how the collaborative model would work in healthcare, taking into consideration all the constituents and stakeholders that exist within the complete ecosystem of healthcare.

This is not just collaboration across the doctors, patients, and maybe the payers in a healthcare delivery model. This could be out as far as the drug companies and being able to get drug companies to a point where they can reorder their raw materials to produce new drugs in the case of an epidemic that might be occurring.


Real-time model

It would be a real-time model that allows us the opportunity to understand what's truly happening, both to an individual from a healthcare standpoint, as well as to a country or a region within a country. This remixing of healthcare through enterprise architecture is the introduction to that concept of leveraging enterprise architecture into this collaborative model.

Then, I would like to talk about some of the technologies I've had the opportunity to explore and what is available today. I believe we need to have some type of standardized messaging or collaboration models to further facilitate the ability of that technology to provide value in healthcare delivery, or the betterment of healthcare, to individuals. I'll talk about that a little bit within my presentation and give some good examples.

It’s really interesting. I just traveled from my company’s home base back to my home base and I thought about something like a body scanner that you get into in the airport. I know we're in the process of eliminating some of those scanners now within the security model from the airports, but could that possibly be something that becomes an element within healthcare delivery? Every time your body is scanned, there's a possibility you can gather information about that, and allow that to become a part of your electronic medical record.

Hopefully, that was forward thinking, but that kind of thinking is going to play into the art of the possible, with what we are going to be doing, both in this presentation and talking about that as part of the workshop.

Gardner: Larry, we've been having some other discussions with The Open Group around what they call Open Platform 3.0, which is the confluence of big data, mobile, cloud computing, and social.

One of the big issues today is this avalanche of data, the Internet of things, but also the Internet of people. It seems that the more work that's done to bring Open Platform 3.0 benefits to bear on business decisions, the more impactful it could be for sensors and other data that comes from patients, regardless of where they are, to reach a medical establishment, regardless of where it is.

So do you think we're really on the cusp of a significant shift in how medicine is actually conducted?

Schmidt: I absolutely believe that. There is a lot of information available today that could be used in helping our population to be healthier. And it really isn't only the challenge of the communication model that we've been speaking about so far. It's also understanding the information that's available to us to take that and make that into knowledge to be applied in order to help improve the health of the population.

As we explore this from an as-is model in enterprise architecture to something that we believe we can first enable through a great collaboration model, through standardized messaging and things like that, I believe we're going to get into even deeper detail around how information can truly provide empowered decisions to physicians and individuals around their healthcare.

So it will carry forward into the big data and analytics challenges that we have talked about and currently are talking about with The Open Group.

Healthcare framework

Gardner: Jason Uppal, we've also seen how in other business sectors, industries have faced transformation and have needed to rely on something like enterprise architecture and a framework like TOGAF in order to manage that process and make it something that's standardized, understood, and repeatable.

It seems to me that healthcare can certainly use that, given the pace of change, but that the impact on healthcare could be quite a bit larger in terms of actual dollars. This is such a large part of the economy that even small incremental improvements can have dramatic effects when it comes to dollars and cents.

So is there a benefit to bringing enterprise architecture to healthcare that is larger and greater than in other sectors, because of these economics and issues of scale?

Uppal: That's a great way to think about this thing. In other industries, the benefit of applying enterprise architecture to banking and insurance may be easily measured in terms of dollars and cents, but healthcare is a fundamentally different economy and industry.

It's not about dollars and cents. It's about people’s lives, and loved ones who are sick, who could very easily be treated, if they're caught in time and the right people are around the table at the right time. So this is more about human cost than dollars and cents. Dollars and cents are critical, but human cost is the larger play here.

Secondly, when we think about applying enterprise architecture to healthcare, we're not talking about just the U.S. population. We're talking about global population here. So whatever systems and methods are developed, they have to work for everybody in the world. If the U.S. economy can afford an expensive healthcare delivery, what about the countries that don't have the same kind of resources? Whatever methods and delivery mechanisms you develop have to work for everybody globally.

That's one of the things that a methodology like TOGAF brings out: look at it from every stakeholder's point of view, and unless you have dealt with every stakeholder's concerns, you don't have an architecture; you have a system that's designed for a specific audience.

The cost is not the 18 percent of the gross domestic product in the U.S. that healthcare represents. It's the human cost, which is many multiples of that. That's one of the areas where we could really start to think about how we affect that part of the economy -- not the 18 percent of it, but the larger part of the economy -- to improve the health of the population, not only in North America, but globally.

If that's the case, then the real impact on our greater world economy is improving population health, and population health is probably becoming the biggest problem in our economy.

We'll be testing these methods at a greater international level, as opposed to just at an organization and industry level. This is a much larger challenge. A methodology like TOGAF is proven, and it could be stressed and tested to that level. This is a great opportunity for us to apply our tools and science to a problem that is larger than just dollars. It's about humans.

All "experts"

Gardner: Jim Hietala, in some ways, we're all experts on healthcare. When we're sick, we go for help and interact with a variety of different services to maintain our health and to improve our lifestyle. But in being experts, I guess that also means we are witnesses to some of the downside of an unconnected ecosystem of healthcare providers and payers.

One of the things I've noticed in that vein is that I have to deal with different organizations that don't seem to communicate well. If there's no central process organizer, it's really up to me as the patient to pull the lines together between the different services -- tests, clinical observations, diagnosis, back for results from tests, sharing the information, and so forth.

Have you done any studies, or do you have anecdotal information, about how that boundaryless information flow would still be relevant, even with more of a centralized repository that all the players could draw on, sort of a collaborative team resource? I know that’s worked in other industries. Is this not a perfect opportunity for that boundarylessness to be managed?

Hietala: I would say it is. We all have experiences with going to see a primary physician, maybe getting sent to a specialist, getting some tests done, and the boundaryless information that’s flowing tends to be on paper, delivered by us as patients, in all of those cases.

So the opportunity to improve that situation is pretty obvious to anybody who's been in the healthcare system as a patient. I think it’s a great place to be doing work. There's a lot of money flowing to try and address this problem, at least here in the U.S. with the HITECH Act and some of the government spending around trying to improve healthcare.

You've got healthcare information exchanges that are starting to develop, and you've got lots of pain points for organizations in terms of trying to share information without standards that enable them to do it. It seems like an area with a great opportunity to bring lots of improvement.

Gardner: Let’s look for some examples of where this has been attempted and what the success brings about. I'll throw this out to anyone on the panel. Do you have any examples that you can point to, either named organizations or anecdotal use case scenarios, of a better organization, an architectural approach, leveraging IT efficiently and effectively, allowing data to flow, putting in processes that are repeatable, centralized, organized, and understood. How does that work out?

Uppal: I'll give you an example. One of the things that happens when a patient is admitted to a hospital is that they get what's called high-voltage care. There is staff around them 24x7. There are lots of people around, and every specialty that you can think of is available to them. So the patient, in about two or three days, starts to feel much better.

When that patient gets discharged, they get discharged to home most of the time. They go from very high-voltage care to next to no care. This is one of the areas where one of the organizations we work with is able to discharge the patient and, instead of discharging them to the primary-care doc, who may not receive any records from the hospital for several days, discharge them into a virtual team. So if the patient is at home, the virtual team is available to them through their mobile phone 24x7.

Connect with provider

If, at 3 o’clock in the morning, the patient doesn't feel right, instead of having to call an ambulance to go to hospital once again and get readmitted, they have a chance to connect with their care provider at that time and say, "This is what the issue is. What do you want me to do next? Is this normal for the medication that I am on, or this is something abnormal that is happening?"

When that information is available to that care provider who may not necessarily have been part of the care team when the patient was in the hospital, that quick readily available information is key for keeping that person at home, as opposed to being readmitted to the hospital.

We all know that the cost of being in a hospital is 10 times more than it is being at home. But there's also inconvenience and human suffering associated with being in a hospital, as opposed to being at home.

Those are some of the examples that we have, but they are very limited, because our current health ecosystem is very organization-specific, not patient- and provider-specific. This is an area where there is huge room for opportunity in healthcare delivery: thinking about health information not in the context of the organization where the patient is, but in a cloud, where it’s an association among the patient, the provider, and the health information that’s there.

In the past, we used to have emails that were within our four walls. All of a sudden, with Gmail and Yahoo Mail, we have email available to us anywhere. A similar thing could be happening for the healthcare record. This could be somewhere in a cloud ecosystem, where it’s securely protected and used only by people who have been granted access to it.

Those are some of the examples where extending that model will bring infinite value to not only reducing the cost, but improving the quality of care.

Schmidt: Jason touched upon the home healthcare scenario and being able to provide touch points at home. Another place that we see evolving right now in the industry is the whole concept of the mobile office. Developing countries, as well as rural places within developed countries, are actually getting rural hospitals and rural healthcare offices dropped in by helicopter, allowing the people who live in those communities the opportunity to talk to a doctor via satellite technologies and so on.

The whole concept of an architecture around, and being able to deal with, an extension of what ends up being telemedicine is something that we're seeing today. It would be wonderful if we could point to standards that allow us to facilitate both the communication protocols and the information flows in that type of setting.

Many corporations can jump on the bandwagon to help the rural communities get the healthcare information and capabilities that they need via the whole concept of telemedicine.

That’s another area where enterprise architecture has come into play. Now that we see examples of that working in the industry today, I'm hoping that as part of this working group, we'll get to the point where we're able to facilitate that much better, enabling innovation to occur for multiple companies via some of the architecture work we are planning on producing.

Single view

Gardner: It seems that we've come a long way on the business side in many industries of getting a single view of the customer, as it’s called, the customer relationship management, big data, spreading the analysis around among different data sources and types. This sounds like a perfect fit for a single view of the patient across their life, across their care spectrum, and then of course involving many different types of organizations. But the government also needs to have a role here.

Jim Hietala, at The Open Group Conference in Philadelphia, you're focusing on not only healthcare, but finance and government. Regarding the government and some of the agencies that you all have as members on some of your panels, how well do they perceive this need for enterprise architecture level abilities to be brought to this healthcare issue?

Hietala: We've seen signs from folks in government that are encouraging to us in bringing this work to the forefront. There is a recognition that there needs to be better data flowing throughout the extended healthcare IT ecosystem, and I think generally they are supportive of initiatives like this to make that happen.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.


You may also be interested in:

HP-fueled application delivery transformation pays ongoing dividends for McKesson

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

The next edition of the HP Discover Performance Podcast Series examines how McKesson Corp. accomplished a multi-year, pan-IT management transformation. We’ll learn how McKesson's performance journey, from 2005 to the present, has enabled it to better leverage an agile, hybrid cloud model.

The discussion comes from the recent HP Discover 2013 Conference in Las Vegas.

Andy Smith, Vice President of Applications Hosting Services at McKesson, joins host Dana Gardner, Principal Analyst at Interarbor Solutions, to explore how McKesson gained a standardized services orientation to gain agility in deploying its many active applications. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: It's hard to believe it's been a full year since we last spoke. What's changed in the last year in how McKesson had been progressing and maturing its applications delivery capabilities?

Smith: Probably one of the things that has changed in the last year is that our performance metrics have continued to improve. We're continuing to see a drop in the number of outages from the standardization and automation. The reliability of the systems has increased, the utilization of the systems has increased, and our system-admin ratios have increased. So all the key performance indicators (KPIs) are going in the right direction.

That allowed us to make the next shift, which was to focus on how we can do better at providing capabilities to our customers -- how we do it faster and better through provisioning, because now it's taking less time to do the support side of it.

Gardner: It's really interesting to me that a big part of all this is the provisioning aspect going from fewer manual processes and multiple points of touch to more self-provisioning. How has that worked out?

Smith: It's been very well received. We've been in production now roughly two-and-a-half months. Rather than taking an average of six months to deliver business requests to add compute capacity, we're down to less than four days. I think we can get it down to less than 10 minutes by the time we hit the end of summer.

Well received

It's been a challenge to get people to think differently about their processes internal to IT that would allow us to do the automation, but it's been very well received.

Gardner: What were some of the hurdles in terms of trying to get standardized and creating that operating procedure that people could rally behind, self provision, and automate?

Smith: The first piece is just a change in culture. We believe we were customer-centric providers of services. What that really translated to was that we were customer-centric customized providers of services. So every request was a custom request. That resulted in slow delivery, but it also resulted in non-standardized solutions.

One of the most difficult things in getting that mind shift was getting the architects and engineers to think differently and to understand that standardization would actually be better for the customer. We could get it to them faster, more consistently, and more reliably, and on the back end, provide the support much more cheaply.

But we were successful. I think everybody still likes to customize, but we haven't had to do that.

Gardner: Just for the edification of our listeners, tell us a bit about McKesson. You’re not just a small mom-and-pop shop.

Smith: No, I think we’re Fortune 14 now, with more than $122 billion in revenue and more than 43,500 employees. We focus specifically on healthcare -- how to ensure that whatever is needed by healthcare organizations is there when they need it.

That might be software systems that we write for providers. That could be claims processing that we do for providers. But, the biggest chunk of our business is supply chain, ensuring that the supplies, whether they be medical, surgical, or pharmaceutical, are in the hospital's and providers' hands as soon as they need them.

If a line of business needs to make an improvement in order to capture a need of a customer, with the old way of doing business, it would take me six months to get the computer on the floor. Then they could start their development. Now, you're down to less than a week, even days. So they can start their development six months earlier, which really helps us be in a position to capture that new market faster. In turn, this also helps McKesson customers deliver critical healthcare solutions more rapidly to meet today's emerging healthcare needs and enable better health.

Gardner: And there are also some other factors in the market. There's even more talk now about cloud than last year, focusing on hybrid capabilities, where you can pick and choose how to deploy your apps. Then, there's the mobile factor.

Smith: We are recognizing that we have to build that next generation of applications. Part of that is the mobility piece, because we have to separate the physical application, the software-as-a-service (SaaS) application, from the display device that the customer is going to use. It might be an Android device, an iPhone, a tablet, or something else.

So we're recognizing the fact that for the next generation of products, we really have to separate that mobile portion from it, because that display device could be almost anything.

Gardner: We’re here at HP Discover. How have the HP products and services come together to help you not only tackle these technical issues, but to foster the right culture?

Smith: When we talked last year, we had a lot of the support tools in place from HP -- operations orchestration, server automation, monitoring tools -- but we were using them to do support better. What we're able to do from the provisioning side is leverage that capability and leverage those existing tools.

All we had to do was purchase one additional tool, Cloud Service Automation (CSA), which sits on top of our existing tools. So it was a very minor investment, and we were able to leverage all the support tools to do the provisioning side of the business. It was very practical for us and relatively quick.

Gardner: Of course, a big emphasis here at HP Discover is HP Converged Cloud and talking about these different hybrid models. How have the automation, provisioning, services orientation, and standardization put you in a place to be able to avail yourselves of some of these hybrid models and the efficiencies and speed that come with them? How do they tie together -- what you’ve done with applications now and what you can perhaps do with cloud?

Smith: We’ll be the first to admit that providing the services internally is not necessarily always the best. We may not be the cheapest, and we may not be the most capable. Getting better at how we do provisioning and how we do our own internal cloud frees up resources, and those resources can now start thinking about how we work with an external provider.

There's a lot of concern for us right now, because there is that risk factor. Do you put your intellectual property (IP) out there? Do you put your patients’ medical records out there? How do you protect them? And so there are a lot of business rules and contracting issues that we have to get through.

From a technology standpoint, we know we can do it. We’ve done it in the labs. We’ve provisioned out to third-party providers. It all works from a technology standpoint with the tools we have. Now we have to get through the business issues.

On the same journey

It's fortunate, in some ways, that HP is on the same journey. We partner on a lot of these things. When we brought CSA in, it was one of the earlier releases, and now we’ve partnered with them through the Customer Advisory Boards (CABs) and other methods. They continue to enhance this to meet our needs, but also to meet their needs.

Gardner: Now that you've been on this journey from 2005, where do you see yourselves in a couple of years?

Smith: Because we’re in healthcare, very similar to banking, we've hit a point where we don't believe we can afford to be down anymore.

Instead of talking about three nines, four nines, or five nines, we're starting to talk about how we ensure the machines are never down, even for planned maintenance. That's taking a different kind of infrastructure, but it’s also taking a different kind of application, one that can tolerate machines being taken offline but continue to run.

That's where our eye is, trying to figure out how to change the environment to be constantly on.

If the application isn't smart enough to tolerate a machine going down, then you have to redesign the application architecture. Our applications are going to have to scale out horizontally across the equipment as the peaks and valleys of customer demand change through the day or through the week.

The current architecture doesn't scale horizontally. It scales up and down. So you end up with a really big box that’s not needed at some times of the day. It would be better if we could spread the load out horizontally.

Gardner: So just to close out, we have to think about applications now in the context of where they are deployed, in a cloud spectrum or continuum of hybrid types of models. We also have to think about them being delivered out to a variety of different endpoints.

Different end points

What do you think you’ll need to be doing differently from an application-development, deployment, and standardization perspective in order to accomplish both that ability to deploy anywhere and be high performance, as well as also be out on a variety of different end points?

Smith: The reality is that part of our journey over the last several years has been to consolidate the environment, consolidate the data centers, and consolidate and virtualize the servers. That's been great from a customer cost standpoint and standardization standpoint.

But now, when you're starting to deliver that SaaS mobile kind of application, speed of response to the customer, the keystroke, the screen refresh, are really important. You can't do that from a central data center. You've got to be able to push some of the applications and data out to regional locations. We’re not going to build those regional locations. It's just not practical.

That's where we see bringing in these hybrid clouds. We’ll host the primary app, let's say, back in our corporate data center, but then the mobile piece, the customer-experience piece, is going to have to be hosted in data centers that are scattered throughout the country and are physically much closer to where the customer is.

Gardner: Of course, that’s going to require a different level of performance monitoring and management.

Smith: Exactly, because then you really have to monitor the application, not just the server at the back end. You’ve got to be watching that performance to know whether you have a local ISP that’s come down, or a local cloud that’s come down. You’re going to really have to be watching the endpoints so you can see that customer experience. So it is a different kind of application monitoring.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in: