Thursday, September 26, 2013

Application development efficiencies drive Agile payoffs for healthcare tech provider TriZetto

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

The next edition of the HP Discover Performance Podcast Series highlights how healthcare technology provider TriZetto has been improving its development processes and modernizing its ability to speed the applications lifecycle process.

To learn more about how quality and Agile methods tools better support a lifecycle approach to software, we sat down with Rubina Ansari, Associate Vice President of Automation and Software Development Lifecycle Tools at TriZetto.

The discussion, which took place at the recent HP Discover 2013 Conference in Las Vegas, is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Where are you in terms of moving to Agile processes?

Ansari: TriZetto is currently going through an evolution, moving from a structured waterfall methodology to scaled Agile. As you mentioned, that's one of the innovative ways we're looking at getting our releases out faster, with better quality, and responding to our customers more quickly. We realize that Agile, as a methodology, is the way to go for all three of those things.

We're currently in the midst of evolving how we work. We’re going through a major transformation within our development centers throughout the country.

TriZetto is a healthcare software provider. We have the software for all areas of healthcare. Our mission is to integrate different healthcare systems to make sure our customers have seamless information. Over 50 percent of the American insured population goes through our software for their claims processing. So, we have a big market and we want to stay there.
Leaner and faster

Our software is very important to us, just as it is to our customers. We're always looking for ways of making sure we’re leaner, faster, and keeping up with our quality in order to keep up with all the healthcare changes that are happening.

Gardner: You've been working with HP Software and Application Lifecycle Management (ALM) products for some time. Tell us a little bit about what you have in place, and then let's learn a bit more about the Agile Manager capabilities that you're pioneering.

Ansari: We've been using HP tools for our testing area, such as the QTP products, Performance Center, and Quality Center. We recently went ahead with ALM 11.5, which has a lot of cross-project capabilities. As for agile, we're now using HP Agile Manager.

This has helped us move forward fairly quickly into scaled agile using HP Agile Manager, while integrating with our current HP tools. We wanted to make sure that our tools were integrated and that we didn’t lose that traceability and the effectiveness of having a single vendor to get all our data.

HP Agile Manager is very important to us. It's a software-as-a-service (SaaS) model, and it was very easy for us to implement within our company. There was nothing to install, and the response we get from HP has been very fast. This is the first experience we've had with a SaaS deliverable from HP.

They're following agile, so we get releases every three months. Actually, every few weeks we get enhancements and fixes for defects we may find in their product. It's worked out very well. It's very lightweight, it's web-based SaaS, and it integrates with their current tool suite, which was vital to us.
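
Ansari doesn't describe the integration mechanics, but to make the traceability point concrete, here is a minimal sketch of how a team might pull open defects out of ALM over its REST interface so they can be cross-referenced with backlog items in Agile Manager. The server URL, domain, project, and credentials are hypothetical placeholders, and the endpoint paths follow ALM's REST conventions as an assumption to verify against your own ALM version; this is not TriZetto's actual integration code.

```python
# Sketch: list open defects from an HP ALM project so they can be mapped to
# Agile Manager backlog items. All names and credentials below are invented.
import requests

ALM_BASE = "https://alm.example.com/qcbin"          # hypothetical ALM server
DOMAIN, PROJECT = "HEALTHCARE", "CLAIMS_ENGINE"     # hypothetical domain/project

session = requests.Session()

# ALM sign-in sets a session cookie that authorizes the REST calls that follow.
session.post(f"{ALM_BASE}/authentication-point/authenticate",
             auth=("alm_user", "alm_password"))

# Query open defects; ALM filters use a {field[value]} query syntax.
resp = session.get(
    f"{ALM_BASE}/rest/domains/{DOMAIN}/projects/{PROJECT}/defects",
    params={"query": "{status[Open]}", "page-size": 50},
)
resp.raise_for_status()
print(resp.text)   # entity collection; parse and map defect IDs to backlog items

# Sign out to release the ALM license session.
session.get(f"{ALM_BASE}/authentication-point/logout")
```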

We have between 500 and 1,000 individuals on development teams throughout the United States. On Agile Manager, the last time we checked, we had approximately 400. We're hoping to get up to 1,000 by the end of this year, so that everyone is using Agile Manager for their agile/scrum teams, their backlogs, and development.
Gardner: Do you have any sense of how much faster you're able to develop? What are the paybacks in terms of quality, traceability, and tracking defects? What's the payback from doing this in the way you have?

Working together

Ansari: We've seen some payback, but I think the most is yet to come as we roll this out further. One of the things that Agile Manager promotes is collaboration and working together in a scrum team. Because the software is built around agile processes, Agile Manager makes it very easy for us to roll out an agile methodology.

This has helped us collaborate better between testers and developers, and we're finding those defects earlier, before they even happen. We’ll have more hard metrics around this as we roll this out further. One of the major reasons we went with HP Agile Manager is that it has very good integration with the development tools we use.

It integrates with several development tools, allowing our testers to see what changes occurred and which piece of code changed for each defect or enhancement the tester is testing. That tight integration with other development tools was a pivotal factor in our decision to go forward with HP Agile Manager.
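
The transcript doesn't spell out how that code-to-defect visibility works under the hood, but one common pattern is tagging commits with defect IDs and letting tooling collect the changed files per defect. The sketch below illustrates only that general idea; the "DE-" ID convention, repository layout, and script are hypothetical and are not HP's or TriZetto's integration.

```python
# Sketch: map defect IDs mentioned in git commit messages to the files those
# commits touched, so a tester can see which code changed for a given defect.
import re
import subprocess
from collections import defaultdict

DEFECT_ID = re.compile(r"\bDE-\d+\b")   # hypothetical defect-ID convention

def commits_by_defect(repo_path="."):
    """Return {defect_id: set(changed file paths)} parsed from git history."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--name-only", "--pretty=format:@@%H %s"],
        capture_output=True, text=True, check=True,
    ).stdout

    mapping = defaultdict(set)
    current_ids = []
    for line in log.splitlines():
        if line.startswith("@@"):        # a commit header: hash plus subject
            current_ids = DEFECT_ID.findall(line)
        elif line.strip():               # a changed-file path for that commit
            for defect in current_ids:
                mapping[defect].add(line.strip())
    return mapping

if __name__ == "__main__":
    for defect, files in commits_by_defect().items():
        print(defect, "->", ", ".join(sorted(files)))
```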

Gardner: So Rubina, not only are you progressing from waterfall to agile and adopting more up-to-date tools, but you've made the leap to SaaS-based delivery for this. If that's working out as well as you say, do you think it will lead to doing more with other SaaS tools, tests, and capabilities, and maybe even looking at cloud platform-as-a-service opportunities?

Ansari: Absolutely. This was our first experience and it is going very well. Of course, there were some learning curves and growing pains. Getting these changes so quickly, without having to do the work ourselves, was a mind-shift for us. We're obviously reaping the benefits, but we did need more scheduled conversations, release notes, and documentation about changes from HP.

We're not new to SaaS. We're also looking at offering some of our products in a SaaS model, so we realize what's involved. It was great to be on the receiving end of a SaaS product, knowing that TriZetto itself is playing in that space as well.

There's always so much more to improve. What we’re looking for is how to quickly respond to our customers. That means also integrating HP Service Manager and any other tools that may be part of this software testing lifecycle or part of our ability to release or offer something to our clients.
We'll continue doing this until there is no more space for efficiency. But, there are always places where we can be even more effective.

The technologies we're advancing toward will also allow us to move easily into the mobile space once we plan to do so.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Monday, September 23, 2013

Navicure gains IT capacity optimization and performance monitoring using VMware vCenter Operations Manager

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.

The next VMworld innovator interview focuses on how a fast-growing healthcare claims company is gaining better control and optimization across its IT infrastructure. Learn how IT leaders at Navicure have been deploying a comprehensive monitoring and operational management approach.

To understand how they're taming IT complexity as they set the stage to adopt the latest in cloud-computing and virtualization infrastructure developments, join Donald Wilkins, Director of Information Technology at Navicure Inc. in Duluth, Georgia.

The discussion, which took place at the recent 2013 VMworld Conference in San Francisco, is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Why is your organization so focused on taming complexity?

Wilkins: At Navicure, we've been focused on scaling a fast-growing business, and if you incorporate very complex infrastructure, it becomes more difficult to scale. So we're focused on technologies that are simple to implement, yet have a lot of headroom for growth across the storage, the infrastructure, and the software we use. We do that in order to scale at the rate we need to satisfy our business objectives.

Gardner: Tell us a little bit about Navicure, what you do, how is that you're growing, and why that's putting a burden on your IT systems.

Wilkins: Navicure has been around for about 12 years. We started the company in about 2001 and delivered the product to our customers in the late 2001-2002 time-frame. We've been growing very fast. We're adding 20 to 30 employees every year, and we're up to about 230 employees today.

We have approximately 50,000 physicians on our system. We're growing at a rate of 8,000 to 10,000 physicians a year, and it's healthy growth. We don't want to grow so fast that we water down our products and services, but at the same time, we want to grow at a pace that enables us to deliver better products for our customers.

Revenue cycle management

Claim clearinghouses have been around for a couple of decades now. We've evolved from that claim-clearinghouse model to what we refer to as revenue cycle management. We pioneered that term early as we started the company.

We take the transactions from physicians and send them to the insurance companies. That's what the clearinghouse model is. But on top of that product, we added a lot of value-added services and a lot of analytics around those transactions to help providers generate more revenue from their transactions, so that they get paid faster and get paid the first time through the system.

It was very costly for transactions to be delayed weeks because of poorly submitted transactions to the insurance company or denials because they coded something wrong.

We try to catch all that, so that they get paid the first time through. That's the return on investment (ROI) that our customers are looking for when they look at our products: lower accounts-receivable (AR) days and increased bottom-line revenue.

Customer service is one of the cornerstones of our business. We feel that our customers are number one, and retaining those customers is one of our primary goals.

Gardner: Tell us a little bit about your IT environment.

Wilkins: The first thing we did at Navicure, when we started the company, was decide that we didn't want to be in the data-center business. We wanted to use a colocation provider (colo) that does that work at a much higher level than we ever could. We wanted to focus on our product and let the colo focus on what they do.

They serve us from an infrastructure standpoint, and then we can focus on building a good product. With that, we adopted, very early on, the grid approach, or the rack approach. This means we wanted to build a foundational structure that we could just build on as we grew the business and the transaction volume.

That terminology has changed over the years, and it would be referred to as software-defined infrastructure today, but back then the idea was to build infrastructure with a grid approach, so we could plug in more modules and components to scale out as we scale up.

With that, we continued to evolve what we do, but that inherent structure is still there. We need to be able to scale our business as our transactional volume doubles approximately every two years.

Gardner: And how did you begin your path to virtualization, and how did that progress into more of a software-defined environment?

Ramping up fast

Wilkins: In the first few years of the company's operation, we really had enough headroom in our infrastructure that it wasn't a big issue, but about four years in, we started realizing we were going to hit a point where we would have to start ramping up really fast.

Consolidation was not something that we had to worry about, because we didn’t have a lot to consolidate. It was a very early product, and we had to build the customer base. We had to build our reputation in the industry, and we did that. But then we started adding physicians by the thousands to our system every year.

With that, we started to have to add infrastructure. Virtualization came along at such a time that we could add capacity virtually, faster and more efficiently than we ever could have by adding physical infrastructure.

So it became a product that we put into test, dev, and production all at the same time, and it allowed us to meet the demands of the business.

Gardner: Of course, as many organizations have used virtualization to their benefit, they've also recognized that there is some complexity involved. And getting better management means further optimization, which further reduces costs. That also, of course, maintains their performance requirements. How did you then focus in on managing and optimizing this over time?

Wilkins: Well, one of the things we try to do when we look at products and services is keep it simple. I have a very limited staff, and the staff needs to be able to drive to the point of whatever issue they're researching or inspecting.

As we've added technologies and services, we've tried to add those that are very simple to scale and very simple to operate. We look at all these different tools to make that happen. This has led us to vendors like VMware, which have been driving toward the same goal of simplifying their product offerings.

For years, we've been doing monitoring with other tools that were network-based monitoring tools. Those drive only so much value. They give us things like uptime alerting and responsiveness, but only after issues happen. We want to evolve that to be more proactive in our approach to monitoring.
It’s not so much about how we can fix a problem when there is one. It’s more of, let’s keep the problem from happening to start with. That's where we've looked at some products for that. Recently we've actually implemented vCenter Operations Manager.

That product gives us a different twist than other SNMP monitoring tools do. It gives us a history of what's going on, but also a forward-looking analysis of how that history will change, based on our historical trends.
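
To illustrate the kind of forward-looking analysis Wilkins describes, here is a back-of-the-envelope sketch of projecting resource exhaustion from historical utilization, which is conceptually what vCenter Operations Manager automates at much larger scale and with far more signals. The sample figures, the 80 TB capacity, and the monthly cadence are invented for illustration.

```python
# Sketch: fit a line to historical storage utilization and estimate when the
# resource will be exhausted. The data below is made up for illustration.
from datetime import date, timedelta

# (day offset, storage used in TB) -- hypothetical monthly samples
history = [(0, 40.0), (30, 42.5), (60, 44.8), (90, 47.6), (120, 50.1)]
capacity_tb = 80.0

# Ordinary least-squares slope and intercept, no external libraries needed.
n = len(history)
mean_x = sum(x for x, _ in history) / n
mean_y = sum(y for _, y in history) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in history)
         / sum((x - mean_x) ** 2 for x, _ in history))
intercept = mean_y - slope * mean_x

days_at_capacity = (capacity_tb - intercept) / slope   # x where the fit hits capacity
days_from_now = days_at_capacity - history[-1][0]      # last sample ~ today
print(f"Projected to hit {capacity_tb} TB in roughly {days_from_now:.0f} days "
      f"(around {date.today() + timedelta(days=round(days_from_now))}).")
```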

New line-up

Gardner: Of course, here at VMworld, we're hearing about vSphere improvements and upgrades, but also the arrival of VMware vCloud Suite 5.5 and VMware vSphere with Operations Management 5.5. Is there anything in the new line-up that is of particular interest to you, and have you had a chance to look it over?

Wilkins: I haven't had a chance to look over the most recent offering, but we're running the current version. Again, for us, it's the efficiency mechanism inside the product that drives the most value, making sure that we can budget a year in advance for the infrastructure expansion we need to meet demand.

vCenter Operations Manager is key to understanding your infrastructure. If you don’t have it today, you're going to be very reactive to some of your pains and the troubles you're dealing with.

The product allows you to do a lot of research into various problems and services, drilling down from the cluster level into the virtual-machine level to find where your problems and pain points are, so you can isolate issues more quickly. At the same time, it allows you to project where you're growing and where you need to put your money into resources, whether that's more storage, compute, or network.

That's where we're seeing value out of the product, because it allows me, during budget cycles, to say that based on our infrastructure and current growth, we will be out of resources by a certain time and need to add this much. That's barring any additional new products and services we may come up with that would add to the demand. We're growing at this pace, and here are the numbers to prove it.

When you have that information in front of you, you can build a business case around it that further educates the CFO and the finance people about what your challenges are and what you have to deal with on a day-to-day basis to operate the business.

Gardner: What sort of paybacks are there when you do this right?

Wilkins: Just being able to drive more density in our colo by being virtualized is a big value for us. Our footprint is relatively small. As for an actual dollar amount, it's hard to pin one down. We're growing so fast that we're trying to keep up with the demand, and we've been meeting and exceeding that.

Really, the ROI is that our customers aren't experiencing major troubles with our infrastructure not expanding fast enough. That's our goal, to drive high availability for infrastructure and low downtime, and we can do that with VMware and their products and services.

We're a current customer of Site Recovery Manager. That's a staple in our virtual infrastructure and has been since 2008. We've been using that product for many years. It drives all of the planning and the testing of our virtual disaster recovery (DR) plan. I've been a very big proponent of that product and services for years, and we couldn’t do without it.

There are other products we will be looking at. Desktop virtualization is something that will be incorporated into the infrastructure in the next year or two.

As a small business, the value of that becomes a little harder to prove from a dollar standpoint. Some of those features like remote working come into play as office space continues to be expensive. It's something we will be looking at to expand our operations, especially as we have more remote employees working. Desktop virtualization is going to be a critical component for that.

Gardner: How about some 20/20 hindsight? For other folks who are ramping up on virtualization, or getting to the point where complexity is becoming an issue for them, do you have any thoughts on getting started, or lessons learned that you could share?

Trusted partner

Wilkins: The best thing with virtualization is to get a trusted partner to help you get over the hurdle of the technical issues that may come to light.

I had a very trusted partner when I started this in 2005-2006. They actually just sat with me and worked with me, with no compensation whatsoever, to help work through virtualization. They made it such an easy value that it just became, "I've got to do this, because there's no way I can sustain this level of operational expense and of monitoring and managing this infrastructure, if it's all physical."

So, seeing that value proposition from a partner is key, but it has to be a trusted partner. It has to be a partner that has your best interest in mind, and not so much a new product to sell. It’s going to be somebody that brings a lot to the table, but, at the same time, helps you help yourself and lets you learn these products, so that you can actually implement it and research it on your own to see what value you can bring into the company.

It's easy for somebody to tell you how you can make your life better, but you have to actually see it. Then you become passionate about the technology, you realize you have to do this, and you'll do whatever it takes to get it in, because it will make your life easier.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.


IT technology trends -- a risky business?


This guest post comes courtesy of Patty Donovan, Vice President, Membership & Events, at The Open Group and a member of its executive management team.

By Patty Donovan

On Wednesday, September 25, The Open Group will host a tweet jam looking at a multitude of emerging/converging technology trends and the risks they present to organizations that have already adopted them or are looking to adopt them. Most of the technology concepts we’re talking about – Cloud, Big Data, BYOD/BYOS, the Internet of Things, etc. – are not new, but organizations are at differing stages of implementation and do not yet fully understand the longer-term impact of adoption.

This tweet jam will allow us to explore some of these technologies in more detail and look at how organizations may better prepare against potential risks – whether this is in regards to security, access management, policies, privacy or ROI. As discussed in our previous Open Platform 3.0 tweet jam, new technology trends present many opportunities but can also present business challenges if not managed effectively. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Please join us on Wednesday, September 25 at 9:00 a.m. PT/12:00 p.m. ET/5:00 p.m. BST for a tweet jam that will discuss and debate the issues around technology risks. A number of key areas will be addressed during the discussion including: Big Data, Cloud, Consumerization of IT, the Internet of Things and mobile and social computing with a focus on understanding the key risk priority areas organizations face and ways to mitigate them.

We welcome Open Group members and interested participants from all backgrounds to join the session and interact with our panel of thought leaders, led by David Lounsbury, CTO, and Jim Hietala, VP of Security, of The Open Group. To access the discussion, please follow the #ogChat hashtag during the allotted discussion time.
Planned questions for the discussion include:
  • Do you feel prepared for the emergence/convergence of IT trends? – Cloud, Big Data, BYOD/BYOS, Internet of Things
  • Where do you see risks in these technologies? – Cloud, Big Data, BYOD/BYOS, Internet of Things
  • How does your organization monitor for, measure and manage risks from these technologies?
  • Which policies are best at dealing with security risks from technologies? Which are less effective?
  • Many new technologies move data out of the enterprise to user devices or cloud services. Can we manage these new risks? How?
  • What role do standards, best practices and regulations play in keeping up with risks from these & future technologies?
  • Aside from risks caused by individual trends, what is the impact of multiple technology trends converging (Platform 3.0)?
And for those of you who are unfamiliar with tweet jams, here is some background information:

What Is a Tweet Jam?

A tweet jam is a one hour “discussion” hosted on Twitter. The purpose of this tweet jam is to share knowledge and answer questions on emerging/converging technology trends and the risks they present. Each tweet jam is led by a moderator and a dedicated group of experts to keep the discussion flowing. The public (or anyone using Twitter interested in the topic) is encouraged to join the discussion.

Participation Guidance

Whether you’re a newbie or veteran Twitter user, here are a few tips to keep in mind:
  • Have your first #ogChat tweet be a self-introduction: name, affiliation, occupation.
  • Start all other tweets with the question number you’re responding to and the #ogChat hashtag.
    • Sample: “Big Data presents a large business opportunity, but it is not yet being managed effectively internally – who owns the big data function? #ogchat”
  • Please refrain from product or service promotions. The goal of a tweet jam is to encourage an exchange of knowledge and stimulate discussion.
  • While this is a professional get-together, we don’t have to be stiff! Informality will not be an issue!
  • A tweet jam is akin to a public forum, panel discussion or Town Hall meeting – let’s be focused and thoughtful.


If you have any questions prior to the event or would like to join as a participant, please direct them to Rob Checkal (rob.checkal at hotwirepr.com). We anticipate a lively chat and hope you will be able to join!

This guest post comes courtesy of Patty Donovan, Vice President, Membership & Events, at The Open Group and a member of its executive management team.


Friday, September 20, 2013

Are you ready for the convergence of new disruptive technologies?

The following guest post comes courtesy of Dr. Chris Harding, Director of Interoperability and SOA at The Open Group.

By Chris Harding

The convergence of technical phenomena such as cloud, mobile and social computing, big data analysis, and the Internet of things that is being addressed by The Open Group’s Open Platform 3.0 Forum will transform the way that you use information technology. Are you ready? Take our survey at https://www.surveymonkey.com/s/convergent_tech

What the technology can do

Mobile and social computing are leading the way. Recently, the launch of new iPhone models and the announcement of the Twitter stock flotation were headline news, reflecting the importance that these technologies now have for business. For example, banks use mobile text messaging to alert customers to security issues. Retailers use social media to understand their markets and communicate with potential customers.

Other technologies are close behind. In Formula One motor racing, sensors monitor vehicle operation and feed real-time information to the support teams, leading to improved design, greater safety, and lower costs. This approach could soon become routine for cars on the public roads too. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Many exciting new applications are being discussed. Stores could use sensors to capture customer behavior while browsing the goods on display, and give them targeted information and advice via their mobile devices. Medical professionals could monitor hospital patients and receive alerts of significant changes. Researchers could use shared cloud services and big data analysis to detect patterns in this information, and develop treatments, including for complex or uncommon conditions that are hard to understand using traditional methods. The potential is massive, and we are only just beginning to see it.

What the analysts say

Market analysts agree on the importance of the new technologies.

Gartner uses the term “Nexus of Forces” to describe the convergence and mutual reinforcement of social, mobility, cloud and information patterns that drive new business scenarios, and says that, although these forces are innovative and disruptive on their own, together they are revolutionizing business and society, disrupting old business models and creating new leaders.

IDC predicts that a combination of social, cloud, mobile, and big data technologies will drive around 90% of all the growth in the IT market through 2020, and uses the term “third platform” to describe this combination.

The Open Group will identify the standards that will make Gartner’s Nexus of Forces and IDC’s Third Platform commercial realities. This will be the definition of Open Platform 3.0.

Disrupting enterprise use of IT

The new technologies are bringing new opportunities, but their use raises problems. In particular, end users find that working through IT departments in the traditional way is not satisfactory. The delays are too great for rapid, innovative development. They want to use the new technologies directly – “hands on."

Increasingly, business departments are buying technology directly, by-passing their IT departments. Traditionally, the bulk of an enterprise’s IT budget was spent by the IT department and went on maintenance. A significant proportion is now spent by the business departments, on new technology.

Business and IT are not different worlds any more. Business analysts are increasingly using technical tools, and even doing application development, using exposed APIs. For example, marketing folk do search engine optimization, use business information tools, and analyze traffic on Twitter. Such operations require less IT skill than formerly because the new systems are easy to use. Also, users are becoming more IT-savvy. This is a revolution in business use of IT, comparable to the use of spreadsheets in the 1980s.
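
As a purely hypothetical illustration of that point, here is the sort of short script a business analyst might write against an exposed REST API to rank the busiest days for brand mentions. The endpoint, parameters, and response shape are invented for the example and do not refer to any particular vendor's API.

```python
# Sketch: a business analyst pulling mention data from a hypothetical
# social-analytics REST API and ranking the busiest days.
import requests
from collections import Counter

resp = requests.get(
    "https://api.social-analytics.example.com/v1/mentions",   # hypothetical URL
    params={"brand": "AcmeCo", "days": 30},                   # hypothetical params
    timeout=10,
)
resp.raise_for_status()

mentions_per_day = Counter()
for mention in resp.json()["mentions"]:    # assumed response shape
    mentions_per_day[mention["date"]] += 1

for day, count in mentions_per_day.most_common(5):
    print(day, count)
```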

Also, business departments are hiring traditional application developers, who would once have only been found in IT departments.

Are you ready?

These disruptive new technologies are changing, not just the IT architecture, but also the business architecture of the enterprises that use them. This is a sea change that affects us all.

The introduction of the PC had a dramatic impact on the way enterprises used IT, taking much of the technology out of the computer room and into the office. The new revolution is taking it out of the office and into the pocket. Cell phones and tablets give you windows into the world, not just your personal collection of applications and information. Through those windows you can see your friends, your best route home, what your customers like, how well your production processes are working, or whatever else you need to conduct your life and business.

This will change the way you work. You must learn how to tailor and combine the information and services available to you, to meet your personal objectives. If your role is to provide or help to provide IT services, you must learn how to support users working in this new way.

To negotiate this change successfully, and take advantage of it, each of us must understand what is happening, and how ready we are to deal with it.

The Open Group is conducting a survey of people’s reactions to the convergence of Cloud and other new technologies. Take the survey, to input your state of readiness, and get early sight of the results, to see how you compare with everyone else.

To take the survey, visit https://www.surveymonkey.com/s/convergent_tech

The following guest post comes courtesy of Dr. Chris Harding, Director of Interoperability and SOA at The Open Group.


Thursday, September 19, 2013

How MZI HealthCare identifies big data patient productivity gems using HP Vertica

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

The next edition of the HP Discover Performance Podcast Series details how a healthcare solutions provider leverages big-data capabilities.

Learn how MZI Healthcare has deployed the HP Vertica Analytics Platform to help their customers better understand population healthcare trends and identify how well healthcare processes are working.

To discover more about how high-performance and cost-effective big-data processing forms a foundational element to improving overall healthcare quality and efficiency, join Greg Gootee, Product Manager at MZI Healthcare, based in Orlando. The discussion, which took place at the recent HP Vertica Big Data Conference in Boston, is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]
 
Here are some excerpts:
Gardner: How important is big data turning out to be for how healthcare is being administered?

Gootee: Change in healthcare is really dramatic, maybe more dramatic than in any other industry. Other industries have been able to spread that kind of change over time; in healthcare, it's being rapidly accelerated.

In the past, data had been stored in multiple systems and multiple areas on given patients. It's been difficult for providers and organizations to make informed decisions about that patient and their healthcare. So we see a lot of change in being able to bring that data together and understand it better.

Gardner: Tell us about MZI, what you do, who your customers are, and where you're going to be taking this big data ability in the future.

Gootee: MZI Healthcare has predominantly been working on the payer side. We have a product that's been on the market for over 25 years, helping with benefits administration for payers, different independent physician associations (IPAs), and third-party administrators (TPAs).

Our customers have always had a very tough time bringing in data from different sources. A little over two years ago, MZI decided to look at how we could leverage that data to help our customers better understand their risk and their patients, and ultimately change the outcomes for those patients.

Predictive analysis

Gardner: I think that's how the newer regulatory environment is lining up in terms of compensation. This is about outcomes, rather than procedures. Tell us about your requirements for big data in order to start doing more of that predictive analysis.

Gootee: Think about how data has been stored in the past for patients across their continuum of care: as they went from facility to facility and physician to physician, the data was really spread apart. It's been difficult even to understand how treatments are affecting a patient.

I've talked a lot about my aunt in previous interviews. Last year, she went into a coma, not because the doctors weren't doing the right thing, but because they were unable to understand what the other doctors were doing.

She went to many specialists and took medication from each one of those to help with her given problem, but what happened was there was an interaction with medication. They didn't even know if she’d come out of the coma.

These things happen every day. Doctors make informed decisions from their experience and the data that they have. So it's critical that they can actually see all the information that's available to them.

When we look at healthcare and how it's changing, for example the Affordable Care Act, one of the main focuses is obviously cost. We all know that healthcare is growing at a rate that's just unsustainable, and while that's the main focus, it's different this time.

We've done that before. During the Clinton Administration, we had a kind of HMO model, and it really made a dramatic difference on cost. It was working, but it didn't give people a choice. There was no basis in outcomes, and the quality of care wasn't there.

This time around, that's probably the major difference. Not only are we trying to reduce cost, but we are trying to increase the care that's given to those patients. That's really vital to making the healthcare system a better system throughout the United States.

Gardner: Given the size of the data, its disparate nature, and the fact that more and more human data will be brought to bear, what were your technical requirements, and what journey did you take in finding the right infrastructure?

Gootee: We had a couple of requirements that were critical. When we work with small- and medium-size organizations (SMBs), they really don't have the funds to put in a large system themselves. So our goal was that we wanted to do something similar to what Apple has done with the iPhone. We wanted to take multiple things, put them into one area, and reduce that price point for our customers.

One of the critical things that we wanted to look at was overall price point. That included how we manage those systems and, when we looked at Vertica, one of the things that we found very appealing was that the management of that system is minimal.

High-end analytics

The other critical thing was speed, being able to deliver high-end analytics at the point of care, instead of two or three months later, and Vertica really produced. In fact, we did a proof of concept with them, and the speed at which some of the queries ran and the data came back to us was almost unbelievable.

You hear things like that throughout the conference: no matter what volume you may have, it performs very well. Those were some of our requirements, and we were able to put it all in the cloud. We run in the Amazon cloud, and we're able to deliver that content to the people who need it, at the right time, at a really low price point.
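
Gootee doesn't share MZI's schema or queries, but the sketch below gives a flavor of the population-level analytics he describes, flagging members on many concurrent medications from several prescribers, the kind of cross-provider view his aunt's story highlights. The tables, columns, connection details, and thresholds are all hypothetical; the open-source vertica_python client is used here only as one way to run SQL against a Vertica cluster.

```python
# Sketch: flag polypharmacy risk (many active drugs, multiple prescribers)
# across a member population, using an invented claims/prescriptions schema.
import vertica_python

conn_info = {
    "host": "vertica.example.com",   # hypothetical cluster endpoint
    "port": 5433,
    "user": "analyst",
    "password": "secret",
    "database": "healthcare",
}

SQL = """
SELECT m.member_id,
       COUNT(DISTINCT p.drug_code)     AS active_medications,
       COUNT(DISTINCT p.prescriber_id) AS prescribers
FROM   members m
JOIN   prescriptions p ON p.member_id = m.member_id
WHERE  p.fill_date >= ADD_MONTHS(CURRENT_DATE, -6)
GROUP  BY m.member_id
HAVING COUNT(DISTINCT p.drug_code) >= 5
   AND COUNT(DISTINCT p.prescriber_id) >= 3
ORDER  BY active_medications DESC
LIMIT  100;
"""

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.execute(SQL)
    for member_id, meds, prescribers in cur.fetchall():
        print(member_id, meds, prescribers)
```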

Gardner: Let me also understand the requirement for concurrency. If you have this hosted on Amazon Web Services, you're opening it up to many different organizations and many different queriers. Is there an issue with the volume of queries happening simultaneously, or concurrency? Is that something you've been able to work through?

Gootee: That's another value-add we get. The ability to expand and scale the Vertica system, along with the scalability we get with Amazon, allows us to deliver that information. No matter what type of queries we're getting, we can expand automatically. We can grow with that need, and it really makes a large difference in how competitive we can be in the marketplace.

Gardner: I suppose another dynamic to this on the economic side is the predictability of your cost.

Gootee: If you look at the traditional ways we've delivered software or content before, you always over-buy, because you don't know what demand is going to be. Then, at some point, you don't have enough resources to deliver. Cloud services take some of that unknown away. They let you scale up as you need it and scale back when you don't.

So it's the flexibility for us. We're not a large company, and what's exciting about this is that these technologies help us do the same thing that the big guys do. It really lets our small company compete in a larger marketplace.

Gardner: Going to the population health equation and the types of data and information, is this something that's of interest to you? How important is this ability to get at all the information in all the different formats as you move forward?

Gootee: That's very critical for us. The way we interact in America and around the world has changed a lot. The HP HAVEn platform provides us with opportunities to improve on what we have, given healthcare's big security concerns and the issue of data mobility. Getting data wherever it's needed is critical to us, as is better understanding how that data is changing.

We've heard from a lot of companies here that are really driving that user experience. More and more companies are going to be competing on how they can deliver things to users in the way they like. That's critical to us, and that [HP] platform really gives us the ability to do that.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.
